WO2023005115A1 - Image processing method, image processing apparatus, electronic device, and readable storage medium - Google Patents

Info

Publication number: WO2023005115A1
Authority: WO (WIPO, PCT)
Prior art keywords: image, initial, neural network, network model, parameters
Application number: PCT/CN2021/139362
Other languages: English (en), French (fr)
Inventor: 刘永劼
Application filed by: 爱芯元智半导体(上海)有限公司


Classifications

    • G06T 3/4015 Image demosaicing, e.g. colour filter arrays [CFA] or Bayer patterns
    • G06N 3/045 Neural networks: combinations of networks
    • G06T 3/4046 Scaling of whole images or parts thereof using neural networks
    • G06T 5/73 Deblurring; Sharpening
    • G06T 2207/10024 Color image
    • G06T 2207/20084 Artificial neural networks [ANN]

Definitions

  • The present application relates to the field of information processing, and in particular to an image processing method, an image processing apparatus, an electronic device, and a readable storage medium.
  • Image Signal Processing (ISP) is mainly used to process the output signal of a front-end image sensor so as to adapt to the image sensors of different manufacturers.
  • An NPU (embedded neural network processor) adopts a "data-driven parallel computing" architecture and is particularly good at processing massive multimedia data such as video and images. It is designed for Internet of Things artificial intelligence, is used to accelerate neural network operations, and addresses the inefficiency of traditional chips at neural network computation.
  • However, using an NPU to process image signals incurs high energy consumption and high cost.
  • Embodiments of the present application provide an image processing method, an image processing device, an electronic device, and a readable storage medium, so as to reduce the dependence on the NPU in the image processing process, thereby reducing processing energy consumption and processing cost.
  • An embodiment of the present application provides an image processing method. The method may include: acquiring an image to be processed; performing demosaic processing on the image to be processed using preset processing rules to obtain a demosaiced initial image; and correcting the initial image using correction parameters output by a neural network processor for the image to be processed, to obtain a target image. In this way, processing energy consumption and processing costs can be reduced.
  • In some embodiments, the correction parameters may include a first correction parameter and a second correction parameter, and correcting the initial image using the correction parameters output by the neural network processor for the image to be processed may include: correcting the demosaicing parameters of the initial image using the first correction parameter to obtain a corrected intermediate image, and correcting the intermediate image using the second correction parameter to obtain the target image. This reduces cost and energy consumption.
  • In some embodiments, the first correction parameter may include an interpolation weight parameter and the demosaicing parameter may include an interpolation parameter. Correcting the demosaicing parameters of the initial image using the first correction parameter to obtain the corrected intermediate image may then include: modifying the interpolation parameters using the interpolation weight parameters to obtain the interpolation results corresponding to each pixel of the initial image, and performing interpolation processing on each pixel according to those results to obtain the intermediate image. In this way, the interpolation direction does not have to be determined manually, which reduces labor cost.
  • In some embodiments, the neural network processor may include a first neural network model, and the interpolation weight parameter may be obtained by the first neural network model based on the image to be processed. Training the first neural network model may include: acquiring a first sample image set, where the first sample image set includes at least one first sample image and the first sample image includes a RAW image; inputting the first sample image into a first initial neural network model and obtaining the initial interpolation weight parameters corresponding to each pixel of the first sample image output by the first initial neural network model; using preset interpolation rules to obtain the initial interpolation parameters corresponding to the respective pixels; determining, from the initial interpolation weight parameters and the initial interpolation parameters, the initial interpolation results corresponding to the respective pixels, and using the initial interpolation results to perform interpolation processing on the first sample image to obtain a first initial target image; using a first preset loss function to determine a first loss function value between the first initial target image and the expected image of the first sample image; and using the first loss function value to adjust the model parameters of the first initial neural network model so that the first initial neural network model converges to obtain the first neural network model.
  • In some embodiments, the neural network processor may include a second neural network model, and the residual correction parameter may be obtained by the second neural network model based on the image to be processed. Training the second neural network model may include: obtaining a second sample image set, where the second sample image set includes at least one second sample image and the second sample image includes a RAW image; inputting the second sample image into a second initial neural network model and obtaining the initial residual correction parameters output by the second initial neural network model; using preset processing rules to obtain the demosaiced image corresponding to the second sample image; using the initial residual correction parameters to correct the residual between the demosaiced image and the expected initial target image to obtain a second initial target image; using a second preset loss function to determine a second loss function value between the second initial target image and the expected initial target image; and using the second loss function value to adjust the model parameters of the second initial neural network model so that the second initial neural network model converges to obtain the second neural network model. In this way, the second neural network model can be trained to output residual correction parameters.
  • In some embodiments, the step of training the second neural network model may further include: performing tone mapping processing on the second sample image to reduce its bit width. The computation required for the image to be processed can then likewise be reduced, lowering energy consumption.
  • In some embodiments, the neural network processor may include a third neural network model, and the false color correction parameters may be obtained by the third neural network model based on the image to be processed. Training the third neural network model may include: obtaining a third sample image set, which may include at least one third sample image, the third sample image including a RAW image; inputting the third sample image into a third initial neural network model and obtaining the initial false color correction parameters corresponding to each pixel of the third sample image output by the third initial neural network model; using the preset processing rules to obtain the demosaiced image corresponding to the third sample image; using the initial false color correction parameters to correct the false color area of the demosaiced image to obtain a corrected third initial target image; using a third preset loss function to determine a third loss function value between the third initial target image and the expected image of the third sample image; and using the third loss function value to adjust the model parameters of the third initial neural network model so that it converges to obtain the third neural network model.
  • In some embodiments, the initial false color correction parameter may include an initial false color weight parameter or an initial false color compensation value, where the initial false color weight parameter may be used to characterize the degree of chroma lightening for each pixel, and the initial false color compensation value may be used to compensate the chroma corresponding to each pixel. In this way, suitable false color correction parameters can be selected for specific application scenarios.
  • In some embodiments, the neural network processor may include a fourth neural network model, and the purple fringing correction parameters may be obtained by the fourth neural network model based on the image to be processed. Training the fourth neural network model may include: obtaining a fourth sample image set, where the fourth sample image set includes at least one fourth sample image and the fourth sample image includes a demosaiced image; inputting the fourth sample image into a fourth initial neural network model and obtaining the initial purple fringing correction parameters corresponding to the fourth sample image output by the fourth initial neural network model; using the initial purple fringing correction parameters to correct the purple fringing area of the demosaiced image to obtain a corrected fourth initial target image; using a fourth preset loss function to determine a fourth loss function value between the fourth initial target image and the expected image corresponding to the fourth sample image; and using the fourth loss function value to adjust the model parameters of the fourth initial neural network model so that it converges to obtain the fourth neural network model. In this way, the fourth neural network model can be trained to output purple fringing correction parameters.
  • In some embodiments, the initial purple fringing correction parameter may include an initial purple fringing weight parameter or an initial purple fringing compensation value, where the initial purple fringing weight parameter may be used to characterize the degree of color-spot elimination in the purple fringing area, and the initial purple fringing compensation value may be used to compensate the saturation of each pixel in the purple fringing area. In this way, suitable purple fringing correction parameters can be selected for specific application scenarios.
  • In some embodiments, the neural network processor may include a fifth neural network model, and the sharpening correction parameters may be obtained by the fifth neural network model based on the image to be processed. Training the fifth neural network model may include: obtaining a fifth sample image set, where the fifth sample image set includes at least one fifth sample image and the fifth sample image includes a YUV image; inputting the fifth sample image into a fifth initial neural network model and obtaining the lightness weight parameters corresponding to each pixel of the fifth sample image output by the fifth initial neural network model; using the lightness weight parameters to guide sharpening of the fifth sample image to obtain the sharpened initial lightness values; using a fifth preset loss function to determine a fifth loss function value between the initial lightness values and the expected lightness values of the fifth sample image; and using the fifth loss function value to adjust the model parameters of the fifth initial neural network model so that it converges to obtain the fifth neural network model.
  • In this way, the fifth neural network model can be trained so that it outputs lightness weight parameters.
  • In some embodiments, the neural network processor may include a sixth neural network model, and the sharpening correction parameter may be obtained by the sixth neural network model based on the image to be processed. Training the sixth neural network model may include: obtaining a sixth sample image set, where the sixth sample image set includes at least one sixth sample image and the sixth sample image includes a YUV image; inputting the sixth sample image into a sixth initial neural network model and obtaining at least one target processing area output by the sixth initial neural network model, each target processing area corresponding to a lightness weight parameter; correcting the lightness information of each target processing area using its lightness weight parameter to obtain corrected initial lightness values; using a sixth preset loss function to determine a sixth loss function value between the initial lightness values and the expected lightness values of the target processing area; and using the sixth loss function value to adjust the model parameters of the sixth initial neural network model so that it converges to obtain the sixth neural network model. In this way, the sixth neural network model can be trained to output target processing areas and their lightness weight parameters.
  • Other embodiments of the present application provide an image processing apparatus, which may include: an acquisition module that may be configured to acquire an image to be processed; a demosaic processing module that may be configured to perform demosaic processing on the image to be processed using preset processing rules to obtain a demosaiced initial image; and a correction module that may be configured to correct the initial image using correction parameters output by a neural network processor for the image to be processed to obtain a target image.
  • In some embodiments, the correction parameters include a first correction parameter and a second correction parameter, and the correction module includes a first correction module and a second correction module, where the first correction module is configured to correct the demosaicing parameters of the initial image using the first correction parameter to obtain a corrected intermediate image, and the second correction module may be configured to correct the intermediate image using the second correction parameter to obtain a target image.
  • Still other embodiments of the present application provide an electronic device, which may include a processor and a memory; the memory stores computer-readable instructions that, when executed by the processor, run the image processing method provided by the foregoing embodiments.
  • Still other embodiments of the present application provide a readable storage medium on which a computer program may be stored; when the computer program is executed by a processor, the image processing method provided by the foregoing embodiments is run.
  • FIG. 1 is a flow chart of an image processing method provided in an embodiment of the present application
  • FIG. 2 is a flow chart of another image processing method provided by an embodiment of the present application.
  • FIG. 3 is a structural block diagram of an image processing device provided in an embodiment of the present application.
  • FIG. 4 is a schematic structural diagram of an electronic device for executing an image processing method provided by an embodiment of the present application.
  • The present application provides an image processing method, an image processing apparatus, and an electronic device. The method acquires an image to be processed, performs demosaic processing on it using preset processing rules to obtain a demosaiced initial image, and corrects the initial image using correction parameters output by a neural network processor for the image to be processed to obtain a target image.
  • the NPU can be used to obtain the correction parameters of the image to be processed, so as to correct the initial image processed by the preset processing rules.
  • In other words, the image processing method provided by this application uses the NPU only to assist via its output correction parameters, which reduces the dependence on the NPU in the image processing process and in turn reduces processing energy consumption and processing costs.
  • The above image processing method may be applied to cameras, video cameras, and other image-capturing devices, for processing the image information output by an image sensor.
  • the above image processing method may also be applied to an image processing server to process received image information.
  • FIG. 1 shows a flowchart of an image processing method provided by an embodiment of the present application.
  • the image processing method includes the following steps 101 to 103.
  • Step 101 acquiring an image to be processed
  • the above-mentioned image to be processed may be captured by, for example, a camera, a video camera, or other equipment.
  • The image to be processed may include an image in RAW format, i.e., the raw data obtained when a CMOS (Complementary Metal Oxide Semiconductor) sensor or a CCD (charge coupled device) converts the captured light signal into a digital signal.
  • Step 102 using preset processing rules to perform demosaic processing on the image to be processed to obtain a demosaiced initial image
  • The aforementioned preset processing rules may include, for example, prior-art demosaicing algorithms such as simple interpolation and bilinear interpolation, which interpolate the RAW image to obtain the demosaiced initial image.
  • the image to be processed may be demosaiced using the above preset processing rules to obtain a processed initial image.
  • white balance processing may be performed on the image to be processed so that the color of the image to be processed is not distorted as much as possible.
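  • As an illustration of such a preset processing rule, here is a minimal bilinear demosaicing sketch for an RGGB Bayer RAW image. It assumes NumPy and SciPy, a float input image, and the classic bilinear kernels; it is a sketch of one possible rule, not necessarily the exact rule used in the patent.

    import numpy as np
    from scipy.ndimage import convolve

    def bilinear_demosaic(raw):
        # raw: (H, W) float array sampled with an RGGB Bayer pattern
        h, w = raw.shape
        r = np.zeros((h, w)); r[0::2, 0::2] = 1.0   # R sample sites
        b = np.zeros((h, w)); b[1::2, 1::2] = 1.0   # B sample sites
        g = 1.0 - r - b                             # G sample sites
        k_g = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]], float) / 4.0
        k_rb = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]], float) / 4.0
        # masked convolutions fill in each pixel's missing color channels
        planes = [convolve(raw * m, k, mode='mirror')
                  for m, k in ((r, k_rb), (g, k_g), (b, k_rb))]
        return np.stack(planes, axis=-1)            # (H, W, 3) RGB initial image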
  • Step 103 correcting the initial image using the correction parameters output by the neural network processor for the image to be processed, to obtain a target image.
  • the image to be processed can be input into the neural network processor, so that the neural network processor can output correction parameters for correcting the initial image.
  • the above correction parameters may include, for example, correction parameters for correcting purple fringing, correction parameters for correcting demosaicing results, and the like.
  • The corresponding positions of the initial image can then be corrected according to what each correction parameter corrects, so as to obtain the target image.
  • For example, demosaic correction parameters can be used to correct the demosaiced regions of the initial image, so as to obtain a target image with a better demosaic effect.
  • the neural network processor can be used to output correction parameters to correct the initial image, so as to obtain a target image with better effect.
  • The image processing method provided in this embodiment thus only needs the assistance of the correction parameters output by the NPU, which reduces the dependence on the NPU in the image processing process, thereby reducing processing energy consumption and processing costs.
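  • Putting steps 101 to 103 together, the flow can be sketched as follows, reusing the bilinear_demosaic sketch above; npu_model and the multiplicative application are assumptions for illustration, since the concrete correction types are described in the embodiments below.

    def process_image(raw, npu_model):
        initial = bilinear_demosaic(raw)  # step 102: preset processing rule
        params = npu_model(raw)           # step 103: NPU outputs correction parameters
        return initial * params           # e.g. a residual-style multiplicative correction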
  • the correction parameters include a first correction parameter and a second correction parameter.
  • the above-mentioned first correction parameter and the second correction parameter can correct different image effects.
  • FIG. 2 shows a flowchart of another image processing method provided by an embodiment of the present application.
  • the image processing method includes the following steps 201 to 204 .
  • Step 201 acquiring an image to be processed
  • The implementation process and technical effects of step 201 may be the same as or similar to those of step 101 in the embodiment shown in FIG. 1, and are not repeated here.
  • Step 202 performing demosaic processing on the image to be processed by using preset processing rules to obtain an initial image after demosaicing.
  • The implementation process and technical effects of step 202 may be the same as or similar to those of step 102 in the embodiment shown in FIG. 1, and are not repeated here.
  • Step 203 using the first correction parameter to correct the demosaicing parameters of the initial image to obtain a corrected intermediate image
  • The demosaicing parameters of the initial image may be modified using the first correction parameter.
  • That is, the content corrected by the first correction parameter is the demosaic effect of the initial image.
  • The intermediate image is obtained after the initial image is corrected by the first correction parameter. In this way, there is no need to participate manually in determining the interpolation result, which reduces labor cost.
  • Step 204 using the second correction parameter to correct the intermediate image to obtain a target image.
  • the above-mentioned second correction parameter can further optimize the intermediate image obtained after demosaicing to obtain the target image.
  • the above-mentioned second correction parameters may include, for example, purple fringing correction parameters, false color correction parameters, and the like.
  • This embodiment highlights the steps of using the first correction parameter and the second correction parameter to correct the initial image and the intermediate image respectively, so that the obtained target image differs little from an image output directly by a neural network processor, but at reduced cost and energy consumption.
  • In some embodiments, the first correction parameter includes an interpolation weight parameter and the demosaicing parameter includes an interpolation parameter. Step 203 may then include the following sub-steps.
  • Sub-step 2031 using the interpolation weight parameters to modify the interpolation parameters to obtain interpolation results corresponding to each pixel of the initial image
  • the above interpolation weight parameters may represent the weights of the target pixel points to be interpolated in different interpolation directions.
  • Different interpolation directions correspond to the pixels adjacent to the target pixel; that is, the interpolation weight parameter may represent the weights with which the missing color value of the target pixel takes the color values of neighboring pixels.
  • The above interpolation parameters may represent the candidate values, obtained using the preset processing rules, that the target pixel's missing color channel takes from adjacent pixels.
  • the interpolation parameter can be corrected by using the interpolation weight parameter.
  • the interpolation weight parameter and the interpolation parameter may be weighted and summed to obtain the final interpolation result of the target pixel.
  • Sub-step 2032 perform interpolation processing on each pixel according to the interpolation results corresponding to each pixel to obtain the intermediate image.
  • After the interpolation result of each pixel is obtained, it can be used to perform interpolation processing on each pixel of the initial image, completing the three-channel color values of each pixel and thereby producing the intermediate image, as sketched below.
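  • Sub-steps 2031 and 2032 amount to a per-pixel weighted sum over interpolation directions. The array shapes and the normalization of the weights below are assumptions:

    import numpy as np

    def correct_interpolation(candidates, weights):
        # candidates: (H, W, D) directional interpolation parameters (preset rule)
        # weights:    (H, W, D) interpolation weight parameters (from the NPU)
        weights = weights / np.clip(weights.sum(-1, keepdims=True), 1e-8, None)
        return (candidates * weights).sum(axis=-1)  # (H, W) final interpolation results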
  • In some embodiments, the neural network processor includes a first neural network model, and the interpolation weight parameters are obtained by the first neural network model based on the image to be processed. The first neural network model is trained with the following steps:
  • Step A1 acquiring a first sample image set; the first sample image set includes at least one first sample image, and the first sample image includes a RAW image;
  • The aforementioned RAW image is the raw image data obtained when a CMOS or CCD sensor converts the captured light signal into a digital signal, and is commonly produced by Bayer-array sensors.
  • Demosaicing interpolates an image in the RAW domain into an image in the RGB domain; that is, the missing color values of each pixel of the RAW image are interpolated to obtain an RGB image with complete three-channel color values.
  • a plurality of RAW images may be obtained and organized into the above-mentioned first sample image set.
  • Step A2 inputting the first sample image into a first initial neural network model, and obtaining initial interpolation weight parameters corresponding to each pixel of the first sample image output by the first initial neural network model;
  • The initial interpolation weight parameter here represents the weights with which the missing color value of a pixel takes the color values of its adjacent pixels.
  • Step A3 using preset interpolation rules to obtain initial interpolation parameters corresponding to the respective pixel points;
  • an existing demosaicing algorithm in the related art may be used to obtain initial interpolation parameters corresponding to each pixel.
  • Step A4 according to the initial interpolation weight parameters and the initial interpolation parameters, determine the initial interpolation results corresponding to the respective pixels, and use the initial interpolation results to perform interpolation processing on the first sample image to obtain a first initial target image;
  • the initial interpolation result may be determined according to the initial interpolation weight parameter and the initial interpolation parameter corresponding to the first sample image. Specifically, for example, initial interpolation weight parameters corresponding to the first sample image and initial interpolation parameters may be weighted and summed to obtain initial interpolation results corresponding to respective pixel points. Then, after performing interpolation processing on the first sample image by using the initial interpolation result, a first initial target image corresponding to the first sample image, that is, an initial RGB image corresponding to the first sample image can be obtained.
  • Step A5 using a first preset loss function to determine a first loss function value between the first initial target image and the expected image of the first sample image;
  • the aforementioned expected image may include, for example, an RGB image obtained directly after processing the first sample image by using a neural network processor in the related art.
  • The aforementioned first preset loss function may include, for example, a mean square error loss function (also called an L2 loss function) or a mean absolute error loss function (also called an L1 loss function).
  • Step A6 using the first loss function value to adjust the model parameters corresponding to the first initial neural network model, so as to make the first initial neural network model converge, and obtain the first neural network model.
  • the first loss function value can be used to adjust the model parameters of the first initial neural network model, so as to make the first initial neural network model converge.
  • Each first sample image in the above first sample image set can go through the operations of steps A1 to A6, so that the first initial neural network model converges and the first neural network model is obtained.
  • the first neural network model can output interpolation weight parameters for the image to be processed.
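  • A hedged PyTorch sketch of steps A1 to A6 follows. directional_candidates and fill_missing_channels are hypothetical stand-ins for the preset interpolation rules and the interpolation step, the tensor shapes are assumed to be (N, D, H, W), and the L1 loss is one of the two options named above:

    import torch
    import torch.nn as nn

    def train_first_model(model, loader, epochs=10):
        opt = torch.optim.Adam(model.parameters(), lr=1e-4)
        loss_fn = nn.L1Loss()                             # first preset loss function
        for _ in range(epochs):
            for raw, expected in loader:                  # A1: RAW sample + expected image
                w = model(raw)                            # A2: initial interpolation weights
                cand = directional_candidates(raw)        # A3: initial interpolation params
                interp = (w * cand).sum(dim=1)            # A4: initial interpolation results
                rgb = fill_missing_channels(raw, interp)  # A4: first initial target image
                loss = loss_fn(rgb, expected)             # A5: first loss function value
                opt.zero_grad(); loss.backward(); opt.step()  # A6: adjust model parameters
        return model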
  • In some embodiments, the second correction parameter includes a residual correction parameter, and step 204 may include: correcting the intermediate image using the residual correction parameter, so as to eliminate the residual between the intermediate image and the expected target image and obtain the target image.
  • the above-mentioned expected target image may include, for example, an RGB image obtained directly after processing the image to be processed by directly using a neural network processor in the related art.
  • the RGB image can be regarded as a target image in an ideal state.
  • In this way, the computational requirements on the preset processing rules and the quality requirements on the demosaic effect of the initial image can be relaxed, which also reduces cost to a certain extent.
  • Specifically, the obtained residual correction parameter can be multiplied by the color value of each pixel of the RGB image obtained using the preset processing rules, yielding the target image.
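  • A sketch of that multiplicative application, assuming the residual correction parameters broadcast over an (H, W, 3) image:

    def apply_residual_correction(intermediate_rgb, residual_params):
        # element-wise multiply, per the description above
        return intermediate_rgb * residual_params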
  • In some embodiments, the neural network processor includes a second neural network model, and the residual correction parameters are obtained by the second neural network model based on the image to be processed. The second neural network model is trained with the following steps:
  • Step B1 acquiring a second sample image set; the second sample image set includes at least one second sample image, and the second sample image includes a RAW image;
  • The implementation process and technical effects of step B1 may be the same as or similar to those of step A1, and are not repeated here.
  • Step B2 inputting the second sample image into a second initial neural network model, and obtaining initial residual correction parameters output by the second initial neural network model;
  • One of the second sample images can be selected and input into the second initial neural network model, which is trained toward outputting the initial residual correction parameters corresponding to that second sample image.
  • Step B3 using the preset processing rules to obtain the demosaiced image corresponding to the second sample image;
  • an existing demosaicing algorithm in related technologies can be used to obtain interpolation parameters corresponding to each pixel, and then a corresponding demosaiced image can be obtained by using the interpolation parameters.
  • Step B4 using the initial residual correction parameters to correct the residual between the demosaiced image and the expected initial target image to obtain a second initial target image;
  • the initial residual correction parameter corresponding to the second sample image may be multiplied by the color value of each pixel of the demosaiced image to obtain the second initial target image.
  • Step B5 using a second preset loss function to determine a second loss function value between the second initial target image and the expected initial target image;
  • Step B6 using the second loss function value to adjust the model parameters corresponding to the second initial neural network model, so as to make the second initial neural network model converge, and obtain the second neural network model.
  • Each second sample image in the above second sample image set can also go through the operations of steps B1 to B6, so that the second initial neural network model converges and the second neural network model is obtained.
  • the second neural network model can output residual correction parameters for the image to be processed.
  • In some embodiments, the step of training the second neural network model further includes: performing tone mapping processing on the second sample image to reduce the bit width of the second sample image.
  • tone mapping may be performed on each second sample image in the second sample image set, so as to convert the high-bit-width second sample image into a low-bit-width second sample image.
  • the sample images used when training the second neural network model are all low-bit-width image data, which can reduce the amount of calculation in the training process.
  • Similarly, an image to be processed with a high bit width may also be subjected to tone mapping processing, so as to reduce the computation required for the image to be processed and reduce energy consumption.
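  • A minimal tone mapping sketch that reduces a 12-bit frame to 8 bits; the gamma curve here is just one possible mapping, chosen for illustration:

    import numpy as np

    def tone_map_12_to_8(raw12, gamma=1.0 / 2.2):
        x = raw12.astype(np.float32) / 4095.0             # normalize 12-bit values
        return np.clip(np.power(x, gamma) * 255.0, 0, 255).astype(np.uint8)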
  • In some embodiments, the second correction parameter includes a false color correction parameter, and step 204 may include: correcting the false color area of the intermediate image using the false color correction parameter to obtain the target image.
  • The above false color area is also called a pseudo-color area.
  • The color of each pixel of a pseudo-color image is not determined directly by the values of its basic color components; instead, the pixel value is treated as an entry address into a palette or color lookup table, from which the actual R, G, B intensity values are read. If a color in the image does not exist in the palette or color lookup table, the palette matches it with the closest color. The color produced by the looked-up R, G, B intensity values is not the true color of the image itself, and is therefore called false color.
  • the false color area can be corrected. Specifically, the false color area can be corrected by using the false color correction parameter to obtain the target image.
  • In some embodiments, the neural network processor includes a third neural network model, and the false color correction parameters are obtained by the third neural network model based on the image to be processed. The third neural network model is trained with the following steps:
  • Step C1 acquiring a third sample image set; the third sample image set includes at least one third sample image, and the third sample image includes a RAW image;
  • Step C2 inputting the third sample image into a third initial neural network model, and obtaining initial pseudo-color correction parameters corresponding to each pixel of the third sample image output by the third initial neural network model;
  • the initial pseudo-color correction parameters may include correction parameters for luma information and chrominance signals.
  • Step C3 using the preset processing rules to obtain the demosaiced image corresponding to the third sample image
  • Step C4 using the initial pseudo-color correction parameters to correct the pseudo-color area corresponding to the demosaiced image to obtain a corrected third initial target image;
  • the false color areas of the demosaiced image can be processed by using the initial false color correction parameters.
  • the demosaiced image is generally in the RGB domain, so the demosaiced image can be converted from the RGB domain to the YUV domain, so as to correct the converted demosaiced image by using the pseudo-color correction parameters.
  • the conversion process of the image between the RGB domain and the YUV domain is well known to those skilled in the art and will not be repeated here.
  • The YUV domain represents the luminance and chrominance of an image: "Y" is the luminance, i.e., the lightness value, while "U" and "V" are the chroma, describing the color and saturation of the image and specifying the color of a pixel. The initial correction parameters can therefore correct the chrominance information U and V.
  • In some embodiments, the initial false color correction parameters include initial false color weight parameters or initial false color compensation values, where the initial false color weight parameters are used to characterize the degree of chroma lightening for each pixel, and the initial false color compensation value is used to compensate the chroma corresponding to each pixel.
  • When the initial pseudo-color correction parameter is an initial pseudo-color weight parameter, it may be used to perform chroma lightening on each pixel of the demosaiced image.
  • Specifically, the initial pseudo-color weight parameter can be multiplied by the chroma of each pixel to obtain an image with lightened chroma. It should be noted that the value of the initial pseudo-color weight parameter should lie in (0, 1), each value representing a piece of weight information.
  • When the initial false color correction parameter is an initial false color compensation value, it may be used to compensate the chroma of each pixel of the demosaiced image.
  • the obtained initial pseudo-color compensation value can be added to the chromaticity of the pixel to obtain a compensated image.
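  • Both correction modes can be sketched in the YUV domain as below; treating 128 as the neutral chroma level for 8-bit U/V planes is an assumption:

    import numpy as np

    def correct_false_color(u, v, weight=None, compensation=None):
        # weight in (0, 1): lighten chroma by scaling U/V toward neutral
        if weight is not None:
            u = 128.0 + (u - 128.0) * weight
            v = 128.0 + (v - 128.0) * weight
        # compensation value: added to the chroma of each pixel
        if compensation is not None:
            u = u + compensation
            v = v + compensation
        return u, v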
  • Step C5 using a third preset loss function to determine a third loss function value between the third initial target image and the expected image of the third sample image;
  • The aforementioned third preset loss function may include, for example, a mean square error loss function (also called an L2 loss function) or a mean absolute error loss function (also called an L1 loss function).
  • the above-mentioned expected image may include, for example, an RGB image obtained directly after processing the third sample image by using a neural network processor in the related art.
  • Therefore, the third initial target image in the YUV domain needs to be converted back to the RGB domain before the above third preset loss function is used to compute the loss.
  • Step C6 using the third loss function value to adjust the model parameters corresponding to the third initial neural network model, so as to make the third initial neural network model converge, and obtain the third neural network model.
  • The implementation process and technical effects of step C6 may be similar to those of step A6, and are not repeated here.
  • Each third sample image in the above third sample image set can also go through the operations of steps C1 to C6, so that the third initial neural network model converges and the third neural network model is obtained.
  • the third neural network model can output the false color correction parameters for the image to be processed.
  • In some embodiments, the second correction parameters include purple fringing correction parameters, and correcting the intermediate image using the second correction parameters to obtain the target image includes: correcting the purple fringing area of the intermediate image using the purple fringing correction parameters to obtain the target image.
  • Purple fringing refers to the color spots that appear at the junction of highlight and lowlight areas when a digital camera shoots a high-contrast scene. The purple fringing area present in the intermediate image can therefore also be corrected using the purple fringing correction parameters to obtain the target image.
  • In some embodiments, the neural network processor includes a fourth neural network model, and the purple fringing correction parameters are obtained by the fourth neural network model based on the image to be processed. The fourth neural network model is trained with the following steps:
  • Step D1 acquiring a fourth sample image set; the fourth sample image set includes at least one fourth sample image, and the fourth sample image includes a demosaiced image;
  • The demosaiced image can be obtained by, for example, a demosaicing algorithm; for the specific implementation, refer to the demosaicing process described above, which is not repeated here.
  • the above fourth sample image set can be obtained.
  • Step D2 inputting the fourth sample image into the fourth initial neural network model, and obtaining initial purple fringing correction parameters corresponding to the fourth sample image output by the fourth initial neural network model;
  • The implementation process and technical effects of step D2 may be similar to those of step C2, and are not repeated here.
  • Before the fourth sample image is input into the fourth initial neural network model, it may be converted from the RGB domain to the YUV domain, so that the initial purple fringing correction parameters can correct the chroma.
  • In some embodiments, the initial purple fringing correction parameter includes an initial purple fringing weight parameter or an initial purple fringing compensation value, where the initial purple fringing weight parameter is used to characterize the degree of color-spot elimination in the purple fringing area, and the initial purple fringing compensation value is used to compensate the saturation of each pixel in the purple fringing area.
  • When the initial purple fringing correction parameter is the initial purple fringing weight parameter, it can be used to desaturate each pixel in the purple fringing area of the demosaiced image.
  • Specifically, the initial purple fringing weight parameter can be multiplied by the saturation value of each pixel to obtain a desaturated image. It should be noted that the value of the initial purple fringing weight parameter should lie in (0, 1), each value corresponding to a piece of weight information.
  • the initial purple fringing compensation value may be used to compensate the saturation value of each pixel in the purple fringing area in the demosaiced image.
  • the obtained initial purple fringing compensation value may be added to the corresponding saturation value of the pixel to obtain a compensated image.
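  • A sketch covering both purple fringing modes over a detected fringe mask; taking saturation as the chroma magnitude around the neutral level 128 in YUV is an assumption:

    import numpy as np

    def correct_purple_fringe(u, v, mask, weight=None, compensation=None):
        sat = np.hypot(u - 128.0, v - 128.0)              # saturation per pixel
        new_sat = sat.copy()
        if weight is not None:                            # color-spot elimination
            new_sat[mask] = sat[mask] * weight
        if compensation is not None:                      # saturation compensation
            new_sat[mask] = sat[mask] + compensation
        scale = np.where(sat > 1e-6, new_sat / np.maximum(sat, 1e-6), 1.0)
        return 128.0 + (u - 128.0) * scale, 128.0 + (v - 128.0) * scale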
  • Step D3 using the initial purple fringing correction parameters to correct the purple fringing area corresponding to the demosaiced image to obtain a corrected fourth initial target image;
  • corresponding processing may be performed according to the initial purple fringing correction parameter, specifically the initial purple fringing weight parameter or the initial purple fringing compensation value, to obtain a corrected fourth initial target image.
  • Step D4 using a fourth preset loss function to determine a fourth loss function value between the fourth initial target image and the expected image corresponding to the fourth sample image;
  • The implementation process and technical effects of step D4 may be similar to those of step C5, and are not repeated here.
  • Step D5 using the fourth loss function value to adjust the model parameters corresponding to the fourth initial neural network model, so as to make the fourth initial neural network model converge, and obtain the fourth neural network model.
  • The implementation process and technical effects of step D5 may be similar to those of step C6, and are not repeated here.
  • Each fourth sample image in the fourth sample image set can go through the above steps D1 to D5, so that the fourth initial neural network model converges and the fourth neural network model is obtained.
  • the fourth neural network model can output purple fringing correction parameters for the image to be processed.
  • In some embodiments, the second correction parameter includes a sharpening correction parameter, and step 204 may include: correcting the blurred area of the intermediate image using the sharpening correction parameter to obtain the target image; the sharpening correction parameters include lightness weight parameters for lightness correction.
  • Part or all of the intermediate image may be blurred after being processed by the preset processing rules; the blurred area of the intermediate image can therefore be corrected using the sharpening correction parameters to obtain the target image.
  • the sharpening operation can focus on blurred edges, improve the clarity or focus of a certain part of the image, and make the color of a specific area of the intermediate image more vivid.
  • The sharpening correction parameters include parameters for correcting lightness, i.e., for correcting the "Y" information of a YUV-domain image. The sharpening correction parameters may also include parameters for correcting chroma, i.e., the "U" and "V" information of a YUV-domain image.
  • In some embodiments, the neural network processor includes a fifth neural network model, and the sharpening correction parameter is obtained by the fifth neural network model based on the image to be processed. The fifth neural network model is trained with the following steps:
  • Step E1 acquiring a fifth sample image set; the fifth sample image set includes at least one fifth sample image, and the fifth sample image includes a YUV image;
  • A YUV image likewise represents an image by luminance and chrominance: "Y" is the luminance, i.e., the lightness value, while "U" and "V" are the chroma, describing the color and saturation of the image and specifying the color of a pixel.
  • multiple YUV images may be selected and sorted into the above-mentioned fifth sample image set.
  • Step E2 inputting the fifth sample image into the fifth initial neural network model, and obtaining lightness weight parameters corresponding to each pixel of the fifth sample image output by the fifth initial neural network model;
  • The input of the fifth initial neural network model may be a YUV image, and the model is trained toward outputting lightness weight parameters.
  • the value corresponding to the above lightness weight parameter may be between (0, 1), and each value may represent a sharpening intensity.
  • Step E3 using the lightness weight parameter to guide the fifth sample image to be sharpened to obtain an initial lightness value after sharpening.
  • The lightness weight parameters may be used to guide the sharpening of the fifth sample image. For example, if the lightness weight parameter of pixel A is 0.5 and that of pixel B is 0.2, then when the fifth sample image is sharpened, the lightness of pixels A and B can be multiplied by their corresponding lightness weight parameters to obtain the initial lightness value of each pixel.
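  • One common way to realize such weight-guided sharpening is an unsharp mask on the Y channel whose per-pixel strength is the lightness weight (consistent with each value representing a sharpening intensity); the Gaussian radius and gain below are assumptions:

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def sharpen_with_weights(y, weights, gain=2.0):
        # detail layer = original minus blurred; weights in (0, 1) set the
        # sharpening intensity at each pixel, per the description above
        detail = y - gaussian_filter(y, sigma=1.5)
        return y + gain * weights * detail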
  • Step E4 using a fifth preset loss function to determine a fifth loss function value between the initial brightness value and the expected brightness value of the fifth sample image;
  • The above-mentioned fifth preset loss function may include, for example, a mean square error loss function (also called an L2 loss function) or a mean absolute error loss function (also called an L1 loss function).
  • a fifth preset loss function may be used to perform loss calculation on the initial brightness value and the expected brightness value corresponding to the fifth sample image, so as to obtain a fifth loss function value.
  • Step E5 using the fifth loss function value to adjust the model parameters corresponding to the fifth initial neural network model, so as to make the fifth initial neural network model converge, and obtain the fifth neural network model.
  • The implementation process and technical effects of step E5 may be similar to those of step C6, and are not repeated here.
  • Each fifth sample image in the fifth sample image set can go through the above steps E1 to E5, so that the fifth initial neural network model converges and the fifth neural network model is obtained.
  • the fifth neural network model can output brightness weight parameters for the image to be processed.
  • In some embodiments, the neural network processor includes a sixth neural network model, and the sharpening correction parameter is obtained by the sixth neural network model based on the image to be processed. The sixth neural network model is trained with the following steps:
  • Step F1 acquiring a sixth sample image set; the sixth sample image set includes at least one sixth sample image, and the sixth sample image includes a YUV image;
  • The implementation process and technical effects of step F1 may be the same as or similar to those of step E1, and are not repeated here.
  • Step F2 inputting the sixth sample image into the sixth initial neural network model, and obtaining at least one target processing area output by the sixth initial neural network model, the at least one target processing area respectively corresponding to a lightness weight parameter;
  • the above-mentioned target processing area can be regarded as an area requiring special sharpening treatment.
  • For example, the model may output the location of a person's face as the first target processing area; the sharpening intensity at the face can then be increased to improve the definition of the face.
  • the hand may also be determined as the second target processing area, and then the sharpening strength of the hand may be enhanced to improve the definition of the hand.
  • the process of determining the target processing area by the sixth initial neural network model may be the same as or similar to the target detection algorithm in the related art, and will not be repeated here.
  • Each target processing area may correspond to its own lightness weight parameter, so that different target processing areas can have different sharpening strengths.
  • the first target processing area may correspond to a lightness weight parameter of 0.9
  • the second target processing area may correspond to a lightness weight parameter of 0.7.
  • the lightness weight parameter here may be preset.
  • the lightness weight parameter of the first target processing area may be preset as 0.9. In this way, after the first target processing area is output, the pixels of the first target processing area may be associated with the lightness weight parameter 0.9.
  • Step F3 using the lightness weight parameters corresponding to the target processing area to correct the lightness information of the target processing area to obtain a corrected initial lightness value
  • the brightness information of the target processing area associated with it can be corrected by using the brightness weight parameter to obtain the corresponding initial brightness value.
  • the corresponding initial brightness value can be obtained by multiplying the pixel brightness value of the target processing area by the corresponding brightness weight parameter.
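  • A sketch of step F3's region-wise correction; regions is assumed to be a list of (boolean mask, lightness weight) pairs derived from the model's target processing areas:

    import numpy as np

    def correct_regions(y, regions):
        out = y.astype(np.float32).copy()
        for mask, w in regions:
            out[mask] = out[mask] * w   # multiply region lightness by its weight
        return out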
  • Step F4 using a sixth preset loss function to determine a sixth loss function value between the initial brightness value and the expected brightness value of the target processing area;
  • The implementation process and technical effects of step F4 may be similar to those of step E4, and are not repeated here.
  • Step F5 using the sixth loss function value to adjust the model parameters corresponding to the sixth initial neural network model, so as to make the sixth initial neural network model converge, and obtain the sixth neural network model.
  • The implementation process and technical effects of step F5 may be similar to those of step E5, and are not repeated here.
  • Each sixth sample image in the sixth sample image set can go through the above steps F1 to F5, so that the sixth initial neural network model converges and the sixth neural network model is obtained.
  • the sixth neural network model can output the target processing area for the image to be processed.
  • FIG. 3 shows a structural block diagram of an image processing apparatus provided by an embodiment of the present application.
  • the image processing apparatus may be a module, program segment or code on an electronic device. It should be understood that the device corresponds to the above-mentioned method embodiment in FIG. 1 , and can execute various steps involved in the method embodiment in FIG. 1 .
  • the specific functions of the device can refer to the description above. To avoid repetition, detailed descriptions are appropriately omitted here.
  • the above image processing apparatus includes an acquisition module 301 , a demosaic processing module 302 and a correction module 303 .
  • the acquisition module 301 can be configured to acquire the image to be processed;
  • the demosaic processing module 302 can be configured to perform demosaic processing on the image to be processed by using preset processing rules to obtain an initial image after demosaicing
  • the correction module 303 may be configured to use the correction parameters output by the neural network processor for the image to be processed to correct the initial image to obtain a target image.
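  • The three modules of FIG. 3 can be sketched as a single composable class; the internals are placeholders that reuse the earlier sketches, and source.read_raw() is a hypothetical acquisition API:

    class ImageProcessingDevice:
        def __init__(self, npu_model):
            self.npu_model = npu_model
        def acquire(self, source):                 # acquisition module 301
            return source.read_raw()               # hypothetical RAW source API
        def demosaic(self, raw):                   # demosaic processing module 302
            return bilinear_demosaic(raw)          # preset processing rule
        def correct(self, raw, initial):           # correction module 303
            return initial * self.npu_model(raw)   # e.g. residual-style correction
        def process(self, source):
            raw = self.acquire(source)
            return self.correct(raw, self.demosaic(raw))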
  • In some embodiments, the correction parameters include a first correction parameter and a second correction parameter, and the correction module 303 further includes a first correction module and a second correction module, where the first correction module can be configured to correct the demosaicing parameters of the initial image using the first correction parameter to obtain a corrected intermediate image, and the second correction module may be configured to correct the intermediate image using the second correction parameter to obtain a target image.
  • In some embodiments, the first correction parameter includes an interpolation weight parameter and the demosaicing parameter includes an interpolation parameter; the first correction module may be further configured to: modify the interpolation parameters using the interpolation weight parameters to obtain the interpolation results corresponding to each pixel of the initial image, and perform interpolation processing on each pixel according to those interpolation results to obtain the intermediate image.
  • In some embodiments, the neural network processor includes a first neural network model, and the interpolation weight parameters are obtained by the first neural network model based on the image to be processed; the training of the first neural network model includes: obtaining a first sample image set, where the first sample image set includes at least one first sample image and the first sample image includes a RAW image; inputting the first sample image into a first initial neural network model and obtaining the initial interpolation weight parameters corresponding to each pixel of the first sample image output by the first initial neural network model; using preset interpolation rules to obtain the initial interpolation parameters corresponding to each pixel; determining, from the initial interpolation weight parameters and the initial interpolation parameters, the initial interpolation results corresponding to the respective pixels, and using the initial interpolation results to perform interpolation processing on the first sample image to obtain a first initial target image; using a first preset loss function to determine a first loss function value between the first initial target image and the expected image of the first sample image; and using the first loss function value to adjust the model parameters of the first initial neural network model so that it converges to obtain the first neural network model.
  • In some embodiments, the second correction parameter includes a residual correction parameter, and the second correction module may be further configured to: correct the intermediate image using the residual correction parameter, so as to eliminate the residual between the intermediate image and the expected target image and obtain the target image.
  • the neural network processor includes a second neural network model, and the residual correction parameters are obtained by the second neural network model based on the image to be processed; and the training step of the second neural network model includes: obtaining a second sample image set, where the second sample image set includes at least one second sample image and the second sample image includes a RAW image; inputting the second sample image into a second initial neural network model, and obtaining an initial residual correction parameter output by the second initial neural network model; using a preset processing rule to obtain a demosaiced image corresponding to the second sample image; using the initial residual correction parameter to correct the residual between the demosaiced image and an expected initial target image to obtain a second initial target image; using a second preset loss function to determine a second loss function value between the second initial target image and the expected initial target image; and using the second loss function value to adjust the model parameters corresponding to the second initial neural network model so that the second initial neural network model converges, to obtain the second neural network model.
  • after the second sample image set is obtained, the step of training the second neural network model further includes: performing tone mapping processing on the second sample image to reduce the bit width of the second sample image.
  • the second correction parameter includes a false color correction parameter; and the second correction module may be further configured to: use the false color correction parameter to correct a false color area of the intermediate image to obtain the target image.
  • the neural network processor includes a third neural network model, and the false color correction parameters are obtained by the third neural network model based on the image to be processed; and the training step of the third neural network model includes: obtaining a third sample image set, where the third sample image set includes at least one third sample image and the third sample image includes a RAW image; inputting the third sample image into a third initial neural network model, and obtaining initial false color correction parameters, output by the third initial neural network model, corresponding to each pixel of the third sample image; using the preset processing rules to obtain a demosaiced image corresponding to the third sample image; using the initial false color correction parameters to correct the false color area corresponding to the demosaiced image to obtain a corrected third initial target image; using a third preset loss function to determine a third loss function value between the third initial target image and an expected image of the third sample image; and using the third loss function value to adjust the model parameters corresponding to the third initial neural network model so that the third initial neural network model converges, to obtain the third neural network model.
  • the initial false color correction parameter includes an initial false color weight parameter or an initial false color compensation value; the initial false color weight parameter is used to characterize the degree of chroma fading corresponding to each pixel, and the initial false color compensation value is used to compensate the chroma corresponding to each pixel.
  • the second correction parameter includes a purple fringing correction parameter; and the second correction module may be further configured to: use the purple fringing correction parameter to correct the purple-fringed area of the intermediate image to obtain a target image.
  • the neural network processor includes a fourth neural network model, and the purple fringing correction parameters are obtained by the fourth neural network model based on the image to be processed; and the training step of the fourth neural network model includes: obtaining a fourth sample image set, where the fourth sample image set includes at least one fourth sample image and the fourth sample image includes a demosaiced image; inputting the fourth sample image into a fourth initial neural network model, and obtaining an initial purple fringing correction parameter, output by the fourth initial neural network model, corresponding to the fourth sample image; using the initial purple fringing correction parameter to correct the purple-fringed area corresponding to the demosaiced image to obtain a corrected fourth initial target image; using a fourth preset loss function to determine a fourth loss function value between the fourth initial target image and the expected image corresponding to the fourth sample image; and using the fourth loss function value to adjust the model parameters corresponding to the fourth initial neural network model so that the fourth initial neural network model converges, to obtain the fourth neural network model.
  • the initial purple fringing correction parameter includes an initial purple fringing weight parameter or an initial purple fringing compensation value; the initial purple fringing weight parameter is used to characterize the degree to which color blotches in the purple-fringed area are removed, and the initial purple fringing compensation value is used to compensate the saturation of each pixel in the purple-fringed area.
  • the second correction parameter includes a sharpening correction parameter; and the second correction module may be further configured to: use the sharpening correction parameter to correct a blurred area of the intermediate image to obtain a target image;
  • the sharpening correction parameter includes a lightness weight parameter for correcting lightness.
  • the neural network processor includes a fifth neural network model, and the sharpening correction parameters are obtained by the fifth neural network model based on the image to be processed; and the training step of the fifth neural network model includes: obtaining a fifth sample image set, where the fifth sample image set includes at least one fifth sample image and the fifth sample image includes a YUV image; inputting the fifth sample image into a fifth initial neural network model, and obtaining lightness weight parameters, output by the fifth initial neural network model, corresponding to each pixel of the fifth sample image; using the lightness weight parameters to guide sharpening of the fifth sample image to obtain sharpened initial lightness values; using a fifth preset loss function to determine a fifth loss function value between the initial lightness values and the expected lightness values of the fifth sample image; and using the fifth loss function value to adjust the model parameters corresponding to the fifth initial neural network model so that the fifth initial neural network model converges, to obtain the fifth neural network model.
  • the neural network processor includes a sixth neural network model, and the sharpening correction parameters are obtained by the sixth neural network model based on the image to be processed; and the training step of the sixth neural network model includes: obtaining a sixth sample image set, where the sixth sample image set includes at least one sixth sample image and the sixth sample image includes a YUV image; inputting the sixth sample image into a sixth initial neural network model, and obtaining at least one target processing area output by the sixth initial neural network model, each target processing area corresponding to a lightness weight parameter; using the lightness weight parameter corresponding to each target processing area to correct the lightness information of that area to obtain a corrected initial lightness value; using a sixth preset loss function to determine a sixth loss function value between the initial lightness value and the expected lightness value of the target processing area; and using the sixth loss function value to adjust the model parameters corresponding to the sixth initial neural network model so that the sixth initial neural network model converges, to obtain the sixth neural network model.
  • FIG. 4 is a schematic structural diagram of an electronic device for performing an image processing method provided by an embodiment of the present application.
  • the electronic device may include: at least one processor 401, such as a CPU; at least one communication interface 402; at least one memory 403; and at least one communication bus 404.
  • the communication bus 404 is used to provide direct connection and communication between these components.
  • the communication interface 402 of the device in the embodiment of the present application is used for signaling or data communication with other node devices.
  • the memory 403 may be a high-speed RAM, or a non-volatile memory, such as at least one disk memory.
  • the memory 403 may also optionally be at least one storage device located remotely from the aforementioned processor.
  • Computer-readable instructions are stored in the memory 403, and when the computer-readable instructions are executed by the processor 401, the electronic device can execute the above-mentioned method process shown in FIG. 1.
  • FIG. 4 is only illustrative, and the electronic device may also include more or fewer components than those shown in FIG. 4, or have a configuration different from that shown in FIG. 4.
  • Each component shown in FIG. 4 may be implemented by hardware, software or a combination thereof.
  • An embodiment of the present application provides a readable storage medium on which a computer program is stored.
  • When the computer program is executed by a processor, the method process performed by the electronic device in the method embodiment shown in FIG. 1 can be executed.
  • This embodiment discloses a computer program product; the computer program product includes a computer program stored on a non-transitory computer-readable storage medium, and the computer program includes program instructions which, when executed by a computer, enable the computer to execute the methods provided by the above method embodiments.
  • For example, the method may include: acquiring an image to be processed; performing demosaic processing on the image to be processed by using preset processing rules to obtain an initial image after demosaicing; and using the correction parameters output by the neural network processor for the image to be processed to correct the initial image to obtain a target image.
  • the disclosed devices and methods may be implemented in other ways.
  • the device embodiments described above are only illustrative.
  • the division of the units is only a logical function division, and there may be other division manners in actual implementation.
  • multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
  • the mutual coupling, direct coupling, or communication connection shown or discussed may be an indirect coupling or communication connection through some communication interfaces, apparatuses, or units, and may be electrical, mechanical, or in other forms.
  • a unit described as a separate component may or may not be physically separated, and a component displayed as a unit may or may not be a physical unit, that is, it may be located in one place, or may be distributed to multiple network units. Part or all of the units can be selected according to actual needs to achieve the purpose of the solution of this embodiment.
  • each functional module in each embodiment of the present application can be integrated together to form an independent part, or each module can exist independently, or two or more modules can be integrated to form an independent part.
  • the present application provides an image processing method, device, electronic equipment, and readable storage medium.
  • a specific embodiment of the method includes: acquiring an image to be processed; performing demosaic processing on the image to be processed by using preset processing rules to obtain a demosaiced initial image; and using the correction parameters output by the neural network processor for the image to be processed to correct the initial image to obtain a target image.
  • the method reduces the dependence on the NPU in the image processing process, thereby reducing processing energy consumption and processing cost.
  • the image processing method, device, electronic equipment and readable storage medium of the present application are reproducible and can be used in various industrial applications.
  • the image processing method, image processing device, electronic device, and readable storage medium of the present application can be used in any field requiring image processing.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Data Mining & Analysis (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Image Processing (AREA)

Abstract

An image processing method, an image processing apparatus, an electronic device, and a readable storage medium. The method includes: acquiring an image to be processed (101); performing demosaic processing on the image to be processed by using preset processing rules to obtain a demosaiced initial image (102); and correcting the initial image by using correction parameters output by a neural network processor for the image to be processed to obtain a target image (103). The method reduces the dependence on the NPU in the image processing process, thereby reducing processing energy consumption and processing cost.

Description

Image processing method, image processing apparatus, electronic device, and readable storage medium
Cross-Reference to Related Applications
This application claims priority to Chinese Patent Application No. 202110860833.5, titled "Image processing method, apparatus, electronic device and readable storage medium", filed with the China National Intellectual Property Administration on July 28, 2021, the entire contents of which are incorporated herein by reference.
Technical Field
The present application relates to the field of information processing, and in particular to an image processing method, an image processing apparatus, an electronic device, and a readable storage medium.
Background Art
Image signal processing (ISP) is mainly used to process the output signal of a front-end image sensor so as to match image sensors from different manufacturers.
In processing such output signals, an embedded neural-network processing unit (NPU) is required to complete all or part of the processing tasks. The NPU adopts a "data-driven parallel computing" architecture and is particularly good at processing massive multimedia data such as video and images. It is designed specifically for IoT artificial intelligence, is used to accelerate neural network operations, and solves the inefficiency of traditional chips in neural network computation. However, processing image signals with an NPU consumes considerable energy and is costly.
Summary of the Invention
Embodiments of the present application provide an image processing method, an image processing apparatus, an electronic device, and a readable storage medium, so as to reduce the dependence on the NPU during image processing and thereby reduce processing energy consumption and processing cost.
An embodiment of the present application provides an image processing method, which may include: acquiring an image to be processed; performing demosaic processing on the image to be processed by using preset processing rules to obtain a demosaiced initial image; and correcting the initial image by using correction parameters output by a neural network processor for the image to be processed to obtain a target image. In this way, processing energy consumption and processing cost can be reduced.
Optionally, the correction parameters may include a first correction parameter and a second correction parameter; and correcting the initial image by using the correction parameters output by the neural network processor for the image to be processed to obtain the target image may include: correcting demosaicing parameters of the initial image by using the first correction parameter to obtain a corrected intermediate image; and correcting the intermediate image by using the second correction parameter to obtain the target image. This reduces cost and energy consumption.
Optionally, the first correction parameter may include an interpolation weight parameter, and the demosaicing parameters may include an interpolation parameter; and correcting the demosaicing parameters of the initial image by using the first correction parameter to obtain the corrected intermediate image may include: correcting the interpolation parameter by using the interpolation weight parameter to obtain an interpolation result corresponding to each pixel of the initial image; and performing interpolation processing on each pixel according to its interpolation result to obtain the intermediate image. In this way, no manual participation is needed in determining the interpolation direction, which reduces labor cost.
Optionally, the neural network processor may include a first neural network model, and the interpolation weight parameter may be obtained by the first neural network model based on the image to be processed; and training the first neural network model may include: acquiring a first sample image set, the first sample image set including at least one first sample image, the first sample image including a RAW image; inputting the first sample image into a first initial neural network model, and obtaining initial interpolation weight parameters, output by the first initial neural network model, corresponding to each pixel of the first sample image; obtaining initial interpolation parameters corresponding to each pixel by using preset interpolation rules; determining an initial interpolation result for each pixel according to the initial interpolation weight parameters and the initial interpolation parameters, and performing interpolation processing on the first sample image by using the initial interpolation results to obtain a first initial target image; determining, by using a first preset loss function, a first loss function value between the first initial target image and an expected image of the first sample image; and adjusting the model parameters corresponding to the first initial neural network model by using the first loss function value so that the first initial neural network model converges, to obtain the first neural network model. In this way, the first neural network model can be trained to output interpolation weight parameters.
Optionally, the second correction parameter may include a residual correction parameter; and correcting the intermediate image by using the second correction parameter to obtain the target image may include: correcting the intermediate image by using the residual correction parameter to eliminate the residual between the intermediate image and an expected target image, to obtain the target image.
Optionally, the neural network processor may include a second neural network model, and the residual correction parameter may be obtained by the second neural network model based on the image to be processed; and training the second neural network model may include: acquiring a second sample image set, the second sample image set including at least one second sample image, the second sample image including a RAW image; inputting the second sample image into a second initial neural network model, and obtaining an initial residual correction parameter output by the second initial neural network model; obtaining a demosaiced image corresponding to the second sample image by using preset processing rules; correcting, by using the initial residual correction parameter, the residual between the demosaiced image and an expected initial target image to obtain a second initial target image; determining, by using a second preset loss function, a second loss function value between the second initial target image and the expected initial target image; and adjusting the model parameters corresponding to the second initial neural network model by using the second loss function value so that the second initial neural network model converges, to obtain the second neural network model. In this way, the second neural network model can be trained to output residual correction parameters.
Optionally, after the second sample image set is acquired, training the second neural network model may further include: performing tone mapping on the second sample image to reduce the bit width of the second sample image. This in turn reduces the amount of computation for the image to be processed and reduces energy consumption.
Optionally, the second correction parameter may include a false-color correction parameter; and correcting the intermediate image by using the second correction parameter to obtain the target image may include: correcting a false-color area of the intermediate image by using the false-color correction parameter to obtain the target image.
Optionally, the neural network processor may include a third neural network model, and the false-color correction parameter may be obtained by the third neural network model based on the image to be processed; and training the third neural network model may include: acquiring a third sample image set, the third sample image set including at least one third sample image, the third sample image including a RAW image; inputting the third sample image into a third initial neural network model, and obtaining initial false-color correction parameters, output by the third initial neural network model, corresponding to each pixel of the third sample image; obtaining a demosaiced image corresponding to the third sample image by using the preset processing rules; correcting the false-color area corresponding to the demosaiced image by using the initial false-color correction parameters to obtain a corrected third initial target image; determining, by using a third preset loss function, a third loss function value between the third initial target image and an expected image of the third sample image; and adjusting the model parameters corresponding to the third initial neural network model by using the third loss function value so that the third initial neural network model converges, to obtain the third neural network model. In this way, the third neural network model can be trained to output false-color correction parameters.
Optionally, the initial false-color correction parameter may include an initial false-color weight parameter or an initial false-color compensation value; the initial false-color weight parameter may be used to characterize the degree of chroma fading for each pixel, and the initial false-color compensation value may be used to compensate the chroma of each pixel. In this way, a suitable false-color correction parameter can be selected for the specific application scenario.
Optionally, the second correction parameter may include a purple-fringing correction parameter; and correcting the intermediate image by using the second correction parameter to obtain the target image may include: correcting a purple-fringed area of the intermediate image by using the purple-fringing correction parameter to obtain the target image.
Optionally, the neural network processor may include a fourth neural network model, and the purple-fringing correction parameter may be obtained by the fourth neural network model based on the image to be processed; and training the fourth neural network model may include: acquiring a fourth sample image set, the fourth sample image set including at least one fourth sample image, the fourth sample image including a demosaiced image; inputting the fourth sample image into a fourth initial neural network model, and obtaining an initial purple-fringing correction parameter, output by the fourth initial neural network model, corresponding to the fourth sample image; correcting the purple-fringed area corresponding to the demosaiced image by using the initial purple-fringing correction parameter to obtain a corrected fourth initial target image; determining, by using a fourth preset loss function, a fourth loss function value between the fourth initial target image and an expected image corresponding to the fourth sample image; and adjusting the model parameters corresponding to the fourth initial neural network model by using the fourth loss function value so that the fourth initial neural network model converges, to obtain the fourth neural network model. In this way, the fourth neural network model can be trained to output purple-fringing correction parameters.
Optionally, the initial purple-fringing correction parameter may include an initial purple-fringing weight parameter or an initial purple-fringing compensation value; the initial purple-fringing weight parameter may be used to characterize the degree to which color blotches in the purple-fringed area are removed, and the initial purple-fringing compensation value may be used to compensate the saturation of each pixel in the purple-fringed area. In this way, a suitable purple-fringing correction parameter can be selected for the specific application scenario.
Optionally, the second correction parameter may include a sharpening correction parameter; and correcting the intermediate image by using the second correction parameter to obtain the target image may include: correcting a blurred area of the intermediate image by using the sharpening correction parameter to obtain the target image; the sharpening correction parameter includes a lightness weight parameter for correcting lightness.
Optionally, the neural network processor may include a fifth neural network model, and the sharpening correction parameter is obtained by the fifth neural network model based on the image to be processed; and training the fifth neural network model may include: acquiring a fifth sample image set, the fifth sample image set including at least one fifth sample image, the fifth sample image including a YUV image; inputting the fifth sample image into a fifth initial neural network model, and obtaining lightness weight parameters, output by the fifth initial neural network model, corresponding to each pixel of the fifth sample image; guiding sharpening of the fifth sample image by using the lightness weight parameters to obtain sharpened initial lightness values; determining, by using a fifth preset loss function, a fifth loss function value between the initial lightness values and expected lightness values of the fifth sample image; and adjusting the model parameters corresponding to the fifth initial neural network model by using the fifth loss function value so that the fifth initial neural network model converges, to obtain the fifth neural network model. In this way, the fifth neural network model can be trained to output lightness weight parameters.
Optionally, the neural network processor may include a sixth neural network model, and the sharpening correction parameter may be obtained by the sixth neural network model based on the image to be processed; and training the sixth neural network model may include: acquiring a sixth sample image set, the sixth sample image set including at least one sixth sample image, the sixth sample image including a YUV image; inputting the sixth sample image into a sixth initial neural network model, and obtaining at least one target processing area output by the sixth initial neural network model, each target processing area corresponding to a lightness weight parameter; correcting the lightness information of each target processing area by using its corresponding lightness weight parameter to obtain corrected initial lightness values; determining, by using a sixth preset loss function, a sixth loss function value between the initial lightness values and expected lightness values of the target processing area; and adjusting the model parameters corresponding to the sixth initial neural network model by using the sixth loss function value so that the sixth initial neural network model converges, to obtain the sixth neural network model. In this way, the sixth neural network model can be trained to output at least one target processing area to guide sharpening at different positions of the image to be processed.
Other embodiments of the present application provide an image processing apparatus, which may include: an acquisition module configured to acquire an image to be processed; a demosaic processing module configured to perform demosaic processing on the image to be processed by using preset processing rules to obtain a demosaiced initial image; and a correction module configured to correct the initial image by using correction parameters output by a neural network processor for the image to be processed to obtain a target image.
Optionally, the correction parameters include a first correction parameter and a second correction parameter, and the correction module includes a first correction module and a second correction module, where the first correction module is configured to correct demosaicing parameters of the initial image by using the first correction parameter to obtain a corrected intermediate image, and the second correction module may be configured to correct the intermediate image by using the second correction parameter to obtain the target image.
Still other embodiments of the present application provide an electronic device, which may include a processor and a memory, the memory storing computer-readable instructions that, when executed by the processor, run the image processing method provided by the foregoing embodiments.
Further embodiments of the present application provide a readable storage medium on which a computer program may be stored; when executed by a processor, the computer program may run the image processing method provided by the foregoing embodiments.
Other features and advantages of the present application will be set forth in the following description and will in part become apparent from the description, or be understood by implementing the embodiments of the present application. The objects and other advantages of the present application can be realized and obtained by the structures particularly pointed out in the written description, the claims, and the drawings.
Brief Description of the Drawings
To explain the technical solutions of the embodiments of the present application more clearly, the drawings required in the embodiments are briefly introduced below. It should be understood that the following drawings show only some embodiments of the present application and should not be regarded as limiting the scope; those of ordinary skill in the art can obtain other related drawings from these drawings without creative work.
FIG. 1 is a flowchart of an image processing method provided by an embodiment of the present application;
FIG. 2 is a flowchart of another image processing method provided by an embodiment of the present application;
FIG. 3 is a structural block diagram of an image processing apparatus provided by an embodiment of the present application;
FIG. 4 is a schematic structural diagram of an electronic device for executing an image processing method provided by an embodiment of the present application.
Detailed Description of Embodiments
The technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the drawings. Obviously, the described embodiments are only some of the embodiments of the present application, not all of them. The components of the embodiments of the present application, as generally described and illustrated in the drawings herein, may be arranged and designed in a variety of different configurations. Therefore, the following detailed description of the embodiments provided in the drawings is not intended to limit the scope of the claimed application, but merely represents selected embodiments of the application. Based on the embodiments of the present application, all other embodiments obtained by those skilled in the art without creative work fall within the protection scope of the present application.
It should be noted that similar reference numerals and letters denote similar items in the following drawings; therefore, once an item is defined in one drawing, it does not need to be further defined or explained in subsequent drawings. In the description of the present application, the terms "first", "second", and the like are used only to distinguish descriptions and cannot be understood as indicating or implying relative importance.
In the related art, processing image information with an NPU is costly and energy-intensive. To solve this problem, the present application provides an image processing method, an image processing apparatus, and an electronic device. The method acquires an image to be processed, performs demosaic processing on it by using preset processing rules to obtain a demosaiced initial image, and corrects the initial image by using correction parameters output by a neural network processor for the image to be processed to obtain a target image. In this way, the NPU can be used to obtain correction parameters for the image to be processed, so that the initial image processed by the preset processing rules can be corrected. Compared with the related-art approach of using the neural network processor to output the target image directly, the image processing method provided by the present application can be assisted by the correction parameters output by the NPU, which reduces the dependence on the NPU during image processing and thereby reduces processing energy consumption and processing cost. In some application scenarios, the above image processing method can be applied to image-capturing devices such as still cameras and video cameras to process the image information output by an image sensor. In other application scenarios, it can also be applied to an image processing server to process received image information.
The defects in the above related-art solutions are all results obtained by the inventor after practice and careful study; therefore, the discovery process of the above problems and the solutions proposed below in the embodiments of the present application should all be regarded as the inventor's contribution to the present application.
Referring to FIG. 1, which shows a flowchart of an image processing method provided by an embodiment of the present application. As shown in FIG. 1, the image processing method includes the following steps 101 to 103.
Step 101: acquire an image to be processed.
In some application scenarios, the image to be processed may be captured by a device such as a still camera or a video camera. In these scenarios, the image to be processed may include raw image data (that is, an image in RAW format) in which a complementary metal oxide semiconductor (CMOS) sensor or a charge-coupled device (CCD) converts the captured light signal into a digital signal. Each pixel of a RAW image has a color value for only one channel; for example, pixel A of a RAW image may have a color value only on the R channel.
Step 102: perform demosaic processing on the image to be processed by using preset processing rules to obtain a demosaiced initial image.
The preset processing rules may be implemented, for example, by demosaicing algorithms in the prior art such as simple interpolation or bilinear interpolation, so as to interpolate the RAW image and obtain the demosaiced initial image.
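For readers who want a concrete picture of such a preset processing rule, the sketch below demosaics an RGGB Bayer RAW frame by plain bilinear (normalized-convolution) interpolation. It is a minimal illustration only: the RGGB layout, the NumPy/SciPy implementation, and the function name are assumptions of this sketch, not the particular algorithm adopted by the application.

```python
import numpy as np
from scipy.signal import convolve2d

def bilinear_demosaic(raw: np.ndarray) -> np.ndarray:
    """Minimal bilinear demosaic for an RGGB Bayer RAW frame (H x W) -> (H x W x 3)."""
    h, w = raw.shape
    rgb = np.zeros((h, w, 3), dtype=np.float32)
    mask = np.zeros_like(rgb)
    # Scatter each channel's known samples into its own plane.
    rgb[0::2, 0::2, 0] = raw[0::2, 0::2]; mask[0::2, 0::2, 0] = 1  # R samples
    rgb[0::2, 1::2, 1] = raw[0::2, 1::2]; mask[0::2, 1::2, 1] = 1  # G on R rows
    rgb[1::2, 0::2, 1] = raw[1::2, 0::2]; mask[1::2, 0::2, 1] = 1  # G on B rows
    rgb[1::2, 1::2, 2] = raw[1::2, 1::2]; mask[1::2, 1::2, 2] = 1  # B samples
    k = np.array([[0.25, 0.5, 0.25],
                  [0.50, 1.0, 0.50],
                  [0.25, 0.5, 0.25]], dtype=np.float32)
    for c in range(3):
        num = convolve2d(rgb[..., c], k, mode="same")   # weighted neighbor sum
        den = convolve2d(mask[..., c], k, mode="same")  # sum of available weights
        # Keep known samples; average the neighbors where the sample is missing.
        rgb[..., c] = np.where(mask[..., c] > 0, rgb[..., c],
                               num / np.maximum(den, 1e-6))
    return rgb
```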
In some application scenarios, after the image to be processed is acquired, it may be demosaiced by the above preset processing rules to obtain the processed initial image. In these scenarios, before demosaicing, white balance processing may first be performed on the image to be processed so that its colors are as undistorted as possible.
Step 103: correct the initial image by using correction parameters output by a neural network processor for the image to be processed to obtain a target image.
After the image to be processed is acquired, it may be input into the neural network processor so that the neural network processor outputs correction parameters for correcting the initial image. In some application scenarios, the correction parameters may include, for example, parameters for correcting purple fringing, parameters for correcting the demosaicing result, and so on.
After the neural network processor outputs the corresponding correction parameters, the corresponding positions of the initial image may be corrected according to the content to which the parameters apply, to obtain the target image. For example, after the neural network processor outputs demosaic correction parameters, these parameters may be used to correct the demosaiced area of the initial image to obtain a target image with a better demosaicing effect.
Through the above steps 101 to 103, the correction parameters output by the neural network processor can be used to correct the initial image to obtain a target image with a better effect. Compared with the related-art approach of using the neural network processor to output the target image directly, the image processing method provided by this embodiment can be assisted by the correction parameters output by the NPU, which reduces the dependence on the NPU during image processing and thereby reduces processing energy consumption and processing cost.
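Read end to end, steps 101 to 103 amount to a short pipeline. The sketch below assumes a multiplicative (residual-style) correction, which is only one of the correction forms described later; `demosaic` and `npu_model` are hypothetical stand-ins for the preset processing rule and the neural network processor, not APIs defined by this application.

```python
import numpy as np

def process_image(raw, demosaic, npu_model):
    """Sketch of steps 101-103: conventional demosaic plus an NPU-predicted correction."""
    initial = demosaic(raw)            # step 102: initial RGB image from preset rules
    correction = npu_model(raw)        # step 103: per-pixel correction parameters
    # Apply a residual-style multiplicative correction and clamp to valid range.
    target = np.clip(initial * correction, 0.0, 1.0)
    return target
```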
In some optional implementations, the correction parameters include a first correction parameter and a second correction parameter, which may correct different image effects.
Referring to FIG. 2, which shows a flowchart of another image processing method provided by an embodiment of the present application. As shown in FIG. 2, the image processing method includes the following steps 201 to 204.
Step 201: acquire an image to be processed.
The implementation process and technical effects of step 201 may be the same as or similar to those of step 101 in the embodiment shown in FIG. 1 and are not repeated here.
Step 202: perform demosaic processing on the image to be processed by using preset processing rules to obtain a demosaiced initial image.
The implementation process and technical effects of step 202 may be the same as or similar to those of step 102 in the embodiment shown in FIG. 1 and are not repeated here.
Step 203: correct the demosaicing parameters of the initial image by using the first correction parameter to obtain a corrected intermediate image.
In some application scenarios, the first correction parameter may be used to correct the demosaicing parameters of the initial image. Here, the correction content corresponding to the first correction parameter is the demosaicing effect of the initial image.
In these scenarios, when the related-art demosaicing algorithm is used to demosaic the image to be processed, the missing color value of a pixel needs to be interpolated from the color values of the surrounding pixels. Moreover, during interpolation, the color-value error threshold between the pixel with the missing color value and its surrounding pixels has to be adjusted manually and repeatedly to determine the optimal interpolation result, which consumes labor cost.
In this step, the initial image is corrected by the first correction parameter to obtain the intermediate image. In this way, no manual participation is needed in determining the interpolation result, which reduces labor cost.
Step 204: correct the intermediate image by using the second correction parameter to obtain a target image.
The second correction parameter can further optimize the demosaiced intermediate image to obtain the target image. In addition, the second correction parameter may include, for example, a purple-fringing correction parameter, a false-color correction parameter, and so on.
This embodiment highlights the steps of correcting the initial image and the intermediate image with the first correction parameter and the second correction parameter respectively, so that the obtained target image is not significantly different from an image output directly after processing by a neural network processor, while cost and energy consumption are reduced.
In some optional implementations, the first correction parameter includes an interpolation weight parameter and the demosaicing parameter includes an interpolation parameter; and step 203 may include:
Sub-step 2031: correct the interpolation parameter by using the interpolation weight parameter to obtain an interpolation result corresponding to each pixel of the initial image.
The interpolation weight parameter may characterize the weights of a target pixel to be interpolated in different interpolation directions. Here, the different interpolation directions may correspond to the pixels adjacent to the target pixel; that is, the interpolation weight parameter may characterize the weights with which the missing color value of the target pixel takes the color values of its neighboring pixels.
The interpolation parameter may characterize the neighboring-pixel color values taken, under the preset processing rules, for the target pixel whose color value is missing.
After the interpolation weight parameters output by the neural network processor are obtained, they may be used to correct the interpolation parameters. In some application scenarios, for example, a weighted sum of the interpolation weight parameters and the interpolation parameters may be computed to obtain the final interpolation result of the target pixel.
Sub-step 2032: perform interpolation processing on each pixel according to its interpolation result to obtain the intermediate image.
After the interpolation result of each pixel is obtained, the pixels of the initial image may be interpolated using these results so that the three-channel color values of each pixel are complete, thereby obtaining the intermediate image.
Through sub-steps 2031 and 2032, the process of correcting the interpolation parameters with the interpolation weight parameters is highlighted, so that the interpolation weight parameters output by the neural network processor can be combined with the interpolation parameters obtained by the related-art demosaicing algorithm. In this way, no manual participation is needed in determining the interpolation result, which reduces labor cost.
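One way to realize the weighted sum in sub-steps 2031 and 2032 is as a per-pixel blend of directional interpolation candidates. The sketch below assumes four candidates (up, down, left, right) and NPU weights normalized over those directions; both assumptions are illustrative rather than prescribed by the application.

```python
import numpy as np

def directional_candidates(plane: np.ndarray) -> np.ndarray:
    """Four neighbor candidates (up, down, left, right) via edge-padded shifts."""
    p = np.pad(plane, 1, mode="edge")
    up, down = p[:-2, 1:-1], p[2:, 1:-1]
    left, right = p[1:-1, :-2], p[1:-1, 2:]
    return np.stack([up, down, left, right], axis=-1)   # (H, W, 4)

def fuse_interpolation(candidates: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """Blend the directional candidates with NPU-predicted weights.

    candidates: (H, W, D) color values proposed by the preset rule along D directions.
    weights:    (H, W, D) interpolation weight parameters, assumed to sum to 1 over D.
    Returns the (H, W) fused interpolation result for the missing channel.
    """
    return np.sum(candidates * weights, axis=-1)
```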
In some optional implementations, the neural network processor includes a first neural network model, the interpolation weight parameters are obtained by the first neural network model based on the image to be processed, and the training of the first neural network model includes:
Step A1: acquire a first sample image set; the first sample image set includes at least one first sample image, and the first sample image includes a RAW image.
The RAW image is the raw image data in which a CMOS or CCD sensor converts the captured light signal into a digital signal, and is common in Bayer-array technology.
In practice, demosaicing interpolates an image in the RAW domain into an image in the RGB domain; that is, the missing color values of the pixels of the RAW image are interpolated to obtain an RGB image whose three channel color values are all complete.
Therefore, to train interpolation weight parameters capable of correcting the demosaicing parameters, multiple RAW images may be acquired and organized into the first sample image set.
Step A2: input the first sample image into a first initial neural network model, and obtain initial interpolation weight parameters, output by the first initial neural network model, corresponding to each pixel of the first sample image.
After the first sample image set is acquired, one first sample image may be selected and input into the first initial neural network model, which is set to be trained toward outputting the initial interpolation weight parameters corresponding to each pixel of the first sample image. Here, the initial interpolation weight parameters likewise characterize the weights with which the missing color value of a pixel takes the color values of its neighboring pixels.
Step A3: obtain initial interpolation parameters corresponding to each pixel by using preset interpolation rules.
In some application scenarios, an existing demosaicing algorithm of the related art may be used to obtain the initial interpolation parameters corresponding to each pixel.
Step A4: determine an initial interpolation result for each pixel according to the initial interpolation weight parameters and the initial interpolation parameters, and perform interpolation processing on the first sample image by using the initial interpolation results to obtain a first initial target image.
In these scenarios, the initial interpolation result may be determined from the initial interpolation weight parameters and the initial interpolation parameters corresponding to the first sample image. Specifically, for example, a weighted sum of the two may be computed to obtain the initial interpolation result for each pixel. Then, after interpolating the first sample image with these results, the first initial target image corresponding to the first sample image, i.e., its initial RGB image, can be obtained.
Step A5: determine, by using a first preset loss function, a first loss function value between the first initial target image and an expected image of the first sample image.
In some application scenarios, the expected image may include, for example, an RGB image obtained directly by processing the first sample image with a neural network processor as in the related art.
The first preset loss function may include, for example, a mean squared error loss function (also called an L2 loss) or a mean absolute error loss function (also called an L1 loss).
Here, using the first preset loss function to determine the first loss function value between the first initial target image and the expected image is well known to those skilled in the art and is not repeated here.
Step A6: adjust the model parameters corresponding to the first initial neural network model by using the first loss function value so that the first initial neural network model converges, to obtain the first neural network model.
After the first loss function value is obtained, it may be used to adjust the model parameters of the first initial neural network model so that the first initial neural network model converges.
In addition, during training of the first initial neural network model, steps A1 to A6 may be performed for every first sample image in the first sample image set so that the first initial neural network model converges, yielding the first neural network model. In this way, after the image to be processed is input into the first neural network model, the model can output interpolation weight parameters for that image.
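Steps A1 to A6 describe a conventional supervised training loop. The sketch below writes that loop with PyTorch and an L1 loss purely for illustration; the model architecture, the `demosaic_candidates` helper, the optimizer, and all tensor shapes are assumptions of this sketch, not details fixed by the application.

```python
import torch
import torch.nn.functional as F

def train_weight_model(model, loader, demosaic_candidates, epochs=10, lr=1e-4):
    """Steps A1-A6 as a supervised loop: the model predicts per-pixel, per-direction
    interpolation weights for a RAW input; the loss compares the fused RGB result
    with the expected image."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        for raw, expected in loader:                 # step A1: (RAW, expected RGB) pairs
            weights = model(raw).softmax(dim=1)      # step A2: (N, D, H, W) weights
            cand = demosaic_candidates(raw)          # step A3: (N, D, 3, H, W) candidates
            fused = (weights.unsqueeze(2) * cand).sum(dim=1)  # step A4: (N, 3, H, W)
            loss = F.l1_loss(fused, expected)        # step A5: first preset loss (L1)
            opt.zero_grad()
            loss.backward()
            opt.step()                               # step A6: adjust model parameters
    return model
```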
In some optional implementations, the second correction parameter includes a residual correction parameter; and step 204 may include: correcting the intermediate image by using the residual correction parameter to eliminate the residual between the intermediate image and an expected target image, to obtain the target image.
The expected target image may include, for example, an RGB image obtained directly by processing the image to be processed with a neural network processor as in the related art. Such an RGB image can be regarded as the target image in an ideal state.
In practice, there is a pixel-level residual between the RGB image obtained after interpolation with the related-art demosaicing algorithm (i.e., the intermediate image) and the expected RGB image (i.e., the expected target image). This pixel-level residual therefore needs to be eliminated. Specifically, it can be eliminated with the residual correction parameter to obtain a target image with better quality.
In addition, since the residual between the intermediate image and the expected target image can be eliminated by the residual correction parameter, the requirements on the computational cost of the preset processing rules and on the demosaicing quality of the initial image can be relaxed when choosing the preset processing rules, which also reduces cost to a certain extent.
In some application scenarios, for example, the obtained residual correction parameter may be multiplied with the color value of each pixel of the RGB image obtained by the preset processing rules to obtain the target image.
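Under the multiplicative reading just described, applying the residual correction parameter is a single elementwise operation. The sketch below assumes the NPU returns one gain per pixel and channel, which is an illustrative choice rather than a requirement of the application.

```python
import numpy as np

def apply_residual_correction(intermediate: np.ndarray,
                              residual_gain: np.ndarray) -> np.ndarray:
    """Multiply each pixel's color value by the predicted residual correction
    parameter (an assumed per-pixel, per-channel gain), then clamp to range."""
    return np.clip(intermediate * residual_gain, 0.0, 1.0)
```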
In some optional implementations, the neural network processor includes a second neural network model, the residual correction parameter is obtained by the second neural network model based on the image to be processed, and the training of the second neural network model includes:
Step B1: acquire a second sample image set; the second sample image set includes at least one second sample image, and the second sample image includes a RAW image.
The implementation process and technical effects of step B1 may be the same as or similar to those of step A1 and are not repeated here.
Step B2: input the second sample image into a second initial neural network model, and obtain an initial residual correction parameter output by the second initial neural network model.
After the second sample image set is acquired, one second sample image may be selected and input into the second initial neural network model, which is set to be trained toward outputting the initial residual correction parameter corresponding to the second sample image.
Step B3: obtain a demosaiced image corresponding to the second sample image by using the preset processing rules.
In some application scenarios, an existing demosaicing algorithm of the related art may be used to obtain the interpolation parameters corresponding to each pixel, which are then used to obtain the corresponding demosaiced image.
Step B4: correct, by using the initial residual correction parameter, the residual between the demosaiced image and an expected initial target image to obtain a second initial target image.
In these scenarios, the initial residual correction parameter corresponding to the second sample image may be multiplied with the color value of each pixel of the demosaiced image to obtain the second initial target image.
Step B5: determine, by using a second preset loss function, a second loss function value between the second initial target image and the expected initial target image.
The implementation process and technical effects of step B5 may be similar to those of step A5 and are not repeated here.
Step B6: adjust the model parameters corresponding to the second initial neural network model by using the second loss function value so that the second initial neural network model converges, to obtain the second neural network model.
The implementation process and technical effects of step B6 may be similar to those of step A6 and are not repeated here.
During training of the second initial neural network model, steps B1 to B6 may likewise be performed for every second sample image in the second sample image set so that the second initial neural network model converges, yielding the second neural network model. In this way, after the image to be processed is input into the second neural network model, the model can output residual correction parameters for that image.
In some optional implementations, after step B1, the training of the second neural network model further includes: performing tone mapping on the second sample image to reduce the bit width of the second sample image.
In some application scenarios, every second sample image in the second sample image set may be tone-mapped to convert high-bit-width second sample images into low-bit-width ones. In this way, the sample images used in training the second neural network model are all low-bit-width image data, reducing the amount of computation during training. Correspondingly, before the image to be processed is input into the second neural network model, the high-bit-width image to be processed may also be tone-mapped to reduce its computational load and energy consumption.
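As one concrete example of reducing bit width through tone mapping, the sketch below compresses a high-bit-width frame (12-bit is assumed) to 8 bits with a simple gamma-style curve; the application does not fix a particular tone-mapping operator, so the curve here is only an assumption.

```python
import numpy as np

def tone_map_to_8bit(raw: np.ndarray, src_bits: int = 12,
                     gamma: float = 1 / 2.2) -> np.ndarray:
    """Compress a high-bit-width image to 8 bits: normalize, apply a gamma-style
    tone curve, and requantize. Gamma and source bit width are illustrative."""
    x = raw.astype(np.float32) / (2 ** src_bits - 1)   # normalize to [0, 1]
    y = np.power(x, gamma)                             # compress highlights
    return np.round(y * 255.0).astype(np.uint8)        # 8-bit output
```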
In some optional implementations, the second correction parameter includes a false-color correction parameter; and step 204 may include: correcting the false-color area of the intermediate image by using the false-color correction parameter to obtain the target image.
The false-color area is an area of pseudo-color. In practice, the color of each pixel of a pseudo-color image is not determined directly by the value of each primary color component; instead, the pixel value is treated as an entry address into a palette or color look-up table, from which the actual R, G, and B intensity values are looked up. If a color in the image does not exist in the palette or color look-up table, the palette matches it with the closest color. The color produced by the looked-up R, G, and B intensities is not the true color of the image itself, hence the name false color.
In practice, false-color areas may be introduced when interpolating the image to be processed, so the false-color area can be corrected. Specifically, the false-color correction parameter may be used to correct the false-color area to obtain the target image.
In some optional implementations, the neural network processor includes a third neural network model, the false-color correction parameters are obtained by the third neural network model based on the image to be processed, and the training of the third neural network model includes:
Step C1: acquire a third sample image set; the third sample image set includes at least one third sample image, and the third sample image includes a RAW image.
The implementation process and technical effects of step C1 may be similar to those of step B1 and are not repeated here.
Step C2: input the third sample image into a third initial neural network model, and obtain initial false-color correction parameters, output by the third initial neural network model, corresponding to each pixel of the third sample image.
Any RAW image is selected and input into the third initial neural network model, which is set to be trained toward outputting initial false-color correction parameters.
In some application scenarios, the luminance signal and chrominance signal of the image can be processed to remove false color. Therefore, the initial false-color correction parameters may include correction parameters for the lightness information and the chrominance signal.
Step C3: obtain a demosaiced image corresponding to the third sample image by using the preset processing rules.
The implementation process and technical effects of step C3 may be similar to those of step B3 and are not repeated here.
Step C4: correct the false-color area corresponding to the demosaiced image by using the initial false-color correction parameters to obtain a corrected third initial target image.
After the third initial neural network model outputs the initial false-color correction parameters, these may be used to process the false-color area of the demosaiced image.
In some application scenarios, the demosaiced image is generally in the RGB domain, so it may be converted from the RGB domain to the YUV domain so that the false-color correction parameters can be applied to the converted demosaiced image. The conversion between the RGB domain and the YUV domain is well known to those skilled in the art and is not repeated here. The YUV domain characterizes an image by luminance and chrominance: "Y" denotes luminance, i.e., the lightness value, while "U" and "V" denote chrominance, describing the color and saturation of the image and specifying the color of a pixel. The initial correction parameters can therefore correct the chrominance information U and V.
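For completeness, one conventional full-range BT.601 conversion between the RGB and YUV domains is sketched below; the exact coefficients an ISP uses depend on the color standard, so treat this matrix as one common choice rather than the one mandated by the application.

```python
import numpy as np

# BT.601 full-range RGB <-> YUV matrices (one conventional choice).
RGB2YUV = np.array([[ 0.299,  0.587,  0.114],
                    [-0.169, -0.331,  0.500],
                    [ 0.500, -0.419, -0.081]], dtype=np.float32)

def rgb_to_yuv(rgb: np.ndarray) -> np.ndarray:
    """rgb: (H, W, 3) in [0, 1]. Returns Y in [0, 1]; U and V roughly in [-0.5, 0.5]."""
    return rgb @ RGB2YUV.T

def yuv_to_rgb(yuv: np.ndarray) -> np.ndarray:
    """Inverse transform back to the RGB domain (e.g., before the loss in step C5)."""
    return yuv @ np.linalg.inv(RGB2YUV).T
```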
In some optional implementations, the initial false-color correction parameter includes an initial false-color weight parameter or an initial false-color compensation value; the initial false-color weight parameter is used to characterize the degree of chroma fading for each pixel, and the initial false-color compensation value is used to compensate the chroma of each pixel.
In some application scenarios, when the initial false-color correction parameter is an initial false-color weight parameter, it may be used to fade the chroma of each pixel of the demosaiced image; for example, the initial false-color weight parameter may be multiplied by the chroma of each pixel to obtain a chroma-faded image. It should be noted that the values of the initial false-color weight parameter should lie in (0, 1), each value representing one weight.
In other application scenarios, when the initial false-color correction parameter is an initial false-color compensation value, it may be used to compensate the chroma of each pixel of the demosaiced image; for example, the obtained initial false-color compensation value may be added to the chroma of a pixel to obtain a compensated image.
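On a YUV image, the two variants just described reduce to a multiply or an add on the U and V channels, as in the sketch below; the shapes assumed for the parameter maps are illustrative.

```python
import numpy as np

def fade_chroma(yuv: np.ndarray, weight: np.ndarray) -> np.ndarray:
    """False-color weight variant: scale U and V by a per-pixel weight in (0, 1)."""
    out = yuv.copy()
    out[..., 1:] *= weight[..., None]   # weight: (H, W), applied to both chroma channels
    return out

def compensate_chroma(yuv: np.ndarray, offset: np.ndarray) -> np.ndarray:
    """False-color compensation variant: add a per-pixel, per-channel chroma offset."""
    out = yuv.copy()
    out[..., 1:] += offset              # offset: (H, W, 2) for U and V
    return out
```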
Step C5: determine, by using a third preset loss function, a third loss function value between the third initial target image and an expected image of the third sample image.
The third preset loss function may include, for example, a mean squared error loss function (also called an L2 loss) or a mean absolute error loss function (also called an L1 loss).
In some application scenarios, the expected image may include, for example, an RGB image obtained directly by processing the third sample image with a neural network processor as in the related art.
To compute the loss against the expected image, the third initial target image in the YUV domain needs to be converted back to the RGB domain, after which the loss is computed with the third preset loss function.
Step C6: adjust the model parameters corresponding to the third initial neural network model by using the third loss function value so that the third initial neural network model converges, to obtain the third neural network model.
The implementation process and technical effects of step C6 may be similar to those of step A6 and are not repeated here.
During training of the third initial neural network model, steps C1 to C6 may likewise be performed for every third sample image in the third sample image set so that the third initial neural network model converges, yielding the third neural network model. In this way, after the image to be processed is input into the third neural network model, the model can output false-color correction parameters for that image.
In some optional implementations, the second correction parameter includes a purple-fringing correction parameter; and correcting the intermediate image by using the second correction parameter to obtain the target image includes: correcting the purple-fringed area of the intermediate image by using the purple-fringing correction parameter to obtain the target image.
Purple fringing refers to color blotches appearing at the boundary between highlights and shadows when a digital camera shoots a subject with high contrast. The purple-fringing correction parameter can therefore be used to correct the purple-fringed areas in the intermediate image to obtain the target image.
In some optional implementations, the neural network processor includes a fourth neural network model, the purple-fringing correction parameters are obtained by the fourth neural network model based on the image to be processed, and the training of the fourth neural network model includes:
Step D1: acquire a fourth sample image set; the fourth sample image set includes at least one fourth sample image, and the fourth sample image includes a demosaiced image.
The demosaiced image may be obtained, for example, by a demosaicing algorithm; for the specific implementation, refer to the foregoing description of the demosaicing algorithm, which is not repeated here.
After multiple demosaiced images are obtained, they can be organized into the fourth sample image set.
Step D2: input the fourth sample image into a fourth initial neural network model, and obtain an initial purple-fringing correction parameter, output by the fourth initial neural network model, corresponding to the fourth sample image.
The implementation process and technical effects of step D2 may be similar to those of step C2 and are not repeated here.
In some application scenarios, before the fourth sample image is input into the fourth initial neural network model, it may be converted from the RGB domain to the YUV domain so that the initial purple-fringing correction parameter can correct the chrominance.
In some optional implementations, the initial purple-fringing correction parameter includes an initial purple-fringing weight parameter or an initial purple-fringing compensation value; the initial purple-fringing weight parameter is used to characterize the degree to which color blotches in the purple-fringed area are removed, and the initial purple-fringing compensation value is used to compensate the saturation of each pixel in the purple-fringed area.
In some application scenarios, when the initial purple-fringing correction parameter is an initial purple-fringing weight parameter, it may be used to reduce the saturation of each pixel in the purple-fringed area of the demosaiced image; for example, the initial purple-fringing weight parameter may be multiplied by the saturation value of each pixel to obtain a desaturated image. It should be noted that the values of the initial purple-fringing weight parameter should lie in (0, 1), each value corresponding to one weight.
In other application scenarios, when the initial purple-fringing correction parameter is an initial purple-fringing compensation value, it may be used to compensate the saturation value of each pixel in the purple-fringed area of the demosaiced image; for example, the obtained initial purple-fringing compensation value may be added to the corresponding saturation value of a pixel to obtain a compensated image.
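Both purple-fringing variants can be expressed as an adjustment of chroma magnitude inside a fringe mask, as sketched below. Taking saturation as sqrt(U^2 + V^2) is a convenient proxy assumed by this sketch, not a definition given by the application, and the mask is presumed to come from some separate fringe detector.

```python
import numpy as np

def correct_purple_fringe(yuv, fringe_mask, weight=None, offset=None):
    """Desaturate (weight variant) or compensate (offset variant) pixels flagged as
    purple fringing; exactly one of `weight` or `offset` should be supplied."""
    if (weight is None) == (offset is None):
        raise ValueError("provide exactly one of weight or offset")
    out = yuv.copy()
    u, v = out[..., 1], out[..., 2]
    sat = np.sqrt(u ** 2 + v ** 2) + 1e-6        # chroma magnitude as saturation proxy
    if weight is not None:                        # weight in (0, 1): shrink saturation
        new_sat = sat * weight
    else:                                         # compensation value: shift saturation
        new_sat = np.clip(sat + offset, 0.0, None)
    scale = np.where(fringe_mask, new_sat / sat, 1.0)  # only touch fringed pixels
    out[..., 1] = u * scale
    out[..., 2] = v * scale
    return out
```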
Step D3: correct the purple-fringed area corresponding to the demosaiced image by using the initial purple-fringing correction parameter to obtain a corrected fourth initial target image.
Here, when correcting the purple-fringed area, the processing depends on whether the initial purple-fringing correction parameter is specifically an initial purple-fringing weight parameter or an initial purple-fringing compensation value, yielding the corrected fourth initial target image.
Step D4: determine, by using a fourth preset loss function, a fourth loss function value between the fourth initial target image and the expected image corresponding to the fourth sample image.
The implementation process and technical effects of step D4 may be similar to those of step C5 and are not repeated here.
Step D5: adjust the model parameters corresponding to the fourth initial neural network model by using the fourth loss function value so that the fourth initial neural network model converges, to obtain the fourth neural network model.
The implementation process and technical effects of step D5 may be similar to those of step C6 and are not repeated here.
During training of the fourth initial neural network model, steps D1 to D5 may be performed for every fourth sample image in the fourth sample image set so that the fourth initial neural network model converges, yielding the fourth neural network model. In this way, after the image to be processed is input into the fourth neural network model, the model can output purple-fringing correction parameters for that image.
In some optional implementations, the second correction parameter includes a sharpening correction parameter; and step 204 may include: correcting the blurred area of the intermediate image by using the sharpening correction parameter to obtain the target image; the sharpening correction parameter includes a lightness weight parameter for correcting lightness.
In some application scenarios, the intermediate image processed by the preset processing rules may be partially or entirely blurred. Therefore, the sharpening correction parameter may be used to correct the blurred area of the intermediate image to obtain the target image. In these scenarios, the sharpening operation can focus blurred edges and improve the clarity or degree of focus of a certain part of the image, making the colors of specific areas of the intermediate image more vivid.
In some application scenarios, the sharpening correction parameter includes a parameter for correcting lightness; that is, the sharpening correction parameter can be used to correct the "Y" information of a YUV-domain image.
In other application scenarios, the sharpening correction parameter may also include a parameter for correcting chrominance, that is, a correction of the "U" and "V" information of a YUV-domain image.
In some optional implementations, the neural network processor includes a fifth neural network model, the sharpening correction parameters are obtained by the fifth neural network model based on the image to be processed, and the training of the fifth neural network model includes:
Step E1: acquire a fifth sample image set; the fifth sample image set includes at least one fifth sample image, and the fifth sample image includes a YUV image.
A YUV image is an image characterized by luminance and chrominance: "Y" denotes luminance, i.e., the lightness value, while "U" and "V" denote chrominance, describing the color and saturation of the image and specifying the color of a pixel.
To correct the lightness information and chrominance information of images, multiple YUV images may be selected and organized into the fifth sample image set.
Step E2: input the fifth sample image into a fifth initial neural network model, and obtain lightness weight parameters, output by the fifth initial neural network model, corresponding to each pixel of the fifth sample image.
That is, the input of the fifth initial neural network model may be a YUV image, and the model is set to be trained toward outputting lightness weight parameters. The values of the lightness weight parameters may lie in (0, 1), each value representing one sharpening strength.
Step E3: guide sharpening of the fifth sample image by using the lightness weight parameters to obtain sharpened initial lightness values.
After the lightness weight parameters are obtained, they may be used to guide sharpening of the fifth sample image. For example, if the lightness weight parameter of pixel A is 0.5 and that of pixel B is 0.2, then when sharpening the fifth sample image, the lightness of pixels A and B may be multiplied by their respective lightness weight parameters to obtain the initial lightness value of each pixel.
In some application scenarios, the fifth sample image may first be passed through a high-pass filter to remove low-frequency components and keep high-frequency components, and then the lightness weight parameters are used to guide sharpening of the fifth sample image, so as to strengthen the sharpening of the fifth sample image.
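Combining the high-pass step with per-pixel lightness weights yields an unsharp-mask-style operation such as the sketch below; the 3x3 Laplacian-style kernel is an assumed choice of high-pass filter, and adding the weighted detail back to Y is one plausible reading of "guiding" the sharpening, not the only one.

```python
import numpy as np
from scipy.signal import convolve2d

HIGH_PASS = np.array([[ 0, -1,  0],
                      [-1,  4, -1],
                      [ 0, -1,  0]], dtype=np.float32)  # Laplacian-style kernel

def sharpen_lightness(y: np.ndarray, weight: np.ndarray) -> np.ndarray:
    """Per-pixel guided sharpening of the Y (lightness) plane.

    y:      (H, W) lightness in [0, 1].
    weight: (H, W) lightness weight parameters in (0, 1); larger = stronger sharpening.
    """
    detail = convolve2d(y, HIGH_PASS, mode="same")   # keep only high frequencies
    return np.clip(y + weight * detail, 0.0, 1.0)   # add weighted detail back
```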
Step E4: determine, by using a fifth preset loss function, a fifth loss function value between the initial lightness values and the expected lightness values of the fifth sample image.
The fifth preset loss function may include, for example, a mean squared error loss function (also called an L2 loss) or a mean absolute error loss function (also called an L1 loss).
In some application scenarios, the fifth preset loss function may be used to compute the loss between the initial lightness values and the expected lightness values corresponding to the fifth sample image, to obtain the fifth loss function value.
Step E5: adjust the model parameters corresponding to the fifth initial neural network model by using the fifth loss function value so that the fifth initial neural network model converges, to obtain the fifth neural network model.
The implementation process and technical effects of step E5 may be similar to those of step C6 and are not repeated here.
During training of the fifth initial neural network model, steps E1 to E5 may be performed for every fifth sample image in the fifth sample image set so that the fifth initial neural network model converges, yielding the fifth neural network model. In this way, after the image to be processed is input into the fifth neural network model, the model can output lightness weight parameters for that image.
In some optional implementations, the neural network processor includes a sixth neural network model, the sharpening correction parameters are obtained by the sixth neural network model based on the image to be processed, and the training of the sixth neural network model includes:
Step F1: acquire a sixth sample image set; the sixth sample image set includes at least one sixth sample image, and the sixth sample image includes a YUV image.
The implementation process and technical effects of step F1 may be the same as or similar to those of step E1 and are not repeated here.
Step F2: input the sixth sample image into a sixth initial neural network model, and obtain at least one target processing area output by the sixth initial neural network model, each target processing area corresponding to a lightness weight parameter.
A target processing area can be regarded as an area requiring special sharpening. For example, for a sixth sample image containing a person, the face may be output as a first target processing area, whose sharpening strength can then be increased to improve the clarity of the face; the hands may also be determined as a second target processing area, whose sharpening strength can likewise be increased to improve the clarity of the hands. Here, the process by which the sixth initial neural network model determines the target processing areas may be the same as or similar to object detection algorithms in the related art and is not repeated here.
Each target processing area may correspond to one lightness weight parameter, so that different target processing areas can have different sharpening strengths. For example, the first target processing area may correspond to a lightness weight parameter of 0.9, and the second target processing area may correspond to a lightness weight parameter of 0.7.
In some application scenarios, the lightness weight parameters here may be preset. For example, the lightness weight parameter of the first target processing area may be preset to 0.9, so that once the first target processing area is output, its pixels can be associated with the lightness weight parameter 0.9.
Step F3: correct the lightness information of each target processing area by using its corresponding lightness weight parameter to obtain corrected initial lightness values.
After the target processing areas are obtained, the lightness weight parameter may be used to correct the lightness information of the associated target processing area to obtain the corresponding initial lightness values. In some application scenarios, for example, the lightness value of each pixel in the target processing area may be multiplied by the corresponding lightness weight parameter to obtain the corresponding initial lightness value.
In some application scenarios, the sixth sample image may first be passed through a high-pass filter to remove low-frequency components and keep high-frequency components, and then the lightness weight parameters are used to guide sharpening of the target processing areas of the sixth sample image, so as to strengthen the sharpening of the sixth sample image.
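Applying one preset lightness weight per detected area, as in steps F2 and F3, can reuse the per-pixel sharpening primitive above once the areas are rasterized into a weight map. The (x0, y0, x1, y1) box format and the example weights are assumptions of this sketch.

```python
import numpy as np

def region_weight_map(shape, regions, default: float = 0.0) -> np.ndarray:
    """Build a per-pixel weight map from detected target processing areas.

    regions: list of ((x0, y0, x1, y1), weight) pairs, e.g. a face box with 0.9
    and a hand box with 0.7; the format and values here are illustrative.
    """
    wmap = np.full(shape, default, dtype=np.float32)
    for (x0, y0, x1, y1), w in regions:
        wmap[y0:y1, x0:x1] = w   # later boxes overwrite earlier ones where they overlap
    return wmap
```

A weight map produced this way can be passed directly to the `sharpen_lightness` sketch given earlier.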
Step F4: determine, by using a sixth preset loss function, a sixth loss function value between the initial lightness values and the expected lightness values of the target processing area.
The implementation process and technical effects of step F4 may be similar to those of step E4 and are not repeated here.
Step F5: adjust the model parameters corresponding to the sixth initial neural network model by using the sixth loss function value so that the sixth initial neural network model converges, to obtain the sixth neural network model.
The implementation process and technical effects of step F5 may be similar to those of step E5 and are not repeated here.
During training of the sixth initial neural network model, steps F1 to F5 may be performed for every sixth sample image in the sixth sample image set so that the sixth initial neural network model converges, yielding the sixth neural network model. In this way, after the image to be processed is input into the sixth neural network model, the model can output target processing areas for that image.
Referring to FIG. 3, which shows a structural block diagram of an image processing apparatus provided by an embodiment of the present application; the apparatus may be a module, program segment, or code on an electronic device. It should be understood that the apparatus corresponds to the method embodiment of FIG. 1 above and can execute the steps involved in that embodiment; for the specific functions of the apparatus, refer to the description above, and detailed descriptions are appropriately omitted here to avoid repetition.
Optionally, the image processing apparatus includes an acquisition module 301, a demosaic processing module 302, and a correction module 303. The acquisition module 301 may be configured to acquire an image to be processed; the demosaic processing module 302 may be configured to perform demosaic processing on the image to be processed by using preset processing rules to obtain a demosaiced initial image; and the correction module 303 may be configured to correct the initial image by using correction parameters output by a neural network processor for the image to be processed to obtain a target image.
Optionally, the correction parameters include a first correction parameter and a second correction parameter; and the correction module 303 further includes a first correction module and a second correction module, where the first correction module may be configured to correct the demosaicing parameters of the initial image by using the first correction parameter to obtain a corrected intermediate image, and the second correction module may be configured to correct the intermediate image by using the second correction parameter to obtain the target image.
Optionally, the first correction parameter includes an interpolation weight parameter and the demosaicing parameter includes an interpolation parameter; and the first correction module may be further configured to: correct the interpolation parameter by using the interpolation weight parameter to obtain an interpolation result corresponding to each pixel of the initial image, and perform interpolation processing on each pixel according to its interpolation result to obtain the intermediate image.
Optionally, the neural network processor includes a first neural network model, and the interpolation weight parameters are obtained by the first neural network model based on the image to be processed; the training of the first neural network model includes: acquiring a first sample image set, which includes at least one first sample image, the first sample image including a RAW image; inputting the first sample image into a first initial neural network model and obtaining the initial interpolation weight parameters, output by the first initial neural network model, corresponding to each pixel of the first sample image; obtaining initial interpolation parameters for each pixel by using preset interpolation rules; determining the initial interpolation result for each pixel according to the initial interpolation weight parameters and the initial interpolation parameters, and interpolating the first sample image with the initial interpolation results to obtain a first initial target image; determining, with a first preset loss function, a first loss function value between the first initial target image and the expected image of the first sample image; and adjusting the model parameters of the first initial neural network model with the first loss function value so that the model converges, to obtain the first neural network model.
Optionally, the second correction parameter includes a residual correction parameter; and the second correction module may be further configured to: correct the intermediate image by using the residual correction parameter to eliminate the residual between the intermediate image and the expected target image, to obtain the target image.
Optionally, the neural network processor includes a second neural network model, and the residual correction parameters are obtained by the second neural network model based on the image to be processed; the training of the second neural network model includes: acquiring a second sample image set, which includes at least one second sample image, the second sample image including a RAW image; inputting the second sample image into a second initial neural network model and obtaining the initial residual correction parameter output by the model; obtaining the demosaiced image corresponding to the second sample image by using preset processing rules; correcting, with the initial residual correction parameter, the residual between the demosaiced image and the expected initial target image to obtain a second initial target image; determining, with a second preset loss function, a second loss function value between the second initial target image and the expected initial target image; and adjusting the model parameters of the second initial neural network model with the second loss function value so that the model converges, to obtain the second neural network model.
Optionally, after the second sample image set is acquired, the training of the second neural network model further includes: performing tone mapping on the second sample image to reduce its bit width.
Optionally, the second correction parameter includes a false-color correction parameter; and the second correction module may be further configured to: correct the false-color area of the intermediate image by using the false-color correction parameter to obtain the target image.
Optionally, the neural network processor includes a third neural network model, and the false-color correction parameters are obtained by the third neural network model based on the image to be processed; the training of the third neural network model includes: acquiring a third sample image set, which includes at least one third sample image, the third sample image including a RAW image; inputting the third sample image into a third initial neural network model and obtaining the initial false-color correction parameters, output by the model, corresponding to each pixel of the third sample image; obtaining the demosaiced image corresponding to the third sample image by using the preset processing rules; correcting the false-color area of the demosaiced image with the initial false-color correction parameters to obtain a corrected third initial target image; determining, with a third preset loss function, a third loss function value between the third initial target image and the expected image of the third sample image; and adjusting the model parameters of the third initial neural network model with the third loss function value so that the model converges, to obtain the third neural network model.
Optionally, the initial false-color correction parameter includes an initial false-color weight parameter or an initial false-color compensation value; the initial false-color weight parameter is used to characterize the degree of chroma fading for each pixel, and the initial false-color compensation value is used to compensate the chroma of each pixel.
Optionally, the second correction parameter includes a purple-fringing correction parameter; and the second correction module may be further configured to: correct the purple-fringed area of the intermediate image by using the purple-fringing correction parameter to obtain the target image.
Optionally, the neural network processor includes a fourth neural network model, and the purple-fringing correction parameters are obtained by the fourth neural network model based on the image to be processed; the training of the fourth neural network model includes: acquiring a fourth sample image set, which includes at least one fourth sample image, the fourth sample image including a demosaiced image; inputting the fourth sample image into a fourth initial neural network model and obtaining the initial purple-fringing correction parameter, output by the model, corresponding to the fourth sample image; correcting the purple-fringed area of the demosaiced image with the initial purple-fringing correction parameter to obtain a corrected fourth initial target image; determining, with a fourth preset loss function, a fourth loss function value between the fourth initial target image and the expected image corresponding to the fourth sample image; and adjusting the model parameters of the fourth initial neural network model with the fourth loss function value so that the model converges, to obtain the fourth neural network model.
Optionally, the initial purple-fringing correction parameter includes an initial purple-fringing weight parameter or an initial purple-fringing compensation value; the initial purple-fringing weight parameter is used to characterize the degree to which color blotches in the purple-fringed area are removed, and the initial purple-fringing compensation value is used to compensate the saturation of each pixel in the purple-fringed area.
Optionally, the second correction parameter includes a sharpening correction parameter; and the second correction module may be further configured to: correct the blurred area of the intermediate image by using the sharpening correction parameter to obtain the target image; the sharpening correction parameter includes a lightness weight parameter for correcting lightness.
Optionally, the neural network processor includes a fifth neural network model, and the sharpening correction parameters are obtained by the fifth neural network model based on the image to be processed; the training of the fifth neural network model includes: acquiring a fifth sample image set, which includes at least one fifth sample image, the fifth sample image including a YUV image; inputting the fifth sample image into a fifth initial neural network model and obtaining the lightness weight parameters, output by the model, corresponding to each pixel of the fifth sample image; guiding sharpening of the fifth sample image with the lightness weight parameters to obtain sharpened initial lightness values; determining, with a fifth preset loss function, a fifth loss function value between the initial lightness values and the expected lightness values of the fifth sample image; and adjusting the model parameters of the fifth initial neural network model with the fifth loss function value so that the model converges, to obtain the fifth neural network model.
Optionally, the neural network processor includes a sixth neural network model, and the sharpening correction parameters are obtained by the sixth neural network model based on the image to be processed; the training of the sixth neural network model includes: acquiring a sixth sample image set, which includes at least one sixth sample image, the sixth sample image including a YUV image; inputting the sixth sample image into a sixth initial neural network model and obtaining at least one target processing area output by the model, each target processing area corresponding to a lightness weight parameter; correcting the lightness information of each target processing area with its corresponding lightness weight parameter to obtain corrected initial lightness values; determining, with a sixth preset loss function, a sixth loss function value between the initial lightness values and the expected lightness values of the target processing area; and adjusting the model parameters of the sixth initial neural network model with the sixth loss function value so that the model converges, to obtain the sixth neural network model.
It should be noted that, as those skilled in the art will clearly understand, for convenience and brevity of description, the specific working process of the apparatus described above may refer to the corresponding process in the foregoing method embodiments and is not repeated here.
Referring to FIG. 4, which is a schematic structural diagram of an electronic device for executing an image processing method provided by an embodiment of the present application. The electronic device may include: at least one processor 401, such as a CPU; at least one communication interface 402; at least one memory 403; and at least one communication bus 404. The communication bus 404 is used for direct connection and communication between these components. The communication interface 402 of the device in this embodiment of the present application is used for signaling or data communication with other node devices. The memory 403 may be a high-speed RAM or a non-volatile memory, such as at least one disk memory; optionally, it may also be at least one storage device located remotely from the aforementioned processor. The memory 403 stores computer-readable instructions which, when executed by the processor 401, allow the electronic device to perform the method process shown in FIG. 1.
It can be understood that the structure shown in FIG. 4 is only illustrative, and the electronic device may include more or fewer components than shown in FIG. 4 or have a configuration different from that shown in FIG. 4. Each component shown in FIG. 4 may be implemented by hardware, software, or a combination thereof.
An embodiment of the present application provides a readable storage medium on which a computer program is stored; when the computer program is executed by a processor, the method process performed by the electronic device in the method embodiment shown in FIG. 1 can be executed.
This embodiment discloses a computer program product, which includes a computer program stored on a non-transitory computer-readable storage medium; the computer program includes program instructions which, when executed by a computer, enable the computer to perform the methods provided by the above method embodiments. For example, the method may include: acquiring an image to be processed; performing demosaic processing on the image to be processed by using preset processing rules to obtain a demosaiced initial image; and correcting the initial image by using correction parameters output by a neural network processor for the image to be processed to obtain a target image.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. The apparatus embodiments described above are merely illustrative; for example, the division of units is only a logical function division, and there may be other division manners in actual implementation; for another example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed. Furthermore, the mutual coupling, direct coupling, or communication connection shown or discussed may be an indirect coupling or communication connection through some communication interfaces, apparatuses, or units, and may be electrical, mechanical, or in other forms.
Units described as separate components may or may not be physically separated, and components shown as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solutions of the embodiments.
Furthermore, the functional modules in the embodiments of the present application may be integrated together to form an independent part, each module may exist alone, or two or more modules may be integrated to form an independent part.
Herein, relational terms such as first and second are used only to distinguish one entity or operation from another and do not necessarily require or imply any such actual relationship or order between those entities or operations.
The above are merely embodiments of the present application and are not intended to limit its protection scope; various modifications and changes will occur to those skilled in the art. Any modification, equivalent replacement, improvement, and the like made within the spirit and principles of the present application shall fall within its protection scope.
Industrial Applicability
The present application provides an image processing method, apparatus, electronic device, and readable storage medium. A specific embodiment of the method includes: acquiring an image to be processed; performing demosaic processing on the image to be processed by using preset processing rules to obtain a demosaiced initial image; and correcting the initial image by using correction parameters output by a neural network processor for the image to be processed to obtain a target image. The method reduces the dependence on the NPU during image processing, thereby reducing processing energy consumption and processing cost.
In addition, it can be understood that the image processing method, apparatus, electronic device, and readable storage medium of the present application are reproducible and can be used in a variety of industrial applications; for example, they can be used in any field requiring image processing.

Claims (20)

  1. An image processing method, comprising:
    acquiring an image to be processed;
    performing demosaic processing on the image to be processed by using preset processing rules to obtain a demosaiced initial image; and
    correcting the initial image by using correction parameters output by a neural network processor for the image to be processed, to obtain a target image.
  2. The image processing method according to claim 1, wherein the correction parameters comprise a first correction parameter and a second correction parameter; and
    correcting the initial image by using the correction parameters output by the neural network processor for the image to be processed to obtain the target image comprises:
    correcting demosaicing parameters of the initial image by using the first correction parameter to obtain a corrected intermediate image; and
    correcting the intermediate image by using the second correction parameter to obtain the target image.
  3. The image processing method according to claim 2, wherein the first correction parameter comprises an interpolation weight parameter, and the demosaicing parameters comprise an interpolation parameter; and
    correcting the demosaicing parameters of the initial image by using the first correction parameter to obtain the corrected intermediate image comprises:
    correcting the interpolation parameter by using the interpolation weight parameter to obtain an interpolation result corresponding to each pixel of the initial image; and
    performing interpolation processing on each pixel according to its corresponding interpolation result to obtain the intermediate image.
  4. The image processing method according to claim 3, wherein the neural network processor comprises a first neural network model, and the interpolation weight parameter is obtained by the first neural network model based on the image to be processed; and
    training the first neural network model comprises:
    acquiring a first sample image set, the first sample image set comprising at least one first sample image, the first sample image comprising a RAW image;
    inputting the first sample image into a first initial neural network model, and obtaining initial interpolation weight parameters, output by the first initial neural network model, corresponding to each pixel of the first sample image;
    obtaining initial interpolation parameters corresponding to each pixel by using preset interpolation rules;
    determining an initial interpolation result corresponding to each pixel according to the initial interpolation weight parameters and the initial interpolation parameters, and performing interpolation processing on the first sample image by using the initial interpolation results to obtain a first initial target image;
    determining, by using a first preset loss function, a first loss function value between the first initial target image and an expected image of the first sample image; and
    adjusting model parameters corresponding to the first initial neural network model by using the first loss function value so that the first initial neural network model converges, to obtain the first neural network model.
  5. The image processing method according to any one of claims 2 to 4, wherein the second correction parameter comprises a residual correction parameter; and
    correcting the intermediate image by using the second correction parameter to obtain the target image comprises:
    correcting the intermediate image by using the residual correction parameter to eliminate a residual between the intermediate image and an expected target image, to obtain the target image.
  6. The image processing method according to claim 5, wherein the neural network processor comprises a second neural network model, and the residual correction parameter is obtained by the second neural network model based on the image to be processed; and
    training the second neural network model comprises:
    acquiring a second sample image set, the second sample image set comprising at least one second sample image, the second sample image comprising a RAW image;
    inputting the second sample image into a second initial neural network model, and obtaining an initial residual correction parameter output by the second initial neural network model;
    obtaining a demosaiced image corresponding to the second sample image by using preset processing rules;
    correcting, by using the initial residual correction parameter, a residual between the demosaiced image and an expected initial target image to obtain a second initial target image;
    determining, by using a second preset loss function, a second loss function value between the second initial target image and the expected initial target image; and
    adjusting model parameters corresponding to the second initial neural network model by using the second loss function value so that the second initial neural network model converges, to obtain the second neural network model.
  7. The image processing method according to claim 6, wherein before inputting the second sample image into the second initial neural network model, training the second neural network model further comprises:
    performing tone mapping on the second sample image to reduce the bit width of the second sample image.
  8. The image processing method according to any one of claims 2 to 7, wherein the second correction parameter comprises a false-color correction parameter; and
    correcting the intermediate image by using the second correction parameter to obtain the target image comprises:
    correcting a false-color area of the intermediate image by using the false-color correction parameter to obtain the target image.
  9. The image processing method according to claim 8, wherein the neural network processor comprises a third neural network model, and the false-color correction parameter is obtained by the third neural network model based on the image to be processed; and
    training the third neural network model comprises:
    acquiring a third sample image set, the third sample image set comprising at least one third sample image, the third sample image comprising a RAW image;
    inputting the third sample image into a third initial neural network model, and obtaining initial false-color correction parameters, output by the third initial neural network model, corresponding to each pixel of the third sample image;
    obtaining a demosaiced image corresponding to the third sample image by using the preset processing rules;
    correcting a false-color area corresponding to the demosaiced image by using the initial false-color correction parameters to obtain a corrected third initial target image;
    determining, by using a third preset loss function, a third loss function value between the third initial target image and an expected image of the third sample image; and
    adjusting model parameters corresponding to the third initial neural network model by using the third loss function value so that the third initial neural network model converges, to obtain the third neural network model.
  10. The image processing method according to claim 9, wherein the initial false-color correction parameter comprises an initial false-color weight parameter or an initial false-color compensation value; the initial false-color weight parameter is used to characterize a degree of chroma fading corresponding to each pixel, and the initial false-color compensation value is used to compensate the chroma corresponding to each pixel.
  11. The image processing method according to any one of claims 2 to 10, wherein the second correction parameter comprises a purple-fringing correction parameter; and
    correcting the intermediate image by using the second correction parameter to obtain the target image comprises:
    correcting a purple-fringed area of the intermediate image by using the purple-fringing correction parameter to obtain the target image.
  12. The image processing method according to claim 11, wherein the neural network processor comprises a fourth neural network model, and the purple-fringing correction parameter is obtained by the fourth neural network model based on the image to be processed; and
    training the fourth neural network model comprises:
    acquiring a fourth sample image set, the fourth sample image set comprising at least one fourth sample image, the fourth sample image comprising a demosaiced image;
    inputting the fourth sample image into a fourth initial neural network model, and obtaining an initial purple-fringing correction parameter, output by the fourth initial neural network model, corresponding to the fourth sample image;
    correcting a purple-fringed area corresponding to the demosaiced image by using the initial purple-fringing correction parameter to obtain a corrected fourth initial target image;
    determining, by using a fourth preset loss function, a fourth loss function value between the fourth initial target image and an expected image corresponding to the fourth sample image; and
    adjusting model parameters corresponding to the fourth initial neural network model by using the fourth loss function value so that the fourth initial neural network model converges, to obtain the fourth neural network model.
  13. The image processing method according to claim 12, wherein the initial purple-fringing correction parameter comprises an initial purple-fringing weight parameter or an initial purple-fringing compensation value; the initial purple-fringing weight parameter is used to characterize the degree to which color blotches in the purple-fringed area are removed, and the initial purple-fringing compensation value is used to compensate the saturation of each pixel in the purple-fringed area.
  14. The image processing method according to any one of claims 2 to 13, wherein the second correction parameter comprises a sharpening correction parameter; and
    correcting the intermediate image by using the second correction parameter to obtain the target image comprises:
    correcting a blurred area of the intermediate image by using the sharpening correction parameter to obtain the target image, the sharpening correction parameter comprising a lightness weight parameter for correcting lightness.
  15. The image processing method according to claim 14, wherein the neural network processor comprises a fifth neural network model, and the sharpening correction parameter is obtained by the fifth neural network model based on the image to be processed; and
    training the fifth neural network model comprises:
    acquiring a fifth sample image set, the fifth sample image set comprising at least one fifth sample image, the fifth sample image comprising a YUV image;
    inputting the fifth sample image into a fifth initial neural network model, and obtaining lightness weight parameters, output by the fifth initial neural network model, corresponding to each pixel of the fifth sample image;
    guiding sharpening of the fifth sample image by using the lightness weight parameters to obtain sharpened initial lightness values;
    determining, by using a fifth preset loss function, a fifth loss function value between the initial lightness values and expected lightness values of the fifth sample image; and
    adjusting model parameters corresponding to the fifth initial neural network model by using the fifth loss function value so that the fifth initial neural network model converges, to obtain the fifth neural network model.
  16. The image processing method according to claim 14 or 15, wherein the neural network processor comprises a sixth neural network model, and the sharpening correction parameter is obtained by the sixth neural network model based on the image to be processed; and
    training the sixth neural network model comprises:
    acquiring a sixth sample image set, the sixth sample image set comprising at least one sixth sample image, the sixth sample image comprising a YUV image;
    inputting the sixth sample image into a sixth initial neural network model, and obtaining at least one target processing area output by the sixth initial neural network model, each of the at least one target processing area corresponding to a lightness weight parameter;
    correcting lightness information of the target processing area by using the lightness weight parameter corresponding to the target processing area to obtain corrected initial lightness values;
    determining, by using a sixth preset loss function, a sixth loss function value between the initial lightness values and expected lightness values of the target processing area; and
    adjusting model parameters corresponding to the sixth initial neural network model by using the sixth loss function value so that the sixth initial neural network model converges, to obtain the sixth neural network model.
  17. An image processing apparatus, comprising:
    an acquisition module configured to acquire an image to be processed;
    a demosaic processing module configured to perform demosaic processing on the image to be processed by using preset processing rules to obtain a demosaiced initial image; and
    a correction module configured to correct the initial image by using correction parameters output by a neural network processor for the image to be processed, to obtain a target image.
  18. The image processing apparatus according to claim 17, wherein the correction parameters comprise a first correction parameter and a second correction parameter, and the correction module comprises a first correction module and a second correction module,
    wherein the first correction module is configured to correct demosaicing parameters of the initial image by using the first correction parameter to obtain a corrected intermediate image, and the second correction module is configured to correct the intermediate image by using the second correction parameter to obtain the target image.
  19. An electronic device, comprising a processor and a memory, the memory storing computer-readable instructions which, when executed by the processor, run the image processing method according to any one of claims 1 to 16.
  20. A readable storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, runs the image processing method according to any one of claims 1 to 16.
PCT/CN2021/139362 2021-07-28 2021-12-17 Image processing method, image processing apparatus, electronic device and readable storage medium WO2023005115A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110860833.5A CN113658043A (zh) 2021-07-28 2021-07-28 Image processing method and apparatus, electronic device and readable storage medium
CN202110860833.5 2021-07-28

Publications (1)

Publication Number Publication Date
WO2023005115A1 true WO2023005115A1 (zh) 2023-02-02

Family

ID=78490823

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/139362 WO2023005115A1 (zh) Image processing method, image processing apparatus, electronic device and readable storage medium

Country Status (2)

Country Link
CN (1) CN113658043A (zh)
WO (1) WO2023005115A1 (zh)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117274060A (zh) * 2023-10-18 2023-12-22 深圳深知未来智能有限公司 Unsupervised end-to-end demosaicing method and system
CN117392118A (zh) * 2023-12-07 2024-01-12 巴苏尼制造(江苏)有限公司 Textile dyeing and finishing abnormality detection method based on multi-feature fusion

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113658043A (zh) Image processing method and apparatus, electronic device and readable storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170185871A1 (en) * 2015-12-29 2017-06-29 Qiang Zhang Method and apparatus of neural network based image signal processor
WO2020215180A1 (zh) * 2019-04-22 2020-10-29 华为技术有限公司 Image processing method and apparatus, and electronic device
CN112700433A (zh) * 2021-01-11 2021-04-23 地平线(上海)人工智能技术有限公司 Image processing method and apparatus, electronic device, and computer-readable storage medium
CN113658043A (zh) * 2021-07-28 2021-11-16 上海智砹芯半导体科技有限公司 Image processing method and apparatus, electronic device and readable storage medium

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170185871A1 (en) * 2015-12-29 2017-06-29 Qiang Zhang Method and apparatus of neural network based image signal processor
WO2020215180A1 (zh) * 2019-04-22 2020-10-29 华为技术有限公司 Image processing method and apparatus, and electronic device
CN113168673A (zh) * 2019-04-22 2021-07-23 华为技术有限公司 Image processing method and apparatus, and electronic device
CN112700433A (zh) * 2021-01-11 2021-04-23 地平线(上海)人工智能技术有限公司 Image processing method and apparatus, electronic device, and computer-readable storage medium
CN113658043A (zh) * 2021-07-28 2021-11-16 上海智砹芯半导体科技有限公司 Image processing method and apparatus, electronic device and readable storage medium

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117274060A (zh) * 2023-10-18 2023-12-22 深圳深知未来智能有限公司 Unsupervised end-to-end demosaicing method and system
CN117392118A (zh) * 2023-12-07 2024-01-12 巴苏尼制造(江苏)有限公司 Textile dyeing and finishing abnormality detection method based on multi-feature fusion
CN117392118B (zh) * 2023-12-07 2024-02-06 巴苏尼制造(江苏)有限公司 Textile dyeing and finishing abnormality detection method based on multi-feature fusion

Also Published As

Publication number Publication date
CN113658043A (zh) 2021-11-16

Similar Documents

Publication Publication Date Title
WO2023005115A1 (zh) Image processing method, image processing apparatus, electronic device and readable storage medium
JP3399486B2 (ja) Color image processing apparatus and method
CN113454680A (zh) Image processor
CN109816608B (zh) Adaptive brightness enhancement method for low-illumination images based on noise suppression
CN113228094A (zh) Image processor
JP2013013060A (ja) Method for processing high dynamic range images using tone mapping to extended RGB space
JP2006203841A (ja) Image processing device, camera device, image output device, image processing method, color correction processing program, and readable recording medium
JP2003304549A (ja) Camera and image signal processing system
CN111107330A (zh) Color cast correction method in Lab space
JP7278096B2 (ja) Image processing apparatus, image processing method, and program
JP4375580B2 (ja) Image processing apparatus, image processing method, and image processing program
KR100700017B1 (ko) Color interpolation apparatus using adjustable threshold
JP5966603B2 (ja) Image processing apparatus, image processing method, image processing program, and recording medium
TWI531246B (zh) Color adjustment method and its system
JPH1117984A (ja) Image processing apparatus
WO2022027469A1 (zh) Image processing method and apparatus, and storage medium
KR101634652B1 (ko) Method and apparatus for enhancing image contrast
CN109600596B (zh) Nonlinear color-constancy white balance method
CN113132562A (zh) Lens shading correction method and apparatus, and electronic device
JP2005260675A (ja) Image processing apparatus and program
JP2002077647A (ja) Image processing device
JP2001078211A (ja) Color component generation apparatus, color component generation method, and multicolor image pickup apparatus using the same
CN112422940A (zh) Adaptive color correction method
CN112184588A (zh) Image enhancement system and method for fault detection
JPH09147098A (ja) Color image processing method and apparatus

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21951687

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE