WO2022267494A1 - Image data generation method and device - Google Patents

Image data generation method and device

Info

Publication number
WO2022267494A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
pixel
pixel value
value
acquisition device
Prior art date
Application number
PCT/CN2022/076725
Other languages
English (en)
French (fr)
Inventor
朱才志
汝佩哲
周晓
王林
Original Assignee
英特灵达信息技术(深圳)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 英特灵达信息技术(深圳)有限公司
Priority to US17/791,126 (published as US20240179421A1)
Publication of WO2022267494A1


Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/70Circuitry for compensating brightness variation in the scene
    • H04N23/76Circuitry for compensating brightness variation in the scene by influencing the image signals
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/60Image enhancement or restoration using machine learning, e.g. neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/70Denoising; Smoothing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/90Dynamic range modification of images or parts thereof
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/70Circuitry for compensating brightness variation in the scene
    • H04N23/71Circuitry for evaluating the brightness variation
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80Camera processing pipelines; Components thereof
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning

Definitions

  • the present application relates to the technical field of image processing, in particular to a method and device for generating image data.
  • image enhancement processing can be performed on the collected images based on the image enhancement network model to remove noise in the collected images.
  • training an image enhancement network model requires a large number of sample images.
  • the sample images under sufficient ambient light brightness and insufficient ambient light brightness are usually collected manually by technicians through the image acquisition device; the image data of the collected sample images are then used as training samples to train an image enhancement network model of a preset structure.
  • the purpose of the embodiments of the present application is to provide a method and device for generating image data, so as to improve the efficiency of generating image data.
  • the specific technical scheme is as follows:
  • the embodiment of the present application discloses a method for generating image data, the method including: obtaining a first pixel value of each pixel in a first image based on the original pixel values of the first image collected by the image acquisition device when the ambient light brightness is greater than a first preset brightness threshold; dividing each first pixel value by a preset multiple to obtain a second pixel value corresponding to each pixel, wherein the preset multiple is the ratio of the original pixel values of a second image, collected by the image acquisition device when the ambient light brightness is greater than the first preset brightness threshold, to those of a third image collected when the ambient light brightness is less than a second preset brightness threshold, and the second preset brightness threshold is less than the first preset brightness threshold; for each pixel, on the basis of the second pixel value corresponding to the pixel, adding Poisson noise based on the total system gain of the image acquisition device to obtain a third pixel value corresponding to the pixel, wherein the total system gain is determined based on the distribution of the original pixel values of a fourth image, containing a gray scale plate, collected by the image acquisition device; and generating, based on the third pixel values, original pixel values of a target image that corresponds to the first image and whose ambient light brightness is less than the second preset brightness threshold.
  • adding Poisson noise based on the total system gain of the image acquisition device to obtain the third pixel value corresponding to a pixel includes: for each pixel, dividing the second pixel value corresponding to the pixel by the total system gain of the image acquisition device to obtain a charge number corresponding to the pixel as a first charge number; adding Poisson noise on the basis of the first charge number corresponding to the pixel to obtain a second charge number corresponding to the pixel; and multiplying the second charge number corresponding to the pixel by the total system gain to obtain the third pixel value corresponding to the pixel.
  • the calculation process of the total system gain includes: acquiring a plurality of fourth images, containing a gray scale plate, collected by the image acquisition device; for each pixel, calculating the mean and variance of the original pixel values of the pixel in the fourth images; taking the mean as the abscissa and the variance as the ordinate, performing straight-line fitting on the mean and variance corresponding to each pixel to obtain a target straight line; and determining the slope of the target straight line as the total system gain.
  • generating, based on the third pixel values, the original pixel values of the target image that corresponds to the first image and whose ambient light brightness is less than the second preset brightness threshold includes: for each pixel, adding read-in noise and/or row noise on the basis of the corresponding third pixel value to obtain a fourth pixel value corresponding to the pixel; and obtaining, based on the fourth pixel values, the original pixel values of the target image that corresponds to the first image and whose ambient light brightness is less than the second preset brightness threshold.
  • the read-in noise conforms to a first Gaussian distribution, the mean of the first Gaussian distribution is 0, and its variance is the square root of the ordinate value of the target straight line when the abscissa is 0; the row noise conforms to a second Gaussian distribution, the mean of the second Gaussian distribution is 0, and its variance is the standard deviation of the row pixel values in a fifth image collected by the image acquisition device when the exposure time is 0 and the incident light intensity is 0; one row pixel value is the mean of the original pixel values of one row of pixels in the fifth image.
  • the method further includes: Using the original pixel value of the target image as input data and the first pixel value as corresponding output data, the image enhancement network model to be trained is trained until convergence to obtain an image enhancement network model.
  • the image enhancement network model includes an encoding part and a decoding part, and the input data of each network layer of the decoding part is obtained by superimposing the first feature map output by the previous network layer of that network layer and the second feature map output by the corresponding network layer in the encoding part.
  • obtaining the first pixel value of each pixel in the first image based on the original pixel values of the first image collected by the image acquisition device when the ambient light brightness is greater than the first preset brightness threshold includes: acquiring the pixel value of each pixel in the RAW data of the first image collected by the image acquisition device when the ambient light brightness is greater than the first preset brightness threshold as a pixel value to be processed; and subtracting the black level value of the image acquisition device from each pixel value to be processed to obtain the first pixel value of each pixel in the first image.
  • the embodiment of the present application discloses an image data generation device, the device including: a first pixel value acquisition module, configured to obtain a first pixel value of each pixel in a first image based on the original pixel values of the first image collected by the image acquisition device when the ambient light brightness is greater than a first preset brightness threshold; a second pixel value acquisition module, configured to divide each first pixel value by a preset multiple to obtain a second pixel value corresponding to each pixel, wherein the preset multiple is the ratio of the original pixel values of a second image, collected by the image acquisition device when the ambient light brightness is greater than the first preset brightness threshold, to those of a third image collected when the ambient light brightness is less than a second preset brightness threshold, and the second preset brightness threshold is less than the first preset brightness threshold; a third pixel value acquisition module, configured to, for each pixel, on the basis of the second pixel value corresponding to the pixel, add Poisson noise based on the total system gain of the image acquisition device to obtain a third pixel value corresponding to the pixel, wherein the total system gain is determined based on the distribution of the original pixel values of a fourth image, containing a gray scale plate, collected by the image acquisition device; and an image data generation module, configured to generate, based on the third pixel values, original pixel values of a target image that corresponds to the first image and whose ambient light brightness is less than the second preset brightness threshold.
  • the third pixel value acquisition module includes: a first charge number acquisition submodule, configured to, for each pixel, divide the second pixel value corresponding to the pixel by the total system gain of the image acquisition device to obtain a charge number corresponding to the pixel as a first charge number; a second charge number acquisition submodule, configured to add Poisson noise on the basis of the first charge number corresponding to the pixel to obtain a second charge number corresponding to the pixel; and a third pixel value acquisition submodule, configured to multiply the second charge number corresponding to the pixel by the total system gain to obtain the third pixel value corresponding to the pixel.
  • the device further includes: a fourth image acquisition module, configured to acquire a plurality of fourth images, containing a gray scale plate, captured by the image acquisition device; a calculation module, configured to calculate, for each pixel, the mean and variance of the original pixel values of the pixel in the fourth images; a straight-line fitting module, configured to take the mean as the abscissa and the variance as the ordinate and perform straight-line fitting on the mean and variance corresponding to each pixel to obtain a target straight line; and a total system gain determining module, configured to determine the slope of the target straight line as the total system gain.
  • the image data generation module includes: a fourth pixel value acquisition submodule, configured to add, for each pixel, read-in noise and/or row noise on the basis of the corresponding third pixel value to obtain a fourth pixel value corresponding to the pixel; and an image data generation submodule, configured to obtain, based on the fourth pixel values, the original pixel values of the target image that corresponds to the first image and whose ambient light brightness is less than the second preset brightness threshold.
  • the read-in noise conforms to a first Gaussian distribution, the mean of the first Gaussian distribution is 0, and its variance is the square root of the ordinate value of the target straight line when the abscissa is 0; the row noise conforms to a second Gaussian distribution, the mean of the second Gaussian distribution is 0, and its variance is the standard deviation of the row pixel values in a fifth image collected by the image acquisition device when the exposure time is 0 and the incident light intensity is 0; one row pixel value is the mean of the original pixel values of one row of pixels in the fifth image.
  • the device further includes: a training module, configured to, after the original pixel values of the target image that corresponds to the first image and whose ambient light brightness is less than the second preset brightness threshold are generated based on the third pixel values, use the original pixel values of the target image as input data and the first pixel values as the corresponding output data to train an image enhancement network model to be trained until convergence, so as to obtain an image enhancement network model.
  • the image enhancement network model includes an encoding part and a decoding part, and the input data of each network layer of the decoding part is obtained by superimposing the first feature map output by the previous network layer of that network layer and the second feature map output by the corresponding network layer in the encoding part.
  • the first pixel value acquisition module includes: a to-be-processed pixel value acquisition submodule, configured to acquire the pixel value of each pixel in the RAW data of the first image collected by the image acquisition device when the ambient light brightness is greater than the first preset brightness threshold as a pixel value to be processed; and a first pixel value acquisition submodule, configured to subtract the black level value of the image acquisition device from each pixel value to be processed to obtain the first pixel value of each pixel in the first image.
  • the embodiment of the present application also discloses an electronic device, the electronic device including a processor, a communication interface, a memory, and a communication bus, wherein the processor, the communication interface, and the memory communicate with one another through the communication bus; the memory is used to store a computer program; and the processor is used to execute the program stored in the memory to implement the above image data generation method.
  • the embodiment of the present application also discloses a computer-readable storage medium storing a computer program which, when executed by a processor, implements the image data generation method described above.
  • the embodiment of the present application also discloses a computer program product containing instructions, which, when run on a computer, causes the computer to execute any one of the image data generation methods described above.
  • the image data generation method can obtain the first pixel value of each pixel in the first image based on the original pixel values of the first image collected by the image acquisition device when the ambient light brightness is greater than the first preset brightness threshold; divide each first pixel value by a preset multiple to obtain a second pixel value corresponding to each pixel, wherein the preset multiple is the ratio of the original pixel values of the second image, collected by the image acquisition device when the ambient light brightness is greater than the first preset brightness threshold, to those of the third image collected when the ambient light brightness is less than the second preset brightness threshold, and the second preset brightness threshold is less than the first preset brightness threshold; for each pixel, on the basis of the second pixel value corresponding to the pixel, add Poisson noise based on the total system gain of the image acquisition device to obtain a third pixel value corresponding to the pixel, wherein the total system gain is determined based on the distribution of the original pixel values of the fourth image, containing a gray scale plate, collected by the image acquisition device; and, based on the third pixel values, generate the original pixel values of the target image that corresponds to the first image and whose ambient light brightness is less than the second preset brightness threshold.
  • the generated original pixel values of the target image are obtained by dividing the original pixel values of the first image by a preset multiple, that is, the original pixel values of the target image can represent an image captured when the ambient light brightness is low.
  • the original pixel values of the target image are obtained by adding Poisson noise, so that they can effectively simulate the noise in images collected by the image acquisition device. Therefore, the method for generating image data provided in the embodiment of the present application can automatically generate image data corresponding to an image under sufficient ambient light brightness and an image under insufficient ambient light brightness. Compared with the prior art, in which images under insufficient ambient light brightness are collected manually with an image acquisition device, this can improve the generation efficiency of image data.
  • any product or method of the present application does not necessarily need to achieve all the above-mentioned advantages at the same time.
  • FIG. 1 is a flow chart of a method for generating image data provided in an embodiment of the present application
  • FIG. 2 is a flow chart of another method for generating image data provided by an embodiment of the present application.
  • FIG. 3 is a flow chart of another method for generating image data provided by an embodiment of the present application.
  • Fig. 4 is a flow chart of a method for calculating the total gain of the system provided by the embodiment of the present application.
  • Fig. 5 is a schematic diagram of a gray scale plate provided by the embodiment of the present application.
  • FIG. 6 is a schematic diagram of a target straight line provided in the embodiment of the present application.
  • FIG. 7 is a flow chart of another method for generating image data provided by an embodiment of the present application.
  • FIG. 8 is a structural diagram of an image enhancement network model provided by an embodiment of the present application.
  • FIG. 9 is a schematic diagram of image enhancement processing based on an image enhancement network model provided by an embodiment of the present application.
  • FIG. 10 is a comparison diagram of the results of adding noise provided by the embodiment of the present application.
  • FIG. 11 is a comparison diagram of image processing results based on the image enhancement network model provided by the embodiment of the present application.
  • FIG. 12 is a comparison diagram of another image processing result based on the image enhancement network model provided by the embodiment of the present application.
  • FIG. 13 is a comparison diagram of another image processing result based on the image enhancement network model provided by the embodiment of the present application.
  • FIG. 14 is a comparison diagram of another image processing result based on the image enhancement network model provided by the embodiment of the present application.
  • FIG. 15 is a comparison diagram of another image processing result based on the image enhancement network model provided by the embodiment of the present application.
  • Fig. 16 is another comparison diagram of image processing results based on the image enhancement network model provided by the embodiment of the present application.
  • FIG. 17 is a comparison diagram of another image processing result based on the image enhancement network model provided by the embodiment of the present application.
  • FIG. 18 is a comparison diagram of another image processing result based on the image enhancement network model provided by the embodiment of the present application.
  • FIG. 19 is a comparison diagram of another image processing result based on the image enhancement network model provided by the embodiment of the present application.
  • FIG. 20 is a comparison diagram of another image processing result based on the image enhancement network model provided by the embodiment of the present application.
  • FIG. 21 is a comparison diagram of another image processing result based on the image enhancement network model provided by the embodiment of the present application.
  • FIG. 22 is a comparison diagram of another image processing result based on the image enhancement network model provided by the embodiment of the present application.
  • FIG. 23 is a structural diagram of an image data generation device provided by an embodiment of the present application.
  • FIG. 24 is a structural diagram of an electronic device provided by an embodiment of the present application.
  • the sample images under the conditions of sufficient ambient light brightness and insufficient ambient light brightness are manually collected through an image acquisition device, which will reduce the generation efficiency of image data.
  • an embodiment of the present application provides a method for generating image data.
  • the method can be applied to electronic equipment, and the electronic equipment can obtain the original pixel values of the first image collected by the image acquisition device when the ambient light brightness is greater than the first preset brightness threshold.
  • Furthermore, based on the image data generation method provided by the embodiment of the present application, the electronic equipment can generate the original pixel values of the target image that corresponds to the first image and whose ambient light brightness is less than the second preset brightness threshold, that is, the electronic equipment can directly generate image data corresponding to images under insufficient ambient light brightness.
  • FIG. 1 is a flow chart of a method for generating image data provided in an embodiment of the present application.
  • the method may include the following steps:
  • S101 Obtain a first pixel value of each pixel in the first image based on the original pixel value of the first image captured by the image acquisition device when the ambient light brightness is greater than a first preset brightness threshold.
  • S102 Divide each first pixel value by a preset multiple to obtain a second pixel value corresponding to each pixel.
  • the preset multiple is: the ratio of the original pixel values of the second image, collected by the image acquisition device when the ambient light brightness is greater than the first preset brightness threshold, to those of the third image collected when the ambient light brightness is less than the second preset brightness threshold.
  • the second preset brightness threshold is smaller than the first preset brightness threshold.
  • S103 For each pixel, on the basis of the second pixel value corresponding to the pixel, add Poisson noise based on the total system gain of the image acquisition device to obtain a third pixel value corresponding to the pixel.
  • the total system gain is determined based on the distribution of the original pixel values of the fourth image, containing a gray scale plate, collected by the image acquisition device.
  • S104 Based on each third pixel value, generate an original pixel value of a target image corresponding to the first image and whose ambient light brightness is less than a second preset brightness threshold.
  • the original pixel value of the generated target image is obtained by dividing the original pixel value of the first image by a preset multiple, that is, the original pixel value of the target image can represent the environment Image at low brightness.
  • the original pixel values of the target image are obtained by adding Poisson noise, so that they can effectively simulate the noise in images collected by the image acquisition device. Therefore, the method for generating image data provided in the embodiment of the present application can automatically generate image data corresponding to an image under sufficient ambient light brightness and an image under insufficient ambient light brightness. Compared with the prior art, in which images under insufficient ambient light brightness are collected manually with an image acquisition device, this can improve the generation efficiency of image data.
  • the first preset brightness threshold may be set by a technician based on experience, for example, the first preset brightness threshold may be 25 lux, or may also be 30 lux.
  • the ambient light brightness is greater than the first preset brightness threshold, indicating that the current ambient light brightness is sufficient.
  • the original pixel value of the image may be: the pixel value recorded in the RAW data of the image collected by the image collection device. That is, the CMOS (Complementary Metal Oxide Semiconductor, Complementary Metal Oxide Semiconductor) or CCD (Charge Coupled Device, Charge Coupled Device) image sensor of the image acquisition device converts the captured light source signal into the original data of the digital signal.
  • the original pixel value of the first image captured by the image capture device may be directly used as the first pixel value of each pixel in the first image.
  • step S101 may include the following steps:
  • S1011 Obtain the pixel value of each pixel in the RAW data of the first image collected by the image acquisition device when the brightness of the ambient light is greater than the first preset brightness threshold, as the pixel value to be processed.
  • S1012 Subtract the black level value of the image acquisition device from each pixel value to be processed to obtain a first pixel value of each pixel in the first image.
  • When the image acquisition device converts the light source signal into raw digital data, an offset value is usually added; this offset value is the black level value of the image acquisition device. Therefore, in order to improve the accuracy of the acquired first pixel value, the black level value may be subtracted from each pixel value to be processed. In addition, if a pixel value to be processed minus the black level value is negative, the corresponding first pixel value may be determined to be 0.
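  • As a non-authoritative illustration of this black-level step (not code from the patent; the function name and the example black level of 64 are assumptions), a minimal NumPy sketch is shown below.

```python
import numpy as np

def first_pixel_values(raw_bright: np.ndarray, black_level: float = 64.0) -> np.ndarray:
    """First pixel values: RAW pixel values of the bright (first) image minus the
    black level of the image acquisition device, with negative results set to 0."""
    values = raw_bright.astype(np.float64) - black_level
    return np.clip(values, 0.0, None)
```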
  • the second preset brightness threshold may be set by a technician based on experience, for example, the second preset brightness threshold may be 0.1 lux, or may also be 0.2 lux. If the ambient light brightness is less than the second preset brightness threshold, it means that the current ambient light brightness is insufficient.
  • the image acquisition device may be used in advance to collect multiple images when the ambient light brightness is greater than the first preset brightness threshold to obtain second images, and to collect multiple images when the ambient light brightness is less than the second preset brightness threshold to obtain third images.
  • the average value of the original pixel values of the multiple second images (which may be referred to as the first average value) and the average value of the original pixel values of the multiple third images (which may be referred to as the second average value) may be calculated. Furthermore, the ratio of the first average value to the second average value may be calculated as a preset multiple.
  • the second pixel value is obtained by dividing the first pixel value by a preset multiple, so that the second pixel value can reflect the image when the ambient light brightness is less than the second preset brightness threshold, that is, it can reflect the image collected when the ambient light brightness is insufficient. image.
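  • A hedged sketch of this step is shown below: the preset multiple is estimated as the ratio of the mean original pixel value of the bright (second) images to that of the dark (third) images, and the first pixel values are then divided by it; the function and variable names are assumptions.

```python
import numpy as np

def preset_multiple(second_images, third_images) -> float:
    """Ratio of the average original pixel value of the bright (second) images
    to that of the dark (third) images."""
    first_average = np.mean([img.mean() for img in second_images])
    second_average = np.mean([img.mean() for img in third_images])
    return float(first_average / second_average)

def second_pixel_values(first_values: np.ndarray, multiple: float) -> np.ndarray:
    """Divide each first pixel value by the preset multiple to mimic an image
    captured when the ambient light brightness is insufficient."""
    return first_values / multiple
```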
  • the image acquisition device can calculate the product of the photoelectrically converted charge corresponding to each pixel and the total system gain as the original pixel value of that pixel.
  • Poisson noise can be added based on the number of charges corresponding to each pixel.
  • step S103 may include the following steps:
  • S1031 For each pixel, divide the second pixel value corresponding to the pixel by the total system gain of the image acquisition device to obtain the number of charges corresponding to the pixel as the first charge number.
  • S1032 On the basis of the first charge number corresponding to the pixel, add Poisson noise to obtain the second charge number corresponding to the pixel.
  • S1033 Multiply the second charge number corresponding to the pixel by the total system gain to obtain a third pixel value corresponding to the pixel.
  • a Poisson distribution whose mean and variance are both equal to the first charge number corresponding to the pixel can be determined.
  • a value conforming to the Poisson distribution can be generated as the second charge number, that is, the charge number corresponding to the pixel point after adding Poisson noise.
  • a plurality of values conforming to the Poisson distribution may be randomly generated, and one of the values may be selected as the second charge number.
  • a value conforming to the Poisson distribution may also be randomly generated as the second charge number.
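  • The sketch below illustrates the charge-domain procedure described above (divide by the total system gain, resample from a Poisson distribution whose mean and variance equal the first charge number, multiply back); it is an assumed implementation, not code from the patent.

```python
import numpy as np

def add_poisson_noise(second_values: np.ndarray, system_gain: float, rng=None) -> np.ndarray:
    """Return third pixel values obtained by adding Poisson (shot) noise in the
    charge domain, based on the total system gain of the image acquisition device."""
    if rng is None:
        rng = np.random.default_rng()
    first_charges = second_values / system_gain      # first charge numbers
    second_charges = rng.poisson(first_charges)      # Poisson noise: mean = variance = first charge number
    return second_charges * system_gain              # third pixel values
```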
  • FIG. 4 is a flow chart of a method for calculating the total gain of the system provided in an embodiment of the present application. The method may include the following steps:
  • S401 Acquire a plurality of fourth images, containing a gray scale plate, collected by the image acquisition device.
  • S402 For each pixel, calculate the mean and variance of the original pixel values of the pixel in the fourth images.
  • S403 Taking the mean as the abscissa and the variance as the ordinate, perform straight-line fitting on the mean and variance corresponding to each pixel to obtain a target straight line.
  • S404 Determine the slope of the target straight line as the total system gain.
  • the surface of the grayscale board is a grayscale image with gradient pixel values, for example, see FIG. 5 .
  • multiple (for example, 100) images (ie, the fourth image) including the gray scale plate may be captured by an image acquisition device in advance under preset ambient light levels.
  • the whitest position of the gray scale plate is not overexposed.
  • the original pixel values of each fourth image may be obtained, and then, for each pixel, the original pixel values of the pixel in each fourth image are counted, and the corresponding mean value and variance are determined. That is to say, for each pixel point, the mean value and variance of its corresponding original pixel value can be determined.
  • The target straight line can be expressed as formula (1): Var(x) = K · E(x) + σ_read²  (1)
  • where E(x) represents the mean of the pixel values corresponding to a pixel, Var(x) represents the variance of the pixel values corresponding to the pixel, K represents the slope of the target line, and σ_read² represents the value of the ordinate (i.e., the variance) of the target line when the abscissa (i.e., the mean) is 0.
  • FIG. 6 is a schematic diagram of a target straight line provided by an embodiment of the present application.
  • the abscissa represents the mean value of the pixel value corresponding to the pixel point
  • the ordinate represents the variance of the pixel value corresponding to the pixel point.
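  • As a sketch of this mean–variance (photon transfer) fit, assuming the fourth images are stacked into one array of shape (num_frames, H, W) and that an ordinary least-squares line is acceptable, the slope estimates the total system gain and the intercept corresponds to the read-noise term of formula (1).

```python
import numpy as np

def estimate_system_gain(fourth_images: np.ndarray):
    """Fit Var(x) = K * E(x) + sigma_read^2 over all pixels of a stack of fourth
    images of the gray scale plate and return (K, sigma_read_squared)."""
    mean = fourth_images.mean(axis=0).ravel()        # E(x): per-pixel mean (abscissa)
    var = fourth_images.var(axis=0).ravel()          # Var(x): per-pixel variance (ordinate)
    K, sigma_read_sq = np.polyfit(mean, var, deg=1)  # slope and intercept of the target line
    return float(K), float(sigma_read_sq)
```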
  • step S104 may include the following steps:
  • Step 1 For each pixel, on the basis of the corresponding third pixel value, add read-in noise and/or row noise to obtain a fourth pixel value corresponding to each pixel.
  • Step 2 Based on each of the fourth pixel values, the original pixel values of the target image corresponding to the first image and whose ambient light brightness is less than the second preset brightness threshold are obtained.
  • read-in noise and/or row noise may be added to obtain a fourth pixel value.
  • read-in noise means image noise caused by dark current noise, thermal noise, source follower noise, and the like.
  • the corresponding read-in noise conforms to a Gaussian distribution (ie, the first Gaussian distribution hereinafter).
  • m ⁇ n values conforming to the first Gaussian distribution may be generated, m represents the width of the first image, n represents the height of the first image, and the generated m ⁇ n values correspond to each pixel respectively. Furthermore, the generated m ⁇ n values may be respectively added to the third pixel value of the corresponding pixel point to obtain the fourth pixel value.
  • Row noise refers to the noise generated when the CMOS image sensor reads out data in units of row pixels.
  • the row noise corresponding to each row of pixel points is the same, and the row noise corresponding to each row of pixel points conforms to a Gaussian distribution (ie, the second Gaussian distribution hereinafter).
  • n values conforming to the second Gaussian distribution may be generated, where n represents the height of the first image, and the generated n values correspond to the pixel points of each row respectively.
  • the third pixel value corresponding to the row of pixels may be added to the generated corresponding numerical value to obtain a fourth pixel value.
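  • A minimal sketch of adding both noises is given below; sigma_read and sigma_row here are the standard deviations of the first and second Gaussian distributions described next (the text states the distribution parameters as variances, so convert accordingly), and the function name is an assumption.

```python
import numpy as np

def add_read_and_row_noise(third_values: np.ndarray, sigma_read: float,
                           sigma_row: float, rng=None) -> np.ndarray:
    """Add per-pixel read-in noise and per-row row noise to an n x m image of
    third pixel values, yielding fourth pixel values."""
    if rng is None:
        rng = np.random.default_rng()
    n, m = third_values.shape                                # n rows (height), m columns (width)
    read_noise = rng.normal(0.0, sigma_read, size=(n, m))    # m*n values, one per pixel
    row_noise = rng.normal(0.0, sigma_row, size=(n, 1))      # n values, one per row, broadcast across the row
    return third_values + read_noise + row_noise
```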
  • In the embodiment of the present application, the above-mentioned read-in noise conforms to the first Gaussian distribution; the mean of the first Gaussian distribution is 0, and its variance is the square root of the ordinate value of the target line when the abscissa is 0, that is, σ_read in the above formula (1).
  • the row noise conforms to the second Gaussian distribution, the mean value of the second Gaussian distribution is 0, and the variance is the standard deviation of the pixel values of each row in the fifth image collected by the image acquisition device when the exposure time is 0 and the incident light intensity is 0.
  • a row of pixel values is an average value of original pixel values of a row of pixel points in the fifth image.
  • the original pixel value of the image (that is, the fifth image) collected by the image acquisition device when the exposure time is 0 and the incident light intensity is 0 may be acquired.
  • the fifth image is the Bias Frame (offset frame) collected by the image acquisition device.
  • the lens of the image acquisition device can be covered, and the exposure time can be set to 0 to perform image acquisition, and the acquired image is the fifth image.
  • an average value of the original pixel values of the row of pixel points in the fifth image may be calculated.
  • the standard deviation of each mean value corresponding to each row of pixel points can be calculated, and the standard deviation can be used as the variance of the second Gaussian distribution.
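  • A sketch of this estimate from a single bias frame (fifth image) follows; averaging over several bias frames would be a natural extension, but is an assumption.

```python
import numpy as np

def row_noise_parameter(fifth_image: np.ndarray) -> float:
    """Average the original pixel values of each row of the fifth image (bias frame),
    then return the standard deviation of these row means."""
    row_means = fifth_image.mean(axis=1)   # one row pixel value per row
    return float(row_means.std())
```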
  • the fourth pixel values can be directly used as the original pixel values of the target image that corresponds to the first image and whose ambient light brightness is less than the second preset brightness threshold.
  • quantization noise may also be added on the basis of the fourth pixel value.
  • Quantization noise can be determined based on the number of bits of raw pixel values of an image captured by an image capture device.
  • the quantization noise can be 1/(2^L − 1), where L represents the number of bits of the original pixel value of the image captured by the image acquisition device.
  • For example, if the original pixel value of the image captured by the image acquisition device has 12 bits, the range of the pixel value of each pixel is 0 to 2^L − 1 (i.e., 4095).
  • Correspondingly, the quantization noise can be 1/4095, that is, 1/4095 is added to each fourth pixel value to obtain the original pixel values of the target image that corresponds to the first image and whose ambient light brightness is less than the second preset brightness threshold.
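  • Read literally, the quantization term is the constant 1/(2^L − 1) added to every fourth pixel value, as in the sketch below; a fuller quantization model (e.g., rounding to the nearest representable level) is not described in the text and is therefore not assumed here.

```python
def add_quantization_noise(fourth_values, bit_depth: int = 12):
    """Add the quantization term 1 / (2**L - 1) to each fourth pixel value,
    where L is the bit depth of the RAW pixel values (12 bits -> 1/4095)."""
    return fourth_values + 1.0 / (2 ** bit_depth - 1)
```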
  • the image enhancement network model may also be trained based on the original pixel values of the first image and the original pixel values of the target image.
  • the method may further include the following steps:
  • the image enhancement network model can learn the conversion relationship from the target image to the first image.
  • the image enhancement network model can remove the noise and obtain the original pixel values of a clear image, that is, image data corresponding to an image under sufficient ambient light can be obtained.
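  • The following PyTorch-style training sketch pairs the synthesized target-image RAW data (input) with the first pixel values (output), as described above; the optimizer, learning rate, L1 loss, and data-loader format are assumptions not specified by the text.

```python
import torch
import torch.nn as nn

def train_enhancement_model(model: nn.Module, loader, epochs: int = 100, lr: float = 1e-4):
    """Train the image enhancement network to be trained until (approximate) convergence.
    Each batch pairs packed 4-channel dark RAW inputs with the corresponding bright RAW labels."""
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    criterion = nn.L1Loss()   # assumed loss function
    model.train()
    for _ in range(epochs):
        for dark_raw, bright_raw in loader:   # tensors of shape (N, 4, H/2, W/2)
            optimizer.zero_grad()
            loss = criterion(model(dark_raw), bright_raw)
            loss.backward()
            optimizer.step()
    return model
```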
  • the above image enhancement network model may be U-net.
  • the traditional U-net can also be improved to obtain the above image enhancement network model.
  • the image enhancement network model includes an encoding part and a decoding part, and the input data of each network layer of the decoding part is obtained by superimposing the first feature map output by the previous network layer of that network layer and the second feature map output by the corresponding network layer in the encoding part.
  • FIG. 8 is a structural diagram of an image enhancement network model provided by an embodiment of the present application.
  • the image enhancement network model in Figure 8 can be divided into the encoding part on the left and the decoding part on the right.
  • the encoding part includes a plurality of network layers, and the decoding part also includes a plurality of network layers, and the number of network layers included in the two is the same. In the embodiment of the present application, it is only described by taking both of them including 5 network layers as an example, but it is not limited thereto.
  • the numbers in Figure 8 indicate the number of channels
  • the solid arrows to the right in the encoding part and the decoding part indicate convolution (Convolution), and the size of the convolution kernel is 3 ⁇ 3
  • the downward solid arrows indicate the maximum pooling (Max Pool) , the size of the pooling window is 2 ⁇ 2
  • the upward solid arrow indicates transposed convolution (Transposed Convolution), the size of the convolution kernel is 2 ⁇ 2, and the transposed convolution is also upsampling processing
  • the hollow arrow pointing to the right indicates convolution, and the size of the convolution kernel is 1 × 1.
  • Arrows between the encoding part and the decoding part indicate superposition processing.
  • For the encoding part and the decoding part, from top to bottom, the network layers can be called the first network layer, the second network layer, the third network layer, the fourth network layer, and the fifth network layer, respectively.
  • the input data is 4-channel data
  • the output is also 4-channel data
  • the data of each channel can be called a feature map.
  • the input data of the first network layer of the encoding part in Figure 8 is a 4-channel feature map, and after being processed by this network layer, a 32-channel feature map can be obtained; after being processed by the second network layer of the encoding part, A feature map of 64 channels can be obtained; after being processed by the third network layer of the encoding part, a feature map of 128 channels can be obtained; after being processed by the fourth network layer of the encoding part, a feature map of 256 channels can be obtained; after encoding After part of the fifth network layer is processed, a feature map of 512 channels can be obtained.
  • the input data of the fifth network layer in the decoding part is a feature map of 512 channels. After being processed by this network layer, a feature map of 512 channels can be obtained. After upsampling the feature map of the 512 channels, a feature map of 256 channels can be obtained, and superimposed with the feature map of 256 channels obtained by the fourth network layer of the encoding part, and the superposition result is still a feature map of 256 channels . Then, the superimposed 256-channel feature map can be input to the fourth network layer of the decoding part to obtain a 256-channel feature map.
  • the 256-channel feature map output by the fourth network layer of the decoding part is up-sampled to obtain a 128-channel feature map, which is superimposed with the 128-channel feature map obtained by the third network layer of the encoding part, and the superposition result It is still a 128-channel feature map and is input to the third network layer of the decoding part.
  • the input data of the first network layer of the decoding part is: the superposition of the 32-channel feature map obtained by upsampling the 64-channel feature map output by the second network layer of the decoding part and the 32-channel feature map output by the first network layer of the encoding part.
  • the first network layer in the decoding part outputs a 4-channel feature map.
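  • The PyTorch sketch below reconstructs one plausible reading of this architecture: channel counts 32–512, 3 × 3 convolutions, 2 × 2 max pooling, 2 × 2 transposed convolutions for upsampling, additive superposition of encoder and decoder feature maps (so two 256-channel maps remain 256 channels), and a final 1 × 1 convolution back to 4 channels. The exact composition of each network layer in FIG. 8 is not fully specified, so the two-convolution blocks and ReLU activations are assumptions.

```python
import torch
import torch.nn as nn

class ConvBlock(nn.Module):
    """Two 3x3 convolutions with ReLU (solid right arrows in FIG. 8); block depth is assumed."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.body(x)

class EnhanceUNet(nn.Module):
    """Five-level encoder/decoder with additive skip connections, so that the
    superposition of two 256-channel feature maps is still a 256-channel map."""
    def __init__(self, channels=(32, 64, 128, 256, 512)):
        super().__init__()
        c1, c2, c3, c4, c5 = channels
        self.enc1, self.enc2 = ConvBlock(4, c1), ConvBlock(c1, c2)
        self.enc3, self.enc4, self.enc5 = ConvBlock(c2, c3), ConvBlock(c3, c4), ConvBlock(c4, c5)
        self.pool = nn.MaxPool2d(2)                       # 2x2 max pooling
        self.up5 = nn.ConvTranspose2d(c5, c4, 2, stride=2)  # 2x2 transposed convolution (upsampling)
        self.up4 = nn.ConvTranspose2d(c4, c3, 2, stride=2)
        self.up3 = nn.ConvTranspose2d(c3, c2, 2, stride=2)
        self.up2 = nn.ConvTranspose2d(c2, c1, 2, stride=2)
        self.dec4, self.dec3 = ConvBlock(c4, c4), ConvBlock(c3, c3)
        self.dec2, self.dec1 = ConvBlock(c2, c2), ConvBlock(c1, c1)
        self.head = nn.Conv2d(c1, 4, 1)                   # hollow arrow: 1x1 convolution to 4 channels

    def forward(self, x):                                 # x: (N, 4, H/2, W/2) packed RAW
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        e3 = self.enc3(self.pool(e2))
        e4 = self.enc4(self.pool(e3))
        e5 = self.enc5(self.pool(e4))
        d4 = self.dec4(self.up5(e5) + e4)                 # superimpose (add) encoder feature maps
        d3 = self.dec3(self.up4(d4) + e3)
        d2 = self.dec2(self.up3(d3) + e2)
        d1 = self.dec1(self.up2(d2) + e1)
        return self.head(d1)
```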
  • the original pixel values of the target image can be split (unpacked) into a 4-channel feature map, that is, the feature maps of the RGGB (red, green, green, blue) channels, as the input of the image enhancement network model shown in FIG. 8
  • the first pixel value is also split into 4-channel feature maps as the corresponding output data, and the image enhancement network model shown in Figure 8 is trained.
  • the original pixel value of the image to be processed can be split into a 4-channel feature map, which is input to the trained image enhancement network model. Furthermore, the 4-channel feature map output by the image enhancement network model can be obtained, and the output 4-channel feature map is packed (Packed) to obtain the enhanced original pixel value.
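  • A sketch of the unpack/pack step is shown below, assuming an RGGB Bayer pattern with even image dimensions; the channel order is an assumption consistent with the RGGB description above.

```python
import numpy as np

def unpack_rggb(raw: np.ndarray) -> np.ndarray:
    """Split an H x W Bayer RAW image (assumed RGGB) into a 4 x H/2 x W/2 feature map."""
    return np.stack([raw[0::2, 0::2],    # R
                     raw[0::2, 1::2],    # G on red rows
                     raw[1::2, 0::2],    # G on blue rows
                     raw[1::2, 1::2]])   # B

def pack_rggb(planes: np.ndarray) -> np.ndarray:
    """Inverse of unpack_rggb: interleave the 4 planes back into an H x W Bayer RAW image."""
    _, h, w = planes.shape
    raw = np.empty((2 * h, 2 * w), dtype=planes.dtype)
    raw[0::2, 0::2], raw[0::2, 1::2] = planes[0], planes[1]
    raw[1::2, 0::2], raw[1::2, 1::2] = planes[2], planes[3]
    return raw
```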
  • FIG. 9 is a schematic diagram of image enhancement processing based on an image enhancement network model provided by an embodiment of the present application.
  • the leftmost is the initial Bayer (Bayer) RAW, which represents the initial image RAW data.
  • a 4-channel feature map is obtained as the input of the image enhancement network model.
  • the output of the image enhancement network model is a 4-channel feature map, and then, the output 4-channel feature map is packaged to obtain enhanced Bayer RAW, that is, enhanced image Raw data.
  • Here, the original pixel value is the pixel value recorded in the RAW data, and the preset ISP refers to a preset image processing pipeline.
  • the figure on the left shows the image generated by processing the original pixel values of the Bias Frame collected by the image acquisition device with the preset ISP; the figure in the middle shows the image generated by the preset ISP from pixel values obtained, based on the image data generation method of the embodiment of the present application, by adding Poisson noise, read-in noise, and row noise to the original pixel values on the basis of the black level value; and the figure on the right shows the image generated by the preset ISP from pixel values obtained, based on the related art, by adding Poisson noise, read-in noise, and row noise to the original pixel values on the basis of the black level value.
  • the total gain of the system when Poisson noise is added, the added read-in noise, and the row noise are all empirical values.
  • the image obtained based on the image data generation method provided by the embodiment of the present application is more similar to the image actually collected by the image acquisition device; that is to say, the image data generated based on the image data generation method of the embodiment of the present application can more realistically simulate the image actually collected by the image acquisition device.
  • each set of figures contains 3 images.
  • the image on the left is an image containing noise collected when the ambient light is insufficient;
  • the image in the middle is the image obtained by enhancing the image on the left based on the image enhancement network model provided by the embodiment of the present application;
  • the image on the right is an image obtained by enhancing the image on the left based on the image enhancement network model in the related art.
  • Fig. 23 is a structural diagram of an image data generation device provided by the embodiment of the present application. The device may include: a first pixel value acquisition module 2301, configured to obtain the first pixel value of each pixel in the first image based on the original pixel values of the first image collected by the image acquisition device when the ambient light brightness is greater than the first preset brightness threshold;
  • a second pixel value acquisition module 2302, configured to divide each first pixel value by a preset multiple to obtain a second pixel value corresponding to each pixel, wherein the preset multiple is the ratio of the original pixel values of the second image, collected by the image acquisition device when the ambient light brightness is greater than the first preset brightness threshold, to those of the third image collected when the ambient light brightness is less than the second preset brightness threshold, and the second preset brightness threshold is less than the first preset brightness threshold; a third pixel value acquisition module 2303, configured to, for each pixel, on the basis of the second pixel value corresponding to the pixel, add Poisson noise based on the total system gain of the image acquisition device to obtain a third pixel value corresponding to the pixel, wherein the total system gain is determined based on the distribution of the original pixel values of the fourth image, containing a gray scale plate, collected by the image acquisition device; and an image data generation module 2304, configured to generate, based on the third pixel values, the original pixel values of the target image that corresponds to the first image and whose ambient light brightness is less than the second preset brightness threshold.
  • the third pixel value acquisition module 2303 includes: a first charge number acquisition submodule, configured to, for each pixel, divide the second pixel value corresponding to the pixel by the total system gain of the image acquisition device to obtain a charge number corresponding to the pixel as a first charge number; a second charge number acquisition submodule, configured to add Poisson noise on the basis of the first charge number corresponding to the pixel to obtain a second charge number corresponding to the pixel; and a third pixel value acquisition submodule, configured to multiply the second charge number corresponding to the pixel by the total system gain to obtain the third pixel value corresponding to the pixel.
  • the device further includes: a fourth image acquisition module, configured to acquire a plurality of fourth images, containing a gray scale plate, captured by the image acquisition device; a calculation module, configured to calculate, for each pixel, the mean and variance of the original pixel values of the pixel in the fourth images; a straight-line fitting module, configured to take the mean as the abscissa and the variance as the ordinate and perform straight-line fitting on the mean and variance corresponding to each pixel to obtain a target straight line; and a total system gain determining module, configured to determine the slope of the target straight line as the total system gain.
  • the image data generation module 2304 includes: a fourth pixel value acquisition submodule, configured to add, for each pixel, read-in noise and/or row noise on the basis of the corresponding third pixel value to obtain a fourth pixel value corresponding to the pixel; and an image data generation submodule, configured to obtain, based on the fourth pixel values, the original pixel values of the target image that corresponds to the first image and whose ambient light brightness is less than the second preset brightness threshold.
  • the read-in noise conforms to a first Gaussian distribution, the mean of the first Gaussian distribution is 0, and its variance is the square root of the ordinate value of the target straight line when the abscissa is 0; the row noise conforms to a second Gaussian distribution, the mean of the second Gaussian distribution is 0, and its variance is the standard deviation of the row pixel values in a fifth image collected by the image acquisition device when the exposure time is 0 and the incident light intensity is 0; one row pixel value is the mean of the original pixel values of one row of pixels in the fifth image.
  • the device further includes: a training module, configured to, after the original pixel values of the target image that corresponds to the first image and whose ambient light brightness is less than the second preset brightness threshold are generated based on the third pixel values, use the original pixel values of the target image as input data and the first pixel values as the corresponding output data to train an image enhancement network model to be trained until convergence, so as to obtain an image enhancement network model.
  • the image enhancement network model includes an encoding part and a decoding part, and the input data of each network layer of the decoding part is obtained by superimposing the first feature map output by the previous network layer of that network layer and the second feature map output by the corresponding network layer in the encoding part.
  • the first pixel value acquisition module 2301 includes: a to-be-processed pixel value acquisition submodule, configured to acquire the pixel value of each pixel in the RAW data of the first image collected by the image acquisition device when the ambient light brightness is greater than the first preset brightness threshold as a pixel value to be processed; and a first pixel value acquisition submodule, configured to subtract the black level value of the image acquisition device from each pixel value to be processed to obtain the first pixel value of each pixel in the first image.
  • the embodiment of the present application also provides an electronic device, as shown in FIG. 24 , including a processor 2401, a communication interface 2402, a memory 2403, and a communication bus 2404.
  • the memory 2403 is used to store computer programs;
  • the processor 2401 is used to execute the program stored in the memory 2403 to implement the following steps: obtaining the first pixel value of each pixel in the first image based on the original pixel values of the first image collected by the image acquisition device when the ambient light brightness is greater than the first preset brightness threshold; dividing each first pixel value by a preset multiple to obtain a second pixel value corresponding to each pixel, wherein the preset multiple is the ratio of the original pixel values of the second image, collected by the image acquisition device when the ambient light brightness is greater than the first preset brightness threshold, to those of the third image collected when the ambient light brightness is less than the second preset brightness threshold, and the second preset brightness threshold is less than the first preset brightness threshold; for each pixel, on the basis of the second pixel value corresponding to the pixel, adding Poisson noise based on the total system gain of the image acquisition device to obtain a third pixel value corresponding to the pixel; and generating, based on the third pixel values, the original pixel values of the target image that corresponds to the first image and whose ambient light brightness is less than the second preset brightness threshold.
  • the communication bus mentioned above for the electronic device may be a Peripheral Component Interconnect (PCI) bus or an Extended Industry Standard Architecture (EISA) bus or the like.
  • the communication bus can be divided into an address bus, a data bus, a control bus, and the like. For ease of representation, only one thick line is used in the figure, but it does not mean that there is only one bus or one type of bus.
  • the communication interface is used for communication between the electronic device and other devices.
  • the memory may include a random access memory (Random Access Memory, RAM), and may also include a non-volatile memory (Non-Volatile Memory, NVM), such as at least one disk memory.
  • the memory may also be at least one storage device located far away from the aforementioned processor.
  • the above-mentioned processor can be a general-purpose processor, including a central processing unit (CPU), a network processor (NP), etc.; it can also be a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
  • a computer-readable storage medium is also provided.
  • a computer program is stored in the computer-readable storage medium.
  • when the computer program is executed by a processor, the steps of any of the above-mentioned image data generation methods are implemented.
  • a computer program product including instructions is also provided, which, when run on a computer, causes the computer to execute any method for generating image data in the above embodiments.
  • all or part of them may be implemented by software, hardware, firmware or any combination thereof.
  • when implemented using software, it may be implemented in whole or in part in the form of a computer program product.
  • the computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on the computer, the processes or functions according to the embodiments of the present application will be generated in whole or in part.
  • the computer can be a general purpose computer, a special purpose computer, a computer network, or other programmable devices.
  • the computer instructions may be stored in or transmitted from one computer-readable storage medium to another computer-readable storage medium, for example, the computer instructions may be transmitted from a website, computer, server or data center Transmission to another website site, computer, server, or data center by wired (eg, coaxial cable, optical fiber, digital subscriber line (DSL)) or wireless (eg, infrared, wireless, microwave, etc.).
  • the computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device such as a server or a data center integrated with one or more available media.
  • the available medium may be a magnetic medium (for example, a floppy disk, a hard disk, or a magnetic tape), an optical medium (for example, DVD), or a semiconductor medium (for example, a Solid State Disk (SSD)).

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Biophysics (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Image Processing (AREA)

Abstract

A method and apparatus for generating image data, relating to the technical field of image processing. The method comprises: obtaining, on the basis of the original pixel values of a first image collected by an image acquisition device when the ambient light brightness is greater than a first preset brightness threshold, a first pixel value of each pixel in the first image; dividing each first pixel value by a preset multiple to obtain a second pixel value corresponding to each pixel, the second preset brightness threshold being smaller than the first preset brightness threshold; for each pixel, adding Poisson noise on the basis of the second pixel value corresponding to the pixel according to the total system gain of the image acquisition device, so as to obtain a third pixel value corresponding to the pixel; and generating, on the basis of the third pixel values, the original pixel values of a target image which corresponds to the first image and whose ambient light brightness is smaller than the second preset brightness threshold. In this way, the efficiency of generating image data can be improved.

Description

一种图像数据生成方法和装置
本申请要求于2021年06月22日提交中国专利局、申请号为202110690725.8发明名称为“一种图像数据生成方法和装置”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本申请涉及图像处理技术领域,特别是涉及一种图像数据生成方法和装置。
背景技术
随着计算机技术的快速发展,利用图像采集设备采集图像,并对图像进行处理的方式,被广泛应用于安防、智能交通等各个方面。然而,当环境光亮度不足时,会导致图像采集设备采集到的图像包含较多的噪声,使得图像不清晰,进而,会降低处理结果的准确度。
为此,可以基于图像增强网络模型对采集到的图像进行图像增强处理,以去除采集到的图像中的噪声。然而,对图像增强网络模型进行训练需要大量的样本图像。相关技术中,通常由技术人员手动通过图像采集设备分别采集环境光亮度充足和环境光亮度不足情况下的样本图像,进而,将采集到的样本图像的图像数据作为训练样本,对预设结构的图像增强网络模型进行训练。
可见,相关技术中,由人工通过图像采集设备采集图像,会降低图像数据的生成效率。
发明内容
本申请实施例的目的在于提供一种图像数据生成方法和装置,以提高图像数据的生成效率。具体技术方案如下:
第一方面,为了达到上述目的,本申请实施例公开了一种图像数据生成方法,所述方法包括:基于环境光亮度大于第一预设亮度阈值时,图像采集设备采集的第一图像的原始像素值,得到所述第一图像中每一像素点的第一像素值;将每一第一像素值除以预设倍数,得到每一像素点对应的第二像素值;其中,所述预设倍数为:所述图像采集设备在环境光亮度大于所述第一预设亮度阈值时采集的第二图像,与在环境光亮度小于第二预设亮度阈值时采集的第三图像的原始像素值的比值;所述第二预设亮度阈值小于所述第一预设亮度阈值;针对每一像素点,在该像素点对应的第二像素值的基础上,基于所述图像采集设备的系统总增益添加泊松噪声,得到该像素点对应的第三像素值;其中,所述系统总增益为:基于所述图像采集设备采集的,包含灰阶板的第四图像的原始像素值的分布情况确定;基于各个第三像素值,生成所述第一图像对应的,环境光亮度小于所述第二预设亮度阈值的目标图像的原始像素值。
可选的,所述针对每一像素点,在该像素点对应的第二像素值的基础上,基于所述图像采集设备的系统总增益添加泊松噪声,得到该像素点对应的第三像素值,包括:针对每一像素点,将该像素点对应的第二像素值除以所述图像采集设备的系统总增益,得到该像素点对应的电荷数,作为第一电荷数;在该像素点对应的第一电荷数的基础上,添加泊松噪声,得到该像素点对应的第二电荷数;将该像素点对应的第二电荷数乘以所述系统总增益,得到该像素点对应的第三像素值。
可选的,所述系统总增益的计算过程包括:获取所述图像采集设备采集的包含灰阶板的多个第四图像;针对每一像素点,计算该像素点在各个第四图像中的原始像素值的均值和方差;以均值为横坐标,方差为纵坐标,对各个像素点对应的均值和方差进行直线拟合,得到目标直线;将所述目标直线的斜率,确定为所述系统总增益。
可选的,所述基于各个第三像素值,生成所述第一图像对应的,环境光亮度小于所述第二预设亮度阈值的目标图像的原始像素值,包括:针对各个像素点,在对应的第三像素值的基础上,添加读入噪声和/或行噪声,得到各像素点对应的第四像素值;基于各个第四像素值,得到所述第一图像对应的,环境光亮度小于所述第二预设亮度阈值的目标图像的原始像素值。
可选的,所述读入噪声符合第一高斯分布,所述第一高斯分布的均值为0,方差为所述目标直线中横坐标为0时的纵坐标的数值的平方根;所述行噪声符合第二高斯分布,所述第二高斯分布的均值为0,方差为所述图像采集设备在曝光时间为0,且入射光强为0时采集的第五图像中各行像素值的标准差;一个行像素值为所述第五图像中一行像素点的原始像素值的均值。
可选的,在所述基于各个第三像素值,生成所述第一图像对应的,环境光亮度小于所述第二预设亮度阈值的目标图像的原始像素值之后,所述方法还包括:将所述目标图像的原始像素值作为输入数据,以及将所述第一像素值作为对应的输出数据,对待训练的图像增强网络模型进行训练,直至收敛,得到图像增强网络模型。
可选的,所述图像增强网络模型包含编码部分和解码部分,所述解码部分的每一网络层的输入数据为:该网络层的前一网络层输出的第一特征图,与所述编码部分中与该网络层对应的网络层输出的第二特征图叠加得到的。
可选的,所述基于环境光亮度大于第一预设亮度阈值时,图像采集设备采集的第一图像的原始像素值,得到所述第一图像中每一像素点的第一像素值,包括:获取环境光亮度大于第一预设亮度阈值时,图像采集设备采集的第一图像的RAW数据中各像素点的像素值,作为待处理像素值;将每一待处理像素值分别减去所述图像采集设备的黑电平值,得到所述第一图像中每一像素点的第一像素值。
第二方面,为了达到上述目的,本申请实施例公开了一种图像数据生成装置,所述装置包括:第一像素值获取模块,用于基于环境光亮度大于第一预设亮度阈值时,图像采集设备采集的第一图像的原始像素值,得到所述第一图像中每一像素点的第一像素值;第二像素值获取模块,用于将每一第一像素值除以预设倍数,得到每一像素点对应的第二像素值;其中,所述预设倍数为:所述图像采集设备在环境光亮度大于所述第一预设亮度阈值时采集的第二图像,与在环境光亮度小于第二预设亮度阈值时采集的第三图像的原始像素值的比值;所述第二预设亮度阈值小于所述第一预设亮度阈值;第三像素值获取模块,用于针对每一像素点,在该像素点对应的第二像素值的基础上,基于所述图像采集设备的系统总增益添加泊松噪声,得到该像素点对应的第三像素值;其中,所述系统总增益为:基于所述图像采集设备 采集的,包含灰阶板的第四图像的原始像素值的分布情况确定;图像数据生成模块,用于基于各个第三像素值,生成所述第一图像对应的,环境光亮度小于所述第二预设亮度阈值的目标图像的原始像素值。
可选的,所述第三像素值获取模块,包括:第一电荷数获取子模块,用于针对每一像素点,将该像素点对应的第二像素值除以所述图像采集设备的系统总增益,得到该像素点对应的电荷数,作为第一电荷数;第二电荷数获取子模块,用于在该像素点对应的第一电荷数的基础上,添加泊松噪声,得到该像素点对应的第二电荷数;第三像素值获取子模块,用于将该像素点对应的第二电荷数乘以所述系统总增益,得到该像素点对应的第三像素值。
可选的,所述装置还包括:第四图像获取模块,用于获取所述图像采集设备采集的包含灰阶板的多个第四图像;计算模块,用于针对每一像素点,计算该像素点在各个第四图像中的原始像素值的均值和方差;直线拟合模块,用于以均值为横坐标,方差为纵坐标,对各个像素点对应的均值和方差进行直线拟合,得到目标直线;系统总增益确定模块,用于将所述目标直线的斜率,确定为所述系统总增益。
可选的,所述图像数据生成模块,包括:第四像素值获取子模块,用于针对各个像素点,在对应的第三像素值的基础上,添加读入噪声和/或行噪声,得到各像素点对应的第四像素值;图像数据生成子模块,用于基于各个第四像素值,得到所述第一图像对应的,环境光亮度小于所述第二预设亮度阈值的目标图像的原始像素值。
可选的,所述读入噪声符合第一高斯分布,所述第一高斯分布的均值为0,方差为所述目标直线中横坐标为0时的纵坐标的数值的平方根;所述行噪声符合第二高斯分布,所述第二高斯分布的均值为0,方差为所述图像采集设备在曝光时间为0,且入射光强为0时采集的第五图像中各行像素值的标准差;一个行像素值为所述第五图像中一行像素点的原始像素值的均值。
可选的,所述装置还包括:训练模块,用于在所述基于各个第三像素值,生成所述第一图像对应的,环境光亮度小于所述第二预设亮度阈值的目标图像的原始像素值之后,将所述目标图像的原始像素值作为输入数据,以及将所述第一像素值作为对应的输出数据,对待训练的图像增强网络模型进行训练,直至收敛,得到图像增强网络模型。
可选的,所述图像增强网络模型包含编码部分和解码部分,所述解码部分的每一网络层的输入数据为:该网络层的前一网络层输出的第一特征图,与所述编码部分中与该网络层对应的网络层输出的第二特征图叠加得到的。
可选的,所述第一像素值获取模块,包括:待处理像素值获取子模块,用于获取环境光亮度大于第一预设亮度阈值时,图像采集设备采集的第一图像的RAW数据中各像素点的像素值,作为待处理像素值;第一像素值获取子模块,用于将每一待处理像素值分别减去所述图像采集设备的黑电平值,得到所述第一图像中每一像素点的第一像素值。
第三方面,为了达到上述目的,本申请实施例还公开了一种电子设备,所述电子设备包括处理器、 通信接口、存储器和通信总线,其中,所述处理器,所述通信接口,所述存储器通过所述通信总线完成相互间的通信;所述存储器,用于存放计算机程序;所述处理器,用于执行所述存储器上所存放的程序时,实现如上述第一方面所述的图像数据生成方法。
第四方面,为了达到上述目的,本申请实施例还公开了一种计算机可读存储介质,所述计算机可读存储介质内存储有计算机程序,所述计算机程序被处理器执行时实现如上述第一方面所述的图像数据生成方法。
第五方面,为了达到上述目的,本申请实施例还公开了一种包含指令的计算机程序产品,当其在计算机上运行时,使得计算机执行上述任一所述的图像数据生成方法。
本申请实施例有益效果:
本申请实施例提供的图像数据生成方法,可以基于环境光亮度大于第一预设亮度阈值时,图像采集设备采集的第一图像的原始像素值,得到第一图像中每一像素点的第一像素值;将每一第一像素值除以预设倍数,得到每一像素点对应的第二像素值;其中,预设倍数为:图像采集设备在环境光亮度大于第一预设亮度阈值时采集的第二图像,与在环境光亮度小于第二预设亮度阈值时采集的第三图像的原始像素值的比值;第二预设亮度阈值小于第一预设亮度阈值;针对每一像素点,在该像素点对应的第二像素值的基础上,基于图像采集设备的系统总增益添加泊松噪声,得到该像素点对应的第三像素值;其中,系统总增益为:基于图像采集设备采集的,包含灰阶板的第四图像的原始像素值的分布情况确定;基于各个第三像素值,生成第一图像对应的,环境光亮度小于第二预设亮度阈值的目标图像的原始像素值。
生成的目标图像的原始像素值是在第一图像的原始像素值的基础上除以预设倍数得到,即,目标图像的原始像素值能够表示环境光亮度较小时的图像。另外,目标图像的原始像素值为添加泊松噪声得到的,使得目标图像的原始像素值能够有效地模拟图像采集设备采集的图像中的噪声。因此,本申请实施例提供的图像数据生成方法,能够自动生成与环境光亮度充足时的图像对应的,环境光亮度不足的图像的图像数据。相对于现有技术中,由人工手动通过图像采集设备采集环境光亮度不足时的图像,能够提高图像数据的生成效率。当然,实施本申请的任一产品或方法并不一定需要同时达到以上所述的所有优点。
附图说明
为了更清楚地说明本申请实施例和现有技术的技术方案,下面对实施例和现有技术中所需要使用的附图作简单地介绍,显而易见地,下面描述中的附图仅仅是本申请的一些实施例,本领域普通技术人员来讲还可以根据这些附图获得其他的附图。
图1为本申请实施例提供的一种图像数据生成方法的流程图;
图2为本申请实施例提供的另一种图像数据生成方法的流程图;
图3为本申请实施例提供的另一种图像数据生成方法的流程图;
图4为本申请实施例提供的一种计算系统总增益的方法的流程图;
图5为本申请实施例提供的一种灰阶板的示意图;
图6为本申请实施例提供的一种目标直线的示意图;
图7为本申请实施例提供的另一种图像数据生成方法的流程图;
图8为本申请实施例提供的一种图像增强网络模型的结构图;
图9为本申请实施例提供的一种基于图像增强网络模型进行图像增强处理的示意图;
图10为本申请实施例提供的一种添加噪声的结果对比图;
图11为本申请实施例提供的一种基于图像增强网络模型处理图像的结果对比图;
图12为本申请实施例提供的另一种基于图像增强网络模型处理图像的结果对比图;
图13为本申请实施例提供的另一种基于图像增强网络模型处理图像的结果对比图;
图14为本申请实施例提供的另一种基于图像增强网络模型处理图像的结果对比图;
图15为本申请实施例提供的另一种基于图像增强网络模型处理图像的结果对比图;
图16为本申请实施例提供的另一种基于图像增强网络模型处理图像的结果对比图;
图17为本申请实施例提供的另一种基于图像增强网络模型处理图像的结果对比图;
图18为本申请实施例提供的另一种基于图像增强网络模型处理图像的结果对比图;
图19为本申请实施例提供的另一种基于图像增强网络模型处理图像的结果对比图;
图20为本申请实施例提供的另一种基于图像增强网络模型处理图像的结果对比图;
图21为本申请实施例提供的另一种基于图像增强网络模型处理图像的结果对比图;
图22为本申请实施例提供的另一种基于图像增强网络模型处理图像的结果对比图;
图23为本申请实施例提供的一种图像数据生成装置的结构图;
图24为本申请实施例提供的一种电子设备的结构图。
具体实施方式
为使本申请的目的、技术方案、及优点更加清楚明白,以下参照附图并举实施例,对本申请进一步详细说明。显然,所描述的实施例仅仅是本申请一部分实施例,而不是全部的实施例。本领域普通技术人员基于本申请中的实施例所获得的所有其他实施例,都属于本申请保护的范围。
相关技术中,由人工通过图像采集设备,采集环境光亮度充足和环境光亮度不足情况下的样本图像,会降低图像数据的生成效率。
为了解决上述问题,本申请实施例提供了一种图像数据生成方法,该方法可以应用于电子设备,该电子设备可以获取图像采集设备采集的,环境光亮度大于第一预设亮度阈值时的第一图像的原始像素值。进而,电子设备可以基于本申请实施例提供的图像数据生成方法,生成第一图像对应的,环境光亮度小于第二预设亮度阈值的目标图像的原始像素值,即,电子设备可以直接生成对应的环境光亮度不足的图 像的图像数据。
参见图1,图1为本申请实施例提供的一种图像数据生成方法的流程图,该方法可以包括以下步骤:
S101:基于环境光亮度大于第一预设亮度阈值时,图像采集设备采集的第一图像的原始像素值,得到第一图像中每一像素点的第一像素值。
S102:将每一第一像素值除以预设倍数,得到每一像素点对应的第二像素值。
其中,预设倍数为:图像采集设备在环境光亮度大于第一预设亮度阈值时采集的第二图像,与在环境光亮度小于第二预设亮度阈值时采集的第三图像的原始像素值的比值。第二预设亮度阈值小于第一预设亮度阈值。
S103:针对每一像素点,在该像素点对应的第二像素值的基础上,基于图像采集设备的系统总增益添加泊松噪声,得到该像素点对应的第三像素值。
其中,系统总增益为:基于图像采集设备采集的,包含灰阶板的第四图像的原始像素值的分布情况确定。
S104:基于各个第三像素值,生成第一图像对应的,环境光亮度小于第二预设亮度阈值的目标图像的原始像素值。
基于本申请实施例提供的图像数据生成方法,生成的目标图像的原始像素值是在第一图像的原始像素值的基础上除以预设倍数得到,即,目标图像的原始像素值能够表示环境光亮度较小时的图像。另外,目标图像的原始像素值为添加泊松噪声得到的,使得目标图像的原始像素值能够有效地模拟图像采集设备采集的图像中的噪声。因此,本申请实施例提供的图像数据生成方法,能够自动生成与环境光亮度充足时的图像对应的,环境光亮度不足的图像的图像数据。相对于现有技术中,由人工手动通过图像采集设备采集环境光亮度不足时的图像,能够提高图像数据的生成效率。
针对步骤S101,第一预设亮度阈值可以由技术人员根据经验进行设置,例如,第一预设亮度阈值可以为25lux,或者,也可以为30lux。环境光亮度大于第一预设亮度阈值,表示当前的环境光亮度充足。
一种实现方式中,图像的原始像素值可以为:图像采集设备采集的图像的RAW数据中记录的像素值。也就是,图像采集设备的CMOS(Complementary Metal Oxide Semiconductor,互补金属氧化物半导体)或者CCD(Charge Coupled Device,电荷耦合器件)图像感应器将捕捉到的光源信号转化为数字信号的原始数据。
一种实现方式中,可以直接将图像采集设备采集的第一图像的原始像素值,作为第一图像中每一像素点的第一像素值。
另一种实现方式中,参见图2,在图1的基础上,上述步骤S101可以包括以下步骤:
S1011:获取环境光亮度大于第一预设亮度阈值时,图像采集设备采集的第一图像的RAW数据中各像素点的像素值,作为待处理像素值。
S1012:将每一待处理像素值分别减去图像采集设备的黑电平值,得到第一图像中每一像素点的第一像素值。
在本申请实施例中,图像采集设备在将光源信号转化为数字信号的原始数据时,通常会加上一个偏移值,该偏移值也就是该图像采集设备的黑电平(Black Level)值。因此,为了提高获取的第一像素值的准确度,可以将待处理像素值减去该黑电平值。另外,若一个待处理像素值减去黑电平值后为负数,则可以确定对应的第一像素值为0。
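As an illustration of steps S1011–S1012, the following is a minimal NumPy sketch of the black-level subtraction described above; the names `raw`, `black_level` and `subtract_black_level` are illustrative and not taken from the original text, and negative results are clipped to 0 as stated.

```python
import numpy as np

def subtract_black_level(raw: np.ndarray, black_level: float) -> np.ndarray:
    """Return the first pixel values: RAW values minus the sensor black level, floored at 0."""
    first_pixel_values = raw.astype(np.float64) - black_level
    # A value that becomes negative after subtracting the black level is set to 0.
    return np.clip(first_pixel_values, 0.0, None)

# Example with synthetic 12-bit RAW data and an assumed black level of 64.
raw_frame = np.random.randint(0, 4096, size=(4, 6))
first_vals = subtract_black_level(raw_frame, black_level=64)
```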
针对步骤S102,第二预设亮度阈值可以由技术人员根据经验进行设置,例如,第二预设亮度阈值可以为0.1lux,或者,也可以为0.2lux。环境光亮度小于第二预设亮度阈值,表示当前的环境光亮度不足。
在一个实施例中,可以预先利用图像采集设备在环境光亮度大于第一预设亮度阈值时采集多个图像,得到第二图像,以及在环境光亮度小于第二预设亮度阈值时采集多个图像,得到第三图像。
然后,可以计算多个第二图像的原始像素值的平均值(可以称为第一平均值),以及多个第三图像的原始像素值的平均值(可以称为第二平均值)。进而,可以计算第一平均值与第二平均值的比值,作为预设倍数。
将第一像素值除以预设倍数得到第二像素值,也就使得第二像素值能够体现环境光亮度小于第二预设亮度阈值时的图像,即,能够体现环境光亮度不足时采集的图像。
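A possible sketch of computing the preset multiple and scaling the first pixel values is given below, assuming `bright_frames` and `dark_frames` are hypothetical lists of RAW arrays captured when the ambient light brightness is above the first threshold and below the second threshold, respectively.

```python
import numpy as np

def preset_multiple(bright_frames, dark_frames) -> float:
    """Ratio of the mean raw value of the bright captures (second image) to that of the dark captures (third image)."""
    bright_mean = np.mean([frame.mean() for frame in bright_frames])
    dark_mean = np.mean([frame.mean() for frame in dark_frames])
    return bright_mean / dark_mean

def to_second_pixel_values(first_pixel_values: np.ndarray, multiple: float) -> np.ndarray:
    # Dividing by the ratio scales the clean, well-lit image down to the low-light brightness level.
    return first_pixel_values / multiple
```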
针对上述步骤S103,图像采集设备在将捕捉到的光源信号转化为数字信号的原始数据的过程中,可以计算每一像素点对应的光电转换的电荷数与系统总增益的乘积,作为该像素点的原始像素值。
在一个实施例中,可以基于每一像素点对应的电荷数添加泊松噪声。参见图3,在图1的基础上,上述步骤S103可以包括以下步骤:
S1031:针对每一像素点,将该像素点对应的第二像素值除以图像采集设备的系统总增益,得到该像素点对应的电荷数,作为第一电荷数。
S1032:在该像素点对应的第一电荷数的基础上,添加泊松噪声,得到该像素点对应的第二电荷数。
S1033:将该像素点对应的第二电荷数乘以系统总增益,得到该像素点对应的第三像素值。
在本申请实施例中,针对每一像素点,可以确定以该像素点对应的第一电荷数为均值和方差的泊松分布,即,确定出的泊松分布的均值和方差均为第一电荷数。进而,可以生成符合该泊松分布的一个数值,作为第二电荷数,也就是添加泊松噪声后,该像素点对应的电荷数。例如,可以随机生成多个符合该泊松分布的数值,并从其中选择一个数值作为第二电荷数。或者,也可以随机生成一个符合该泊松分布的数值,作为第二电荷数。
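The charge-domain Poisson-noise step (S1031–S1033) could be sketched as follows; `system_gain` stands for the total system gain K, and the clipping of the charge count to non-negative values is an implementation assumption rather than something stated in the original text.

```python
import numpy as np

def add_poisson_noise(second_pixel_values: np.ndarray, system_gain: float, rng=None) -> np.ndarray:
    """Convert pixel values to charge counts, resample them from a Poisson distribution, and convert back."""
    rng = rng or np.random.default_rng()
    # First charge count: pixel value divided by the total system gain K (clipped to be non-negative).
    charges = np.clip(second_pixel_values / system_gain, 0.0, None)
    # A Poisson distribution has equal mean and variance, so sampling with the first charge
    # count as its mean yields the second charge count, i.e. the charge with shot noise added.
    noisy_charges = rng.poisson(charges)
    # Second charge count multiplied by K gives the third pixel value.
    return noisy_charges.astype(np.float64) * system_gain
```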
在一个实施例中,参见图4,图4为本申请实施例提供的一种计算系统总增益的方法的流程图,该方法可以包括以下步骤:
S401:获取图像采集设备采集的包含灰阶板的多个第四图像。
S402:针对每一像素点,计算该像素点在各个第四图像中的原始像素值的均值和方差。
S403:以均值为横坐标,方差为纵坐标,对各个像素点对应的均值和方差进行直线拟合,得到目标直线。
S404:将目标直线的斜率,确定为系统总增益。
其中,灰阶板的表面为像素值渐变的灰度图,例如,参见图5。
在本申请实施例中，可以预先在预设环境光亮度下，利用图像采集设备拍摄多张（例如，100张）包含该灰阶板的图像（即第四图像）。另外，在拍摄的过程中，使得灰阶板最白的位置不过曝。
然后,可以获取各个第四图像的原始像素值,进而,针对每一像素点,统计该像素点在各个第四图像中的原始像素值,并确定出对应的均值和方差。也就是说,针对每一像素点,均可以确定出其对应的原始像素值的均值和方差。
参见公式(1)：
Var(x) = K·E(x) + δ_read²        (1)
其中，E(x)表示像素点对应的像素值的均值，Var(x)表示像素点对应的像素值的方差，K表示目标直线的斜率，δ_read²表示目标直线中横坐标（即均值）为0时的纵坐标（即方差）的数值。
参见图6,图6为本申请实施例提供的一种目标直线的示意图。图6中,横坐标表示像素点对应的像素值的均值,纵坐标表示像素点对应的像素值的方差。根据各个像素点对应的像素值的均值和方差,可以确定出图6中的各个点,进而,确定出图6中各个点拟合得到的直线。
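A minimal sketch of steps S401–S404 is shown below: the per-pixel means and variances over the stack of fourth images are fitted with a straight line whose slope is taken as the total system gain K and whose intercept corresponds to δ_read² in formula (1). The use of `np.polyfit` is an implementation choice, not something specified in the original text.

```python
import numpy as np

def estimate_system_gain(frames: np.ndarray):
    """frames: stack of shape (N, H, W) holding N RAW captures of the gray-scale chart.
    Returns (K, intercept): slope and y-intercept of the fitted mean-variance line."""
    means = frames.mean(axis=0).ravel()        # per-pixel mean over the N captures
    variances = frames.var(axis=0).ravel()     # per-pixel variance over the N captures
    # Least-squares straight-line fit: variance ≈ K * mean + intercept, cf. formula (1).
    K, intercept = np.polyfit(means, variances, deg=1)
    return float(K), float(intercept)
```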
在一个实施例中,还可以在添加泊松噪声的基础上,添加其他噪声,以进一步提高生成的图像数据的真实性。
相应的,上述步骤S104可以包括以下步骤:
步骤一:针对各个像素点,在对应的第三像素值的基础上,添加读入噪声和/或行噪声,得到各像素点对应的第四像素值。
步骤二:基于各个第四像素值,得到第一图像对应的,环境光亮度小于第二预设亮度阈值的目标图像的原始像素值。
在本申请实施例中,在得到第三像素值的基础上,即,在添加泊松噪声后,还可以添加读入噪声和/或行噪声,得到第四像素值。
其中,读入噪声表示由暗电流噪声、热噪声、源极跟随器噪声等导致的图像噪声。针对各个像素点,对应的读入噪声符合高斯分布(即后文中的第一高斯分布)。
例如,可以生成m×n个符合第一高斯分布的数值,m表示第一图像的宽度,n表示第一图像的高度,生成的m×n个数值分别与各个像素点对应。进而,可以将生成的m×n个数值分别与对应的像素点的第三像素值相加,得到第四像素值。
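A sketch of adding the read-in noise might look like the following; `read_sigma` is an assumed scale parameter for the first Gaussian distribution, derived as described in this document, and one sample is drawn per pixel (m×n samples in total).

```python
import numpy as np

def add_read_noise(third_pixel_values: np.ndarray, read_sigma: float, rng=None) -> np.ndarray:
    rng = rng or np.random.default_rng()
    h, w = third_pixel_values.shape
    # m x n independent samples from the zero-mean first Gaussian distribution, one per pixel.
    return third_pixel_values + rng.normal(0.0, read_sigma, size=(h, w))
```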
行噪声表示由CMOS图像感应器以行像素点为单位读出数据时所产生的噪声。每一行像素点对应的行噪声相同,各行像素点对应的行噪声符合高斯分布(即后文中的第二高斯分布)。
例如,可以生成n个符合第二高斯分布的数值,n表示第一图像的高度,生成的n个数值分别与各行像素点对应。针对一行像素点,可以将该行像素点对应的第三像素值加上生成的对应的数值,得到第四像素值。
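Row noise can be sketched in the same spirit: one sample of the second Gaussian distribution is drawn per image row and broadcast to every pixel in that row; `row_sigma` is again an assumed scale parameter.

```python
import numpy as np

def add_row_noise(pixel_values: np.ndarray, row_sigma: float, rng=None) -> np.ndarray:
    rng = rng or np.random.default_rng()
    n_rows = pixel_values.shape[0]
    # One sample of the zero-mean second Gaussian distribution per image row,
    # broadcast so that every pixel in a row receives the same offset.
    return pixel_values + rng.normal(0.0, row_sigma, size=(n_rows, 1))
```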
在一个实施例中，上述读入噪声符合第一高斯分布，第一高斯分布的均值为0，方差为目标直线中横坐标为0时的纵坐标的数值的平方根，即上述公式(1)中的δ_read。
行噪声符合第二高斯分布,第二高斯分布的均值为0,方差为图像采集设备在曝光时间为0,且入射光强为0时采集的第五图像中各行像素值的标准差。一个行像素值为第五图像中一行像素点的原始像素值的均值。
在本申请实施例中,可以获取图像采集设备在曝光时间为0,且入射光强为0时采集的图像(即第五图像)的原始像素值。第五图像也就是图像采集设备采集的Bias Frame(偏移帧)。例如,可以遮盖住图像采集设备的镜头,并设置曝光时间为0,进行图像采集,采集到的图像也就是第五图像。
在得到第五图像的原始像素值后，针对每一行像素点，可以计算该行像素点在第五图像中的原始像素值的均值。然后，可以计算各行像素点对应的各个均值的标准差，将该标准差作为第二高斯分布的方差。
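Deriving the row-noise statistic from the bias frame (the fifth image) could look like this sketch, where each row pixel value is the mean of that row's raw values and the returned number is the standard deviation of those row means.

```python
import numpy as np

def row_noise_scale_from_bias(bias_frame: np.ndarray) -> float:
    """Standard deviation of the per-row mean raw values of the bias frame (fifth image)."""
    row_means = bias_frame.mean(axis=1)   # one row pixel value per image row
    return float(row_means.std())
```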
一种实现方式中,在添加读入噪声和/或行噪声,得到第四像素值后,可以直接将第四像素值,作为第一图像对应的,环境光亮度小于第二预设亮度阈值的目标图像的原始像素值。
另一种实现方式中，在得到第四像素值后，还可以在第四像素值的基础上，添加量化噪声。量化噪声可以基于图像采集设备采集图像的原始像素值的位数确定。例如，量化噪声可以为1/(2^L-1)，其中，L表示图像采集设备采集图像的原始像素值的位数。
例如，图像采集设备采集图像的原始像素值为12位，也就是说，每一像素点的像素值的取值范围为0至2^L-1（即4095）。相应的，量化噪声可以为1/4095，即，将各个第四像素值分别加上1/4095，得到第一图像对应的，环境光亮度小于第二预设亮度阈值的目标图像的原始像素值。
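The quantization-noise term can be expressed as a one-line helper; the 12-bit example above corresponds to 1/4095.

```python
def quantization_noise(bit_depth: int) -> float:
    # e.g. 12-bit RAW data: 1 / (2**12 - 1) = 1/4095
    return 1.0 / (2 ** bit_depth - 1)

# target_raw = fourth_pixel_values + quantization_noise(12)
```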
在一个实施例中,在得到目标图像的原始像素值之后,还可以基于第一图像的原始像素值和目标图像的原始像素值,对图像增强网络模型进行训练。
相应的,参见图7,在图1的基础上,在上述步骤S104之后,该方法还可以包括以下步骤:
S105:将目标图像的原始像素值作为输入数据,以及将第一像素值作为对应的输出数据,对待训练的图像增强网络模型进行训练,直至收敛,得到图像增强网络模型。
基于上述处理,使得图像增强网络模型能够学习从目标图像到第一图像的转换关系。相应的,针对输入的环境光不足时采集的图像的原始像素值,图像增强网络模型能够去除其中的噪声,得到清晰的图像的原始像素值,即,能够得到对应的环境光充足时的图像的图像数据。
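A hedged PyTorch-style training sketch for step S105 is given below; the loss function, optimizer, learning rate and fixed epoch count are assumptions (the original text only requires training until convergence), and `loader` is a hypothetical data loader yielding (synthetic dark RAW, clean RAW) tensor pairs built from the generated target images and the first pixel values.

```python
import torch
import torch.nn as nn

def train_enhancer(model: nn.Module, loader, epochs: int = 100, lr: float = 1e-4) -> nn.Module:
    """Train on (synthetic dark RAW, clean RAW) pairs; `loader` yields (noisy, clean) tensor batches."""
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    criterion = nn.L1Loss()   # assumed reconstruction loss; the original text does not specify one
    model.train()
    for _ in range(epochs):   # a fixed epoch count stands in for "train until convergence"
        for noisy, clean in loader:
            optimizer.zero_grad()
            loss = criterion(model(noisy), clean)
            loss.backward()
            optimizer.step()
    return model
```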
在一个实施例中,上述图像增强网络模型可以为U-net。
在一个实施例中,还可以对传统的U-net进行改进,得到上述图像增强网络模型。
一种实现方式中,图像增强网络模型包含编码部分和解码部分,解码部分的每一网络层的输入数据为:该网络层的前一网络层输出的第一特征图,与编码部分中与该网络层对应的网络层输出的第二特征图叠加得到的。
参见图8,图8为本申请实施例提供的一种图像增强网络模型的结构图。
图8中的图像增强网络模型可以分为左侧的编码部分和右侧的解码部分。
编码部分包括多个网络层,解码部分也包含多个网络层,且二者包含的网络层的数目相同。本申请实施例中,仅以二者均包含5个网络层为例进行说明,但并不限于此。
图8中的数字表示通道数，编码部分和解码部分中向右的实心箭头表示卷积(Convolution)，卷积核的大小为3×3；向下的实心箭头表示最大池化(Max Pool)，池化窗口的大小为2×2；向上的实心箭头表示转置卷积(Transposed Convolution)，卷积核的大小为2×2，转置卷积也就是上采样处理；向右的空心箭头表示卷积，卷积核的大小为1×1。编码部分与解码部分之间的箭头表示叠加处理。
图8中,针对编码部分和解码部分,从上至下,可以分别称为第一个网络层、第二个网络层、第三个网络层、第四个网络层、第五个网络层。
图8中,输入数据为4通道的数据,输出的也为4通道的数据,每一通道的数据可以称为一个特征图。
可见,图8中编码部分的第一个网络层的输入数据为4通道的特征图,经过该网络层处理后,可以得到32通道的特征图;经过编码部分的第二个网络层处理后,可以得到64通道的特征图;经过编码部分的第三个网络层处理后,可以得到128通道的特征图;经过编码部分的第四个网络层处理后,可以得到256通道的特征图;经过编码部分的第五个网络层处理后,可以得到512通道的特征图。
解码部分的第五个网络层的输入数据为512通道的特征图,经过该网络层处理后,可以得到512通道的特征图。对该512通道的特征图进行上采样后,可以得到256通道的特征图,并分别与编码部分的第四个网络层得到的256通道的特征图进行叠加,叠加结果仍为256通道的特征图。然后,可以将叠加得到的256通道的特征图输入至解码部分的第四个网络层,得到256通道的特征图。对解码部分的第四个网络层输出的256通道的特征图进行上采样,得到128通道的特征图,并分别与编码部分的第三个网络层得到的128通道的特征图进行叠加,叠加结果仍为128通道的特征图,并输入至解码部分的第三个网络层。
以此类推,解码部分的第一个网络层的输入数据为:对解码部分的第二个网络层输出的64通道的特征图上采样得到的32通道的特征图,与编码部分的第一个网络层输出的32通道的特征图的叠加结果。解码部分的第一个网络层输出4通道的特征图。
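A compact sketch of this encoder-decoder variant is shown below. It follows the channel progression 4→32→64→128→256→512 and uses additive skip connections so that the channel count is unchanged after superposition; the number of convolutions per stage and the ReLU activations are assumptions not stated in the description, and the input height and width are assumed to be divisible by 16.

```python
import torch
import torch.nn as nn

def conv_block(in_ch: int, out_ch: int) -> nn.Sequential:
    # Two 3x3 convolutions per stage (an assumption); padding keeps the spatial size.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
    )

class AdditiveSkipUNet(nn.Module):
    """Encoder-decoder with additive skip connections, so the channel count is unchanged
    after superposing decoder and encoder feature maps (32/64/128/256 at the skip levels)."""
    def __init__(self):
        super().__init__()
        chs = [32, 64, 128, 256, 512]
        self.enc = nn.ModuleList()
        in_ch = 4                                  # packed RGGB input
        for c in chs:
            self.enc.append(conv_block(in_ch, c))
            in_ch = c
        self.pool = nn.MaxPool2d(2)                # 2x2 max pooling between encoder stages
        self.up = nn.ModuleList([nn.ConvTranspose2d(chs[i], chs[i - 1], 2, stride=2)
                                 for i in range(len(chs) - 1, 0, -1)])   # 512->256, ..., 64->32
        self.dec = nn.ModuleList([conv_block(chs[i - 1], chs[i - 1])
                                  for i in range(len(chs) - 1, 0, -1)])
        self.out = nn.Conv2d(chs[0], 4, 1)         # final 1x1 convolution back to 4 channels

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        skips = []
        for i, enc in enumerate(self.enc):
            x = enc(x)
            if i < len(self.enc) - 1:
                skips.append(x)
                x = self.pool(x)
        for up, dec, skip in zip(self.up, self.dec, reversed(skips)):
            x = up(x) + skip                       # additive skip: channel count stays the same
            x = dec(x)
        return self.out(x)
```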
例如,可以将目标图像的原始像素值拆分(Unpack)为4通道的特征图,即RGGB(Red Green Green Blue,红绿绿蓝)通道的特征图,作为图8所示图像增强网络模型的输入数据,同时,将第一像素值也拆分为4通道的特征图作为对应的输出数据,对图8所示图像增强网络模进行训练。
进而,当需要对某一图像(待处理图像)进行图像增强处理时,可以将该待处理图像的原始像素值拆分为4通道的特征图,输入至训练好的图像增强网络模型。进而,可以得到图像增强网络模型输出的4通道的特征图,并对输出的4通道的特征图进行打包(Pack),得到增强处理后的原始像素值。
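Unpacking a Bayer RAW frame into the 4-channel RGGB representation and packing the network output back can be sketched as follows; an RGGB pixel arrangement is assumed.

```python
import numpy as np

def unpack_rggb(raw: np.ndarray) -> np.ndarray:
    """Split a Bayer RAW frame (RGGB pattern assumed) into a 4-channel, half-resolution stack."""
    return np.stack([raw[0::2, 0::2],    # R
                     raw[0::2, 1::2],    # G on the red rows
                     raw[1::2, 0::2],    # G on the blue rows
                     raw[1::2, 1::2]],   # B
                    axis=0)

def pack_rggb(channels: np.ndarray) -> np.ndarray:
    """Inverse of unpack_rggb: interleave the 4 channels back into a single Bayer frame."""
    r, g1, g2, b = channels
    h, w = r.shape
    raw = np.empty((2 * h, 2 * w), dtype=channels.dtype)
    raw[0::2, 0::2], raw[0::2, 1::2] = r, g1
    raw[1::2, 0::2], raw[1::2, 1::2] = g2, b
    return raw
```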
参见图9,图9为本申请实施例提供的一种基于图像增强网络模型进行图像增强处理的示意图。
图9中,最左侧为初始的Bayer(拜尔)RAW,表示初始的图像RAW数据,经过拆分后,得到4通道的特征图,作为图像增强网络模型的输入。相应的,图像增强网络模型的输出为4通道的特征图,进而,对输出的4通道的特征图进行打包,可以得到增强的Bayer RAW,即,增强的图像Raw数据。
针对原始像素值,即RAW数据中记录的像素值,经过预设的ISP(Image Processing Pipeline,图像处理管线)进行处理,可以得到对应的图像。
参见图10,左侧的图表示利用图像采集设备采集的Bias Frame的原始像素值,经过预设的ISP处理生成的图像;中间的图表示基于本申请实施例提供的图像数据生成方法,在原始像素值为黑电平值的基础上,添加泊松噪声、读入噪声和行噪声得到的像素值,经过预设的ISP处理生成的图像;右侧的图表示基于相关技术,在原始像素值为黑电平值的基础上,添加泊松噪声、读入噪声和行噪声得到的像素值,经过预设的ISP处理生成的图像。相关技术中,添加泊松噪声时的系统总增益、添加的读入噪声以及行噪声均为经验值。
可见，相对于现有技术，基于本申请实施例提供的图像数据生成方法得到的图像，与利用图像采集设备真实采集的图像的相似度更高，也就是说，基于本申请实施例提供的图像数据生成方法生成的图像数据能够更真实地模拟图像采集设备真实采集的图像。
参见图11-图22,每一组图包含3个图像。每一组的3个图像中,左侧的图像为环境光不足时采集的包含噪声的图像;中间的图像为基于本申请实施例提供的图像增强网络模型,对左侧的图像进行增强处理得到的图像;右侧的图像为基于相关技术中的图像增强网络模型,对左侧的图像进行增强处理得到的图像。
基于上述图11-图22可见,相对于现有技术中的图像增强网络模型,通过本申请实施例提供的图像增强网络模型增强处理得到的图像中的噪声更少,图像的清晰度更高。
基于相同的发明构思,本申请实施例还提供了一种图像数据生成装置,参见图23,图23为本申请实施例提供的一种图像数据生成装置的结构图,该装置可以包括:第一像素值获取模块2301,用于基于环境光亮度大于第一预设亮度阈值时,图像采集设备采集的第一图像的原始像素值,得到所述第一图像中每一像素点的第一像素值;第二像素值获取模块2302,用于将每一第一像素值除以预设倍数,得到每 一像素点对应的第二像素值;其中,所述预设倍数为:所述图像采集设备在环境光亮度大于所述第一预设亮度阈值时采集的第二图像,与在环境光亮度小于第二预设亮度阈值时采集的第三图像的原始像素值的比值;所述第二预设亮度阈值小于所述第一预设亮度阈值;第三像素值获取模块2303,用于针对每一像素点,在该像素点对应的第二像素值的基础上,基于所述图像采集设备的系统总增益添加泊松噪声,得到该像素点对应的第三像素值;其中,所述系统总增益为:基于所述图像采集设备采集的,包含灰阶板的第四图像的原始像素值的分布情况确定;图像数据生成模块2304,用于基于各个第三像素值,生成所述第一图像对应的,环境光亮度小于所述第二预设亮度阈值的目标图像的原始像素值。
可选的,所述第三像素值获取模块2303,包括:第一电荷数获取子模块,用于针对每一像素点,将该像素点对应的第二像素值除以所述图像采集设备的系统总增益,得到该像素点对应的电荷数,作为第一电荷数;第二电荷数获取子模块,用于在该像素点对应的第一电荷数的基础上,添加泊松噪声,得到该像素点对应的第二电荷数;第三像素值获取子模块,用于将该像素点对应的第二电荷数乘以所述系统总增益,得到该像素点对应的第三像素值。
可选的,所述装置还包括:第四图像获取模块,用于获取所述图像采集设备采集的包含灰阶板的多个第四图像;计算模块,用于针对每一像素点,计算该像素点在各个第四图像中的原始像素值的均值和方差;直线拟合模块,用于以均值为横坐标,方差为纵坐标,对各个像素点对应的均值和方差进行直线拟合,得到目标直线;系统总增益确定模块,用于将所述目标直线的斜率,确定为所述系统总增益。
可选的,所述图像数据生成模块2304,包括:第四像素值获取子模块,用于针对各个像素点,在对应的第三像素值的基础上,添加读入噪声和/或行噪声,得到各像素点对应的第四像素值;图像数据生成子模块,用于基于各个第四像素值,得到所述第一图像对应的,环境光亮度小于所述第二预设亮度阈值的目标图像的原始像素值。
可选的,所述读入噪声符合第一高斯分布,所述第一高斯分布的均值为0,方差为所述目标直线中横坐标为0时的纵坐标的数值的平方根;所述行噪声符合第二高斯分布,所述第二高斯分布的均值为0,方差为所述图像采集设备在曝光时间为0,且入射光强为0时采集的第五图像中各行像素值的标准差;一个行像素值为所述第五图像中一行像素点的原始像素值的均值。
可选的,所述装置还包括:训练模块,用于在所述基于各个第三像素值,生成所述第一图像对应的,环境光亮度小于所述第二预设亮度阈值的目标图像的原始像素值之后,将所述目标图像的原始像素值作为输入数据,以及将所述第一像素值作为对应的输出数据,对待训练的图像增强网络模型进行训练,直至收敛,得到图像增强网络模型。
可选的,所述图像增强网络模型包含编码部分和解码部分,所述解码部分的每一网络层的输入数据为:该网络层的前一网络层输出的第一特征图,与所述编码部分中与该网络层对应的网络层输出的第二特征图叠加得到的。
可选的,所述第一像素值获取模块2301,包括:待处理像素值获取子模块,用于获取环境光亮度大于第一预设亮度阈值时,图像采集设备采集的第一图像的RAW数据中各像素点的像素值,作为待处理像素值;第一像素值获取子模块,用于将每一待处理像素值分别减去所述图像采集设备的黑电平值,得到所述第一图像中每一像素点的第一像素值。
本申请实施例还提供了一种电子设备,如图24所示,包括处理器2401、通信接口2402、存储器2403和通信总线2404,其中,处理器2401,通信接口2402,存储器2403通过通信总线2404完成相互间的通信,存储器2403,用于存放计算机程序;处理器2401,用于执行存储器2403上所存放的程序时,实现如下步骤:基于环境光亮度大于第一预设亮度阈值时,图像采集设备采集的第一图像的原始像素值,得到所述第一图像中每一像素点的第一像素值;将每一第一像素值除以预设倍数,得到每一像素点对应的第二像素值;其中,所述预设倍数为:所述图像采集设备在环境光亮度大于所述第一预设亮度阈值时采集的第二图像,与在环境光亮度小于第二预设亮度阈值时采集的第三图像的原始像素值的比值;所述第二预设亮度阈值小于所述第一预设亮度阈值;针对每一像素点,在该像素点对应的第二像素值的基础上,基于所述图像采集设备的系统总增益添加泊松噪声,得到该像素点对应的第三像素值;其中,所述系统总增益为:基于所述图像采集设备采集的,包含灰阶板的第四图像的原始像素值的分布情况确定;基于各个第三像素值,生成所述第一图像对应的,环境光亮度小于所述第二预设亮度阈值的目标图像的原始像素值。
上述电子设备提到的通信总线可以是外设部件互连标准(Peripheral Component Interconnect,PCI)总线或扩展工业标准结构(Extended Industry Standard Architecture,EISA)总线等。该通信总线可以分为地址总线、数据总线、控制总线等。为便于表示,图中仅用一条粗线表示,但并不表示仅有一根总线或一种类型的总线。
通信接口用于上述电子设备与其他设备之间的通信。
存储器可以包括随机存取存储器(Random Access Memory,RAM),也可以包括非易失性存储器(Non-Volatile Memory,NVM),例如至少一个磁盘存储器。可选的,存储器还可以是至少一个位于远离前述处理器的存储装置。
上述的处理器可以是通用处理器,包括中央处理器(Central Processing Unit,CPU)、网络处理器(Network Processor,NP)等;还可以是数字信号处理器(Digital Signal Processor,DSP)、专用集成电路(Application Specific Integrated Circuit,ASIC)、现场可编程门阵列(Field-Programmable Gate Array,FPGA)或者其他可编程逻辑器件、分立门或者晶体管逻辑器件、分立硬件组件。
在本申请提供的又一实施例中,还提供了一种计算机可读存储介质,该计算机可读存储介质内存储有计算机程序,所述计算机程序被处理器执行时实现上述任一图像数据生成方法的步骤。
在本申请提供的又一实施例中,还提供了一种包含指令的计算机程序产品,当其在计算机上运行时, 使得计算机执行上述实施例中任一图像数据生成方法。
在上述实施例中,可以全部或部分地通过软件、硬件、固件或者其任意组合来实现。当使用软件实现时,可以全部或部分地以计算机程序产品的形式实现。所述计算机程序产品包括一个或多个计算机指令。在计算机上加载和执行所述计算机程序指令时,全部或部分地产生按照本申请实施例所述的流程或功能。所述计算机可以是通用计算机、专用计算机、计算机网络、或者其他可编程装置。所述计算机指令可以存储在计算机可读存储介质中,或者从一个计算机可读存储介质向另一个计算机可读存储介质传输,例如,所述计算机指令可以从一个网站站点、计算机、服务器或数据中心通过有线(例如同轴电缆、光纤、数字用户线(DSL))或无线(例如红外、无线、微波等)方式向另一个网站站点、计算机、服务器或数据中心进行传输。所述计算机可读存储介质可以是计算机能够存取的任何可用介质或者是包含一个或多个可用介质集成的服务器、数据中心等数据存储设备。所述可用介质可以是磁性介质,(例如,软盘、硬盘、磁带)、光介质(例如,DVD)、或者半导体介质(例如固态硬盘Solid State Disk(SSD))等。
需要说明的是,在本文中,诸如第一和第二等之类的关系术语仅仅用来将一个实体或者操作与另一个实体或操作区分开来,而不一定要求或者暗示这些实体或操作之间存在任何这种实际的关系或者顺序。而且,术语“包括”、“包含”或者其任何其他变体意在涵盖非排他性的包含,从而使得包括一系列要素的过程、方法、物品或者设备不仅包括那些要素,而且还包括没有明确列出的其他要素,或者是还包括为这种过程、方法、物品或者设备所固有的要素。在没有更多限制的情况下,由语句“包括一个……”限定的要素,并不排除在包括所述要素的过程、方法、物品或者设备中还存在另外的相同要素。
本说明书中的各个实施例均采用相关的方式描述,各个实施例之间相同相似的部分互相参见即可,每个实施例重点说明的都是与其他实施例的不同之处。尤其,对于装置、电子设备、计算机可读存储介质以及计算机程序产品实施例而言,由于其基本相似于方法实施例,所以描述的比较简单,相关之处参见方法实施例的部分说明即可。
以上所述仅为本申请的较佳实施例,并不用以限制本申请,凡在本申请的精神和原则之内,所做的任何修改、等同替换、改进等,均应包含在本申请保护的范围之内。

Claims (19)

  1. 一种图像数据生成方法,其特征在于,所述方法包括:
    基于环境光亮度大于第一预设亮度阈值时,图像采集设备采集的第一图像的原始像素值,得到所述第一图像中每一像素点的第一像素值;
    将每一第一像素值除以预设倍数,得到每一像素点对应的第二像素值;其中,所述预设倍数为:所述图像采集设备在环境光亮度大于所述第一预设亮度阈值时采集的第二图像,与在环境光亮度小于第二预设亮度阈值时采集的第三图像的原始像素值的比值;所述第二预设亮度阈值小于所述第一预设亮度阈值;
    针对每一像素点,在该像素点对应的第二像素值的基础上,基于所述图像采集设备的系统总增益添加泊松噪声,得到该像素点对应的第三像素值;其中,所述系统总增益为:基于所述图像采集设备采集的,包含灰阶板的第四图像的原始像素值的分布情况确定;
    基于各个第三像素值,生成所述第一图像对应的,环境光亮度小于所述第二预设亮度阈值的目标图像的原始像素值。
  2. 根据权利要求1所述的方法,其特征在于,所述针对每一像素点,在该像素点对应的第二像素值的基础上,基于所述图像采集设备的系统总增益添加泊松噪声,得到该像素点对应的第三像素值,包括:
    针对每一像素点,将该像素点对应的第二像素值除以所述图像采集设备的系统总增益,得到该像素点对应的电荷数,作为第一电荷数;
    在该像素点对应的第一电荷数的基础上,添加泊松噪声,得到该像素点对应的第二电荷数;
    将该像素点对应的第二电荷数乘以所述系统总增益,得到该像素点对应的第三像素值。
  3. 根据权利要求1所述的方法,其特征在于,所述系统总增益的计算过程包括:
    获取所述图像采集设备采集的包含灰阶板的多个第四图像;
    针对每一像素点,计算该像素点在各个第四图像中的原始像素值的均值和方差;
    以均值为横坐标,方差为纵坐标,对各个像素点对应的均值和方差进行直线拟合,得到目标直线;
    将所述目标直线的斜率,确定为所述系统总增益。
  4. 根据权利要求3所述的方法,其特征在于,所述基于各个第三像素值,生成所述第一图像对应的,环境光亮度小于所述第二预设亮度阈值的目标图像的原始像素值,包括:
    针对各个像素点,在对应的第三像素值的基础上,添加读入噪声和/或行噪声,得到各像素点对应的第四像素值;
    基于各个第四像素值,得到所述第一图像对应的,环境光亮度小于所述第二预设亮度阈值的目标图像的原始像素值。
  5. 根据权利要求4所述的方法，其特征在于，所述读入噪声符合第一高斯分布，所述第一高斯分布的均值为0，方差为所述目标直线中横坐标为0时的纵坐标的数值的平方根；
    所述行噪声符合第二高斯分布,所述第二高斯分布的均值为0,方差为所述图像采集设备在曝光时间为0,且入射光强为0时采集的第五图像中各行像素值的标准差;一个行像素值为所述第五图像中一行像素点的原始像素值的均值。
  6. 根据权利要求1所述的方法,其特征在于,在所述基于各个第三像素值,生成所述第一图像对应的,环境光亮度小于所述第二预设亮度阈值的目标图像的原始像素值之后,所述方法还包括:
    将所述目标图像的原始像素值作为输入数据,以及将所述第一像素值作为对应的输出数据,对待训练的图像增强网络模型进行训练,直至收敛,得到图像增强网络模型。
  7. 根据权利要求6所述的方法,其特征在于,所述图像增强网络模型包含编码部分和解码部分,所述解码部分的每一网络层的输入数据为:该网络层的前一网络层输出的第一特征图,与所述编码部分中与该网络层对应的网络层输出的第二特征图叠加得到的。
  8. 根据权利要求1所述的方法,其特征在于,所述基于环境光亮度大于第一预设亮度阈值时,图像采集设备采集的第一图像的原始像素值,得到所述第一图像中每一像素点的第一像素值,包括:
    获取环境光亮度大于第一预设亮度阈值时,图像采集设备采集的第一图像的RAW数据中各像素点的像素值,作为待处理像素值;
    将每一待处理像素值分别减去所述图像采集设备的黑电平值,得到所述第一图像中每一像素点的第一像素值。
  9. 一种图像数据生成装置,其特征在于,所述装置包括:
    第一像素值获取模块,用于基于环境光亮度大于第一预设亮度阈值时,图像采集设备采集的第一图像的原始像素值,得到所述第一图像中每一像素点的第一像素值;
    第二像素值获取模块,用于将每一第一像素值除以预设倍数,得到每一像素点对应的第二像素值;其中,所述预设倍数为:所述图像采集设备在环境光亮度大于所述第一预设亮度阈值时采集的第二图像,与在环境光亮度小于第二预设亮度阈值时采集的第三图像的原始像素值的比值;所述第二预设亮度阈值小于所述第一预设亮度阈值;
    第三像素值获取模块,用于针对每一像素点,在该像素点对应的第二像素值的基础上,基于所述图像采集设备的系统总增益添加泊松噪声,得到该像素点对应的第三像素值;其中,所述系统总增益为:基于所述图像采集设备采集的,包含灰阶板的第四图像的原始像素值的分布情况确定;
    图像数据生成模块,用于基于各个第三像素值,生成所述第一图像对应的,环境光亮度小于所述第二预设亮度阈值的目标图像的原始像素值。
  10. 根据权利要求9所述的装置,其特征在于,所述第三像素值获取模块,包括:
    第一电荷数获取子模块，用于针对每一像素点，将该像素点对应的第二像素值除以所述图像采集设备的系统总增益，得到该像素点对应的电荷数，作为第一电荷数；
    第二电荷数获取子模块,用于在该像素点对应的第一电荷数的基础上,添加泊松噪声,得到该像素点对应的第二电荷数;
    第三像素值获取子模块,用于将该像素点对应的第二电荷数乘以所述系统总增益,得到该像素点对应的第三像素值。
  11. 根据权利要求9所述的装置,其特征在于,所述装置还包括:
    第四图像获取模块,用于获取所述图像采集设备采集的包含灰阶板的多个第四图像;
    计算模块,用于针对每一像素点,计算该像素点在各个第四图像中的原始像素值的均值和方差;
    直线拟合模块,用于以均值为横坐标,方差为纵坐标,对各个像素点对应的均值和方差进行直线拟合,得到目标直线;
    系统总增益确定模块,用于将所述目标直线的斜率,确定为所述系统总增益。
  12. 根据权利要求11所述的装置,其特征在于,所述图像数据生成模块,包括:
    第四像素值获取子模块,用于针对各个像素点,在对应的第三像素值的基础上,添加读入噪声和/或行噪声,得到各像素点对应的第四像素值;
    图像数据生成子模块,用于基于各个第四像素值,得到所述第一图像对应的,环境光亮度小于所述第二预设亮度阈值的目标图像的原始像素值。
  13. 根据权利要求12所述的装置,其特征在于,所述读入噪声符合第一高斯分布,所述第一高斯分布的均值为0,方差为所述目标直线中横坐标为0时的纵坐标的数值的平方根;
    所述行噪声符合第二高斯分布,所述第二高斯分布的均值为0,方差为所述图像采集设备在曝光时间为0,且入射光强为0时采集的第五图像中各行像素值的标准差;一个行像素值为所述第五图像中一行像素点的原始像素值的均值。
  14. 根据权利要求9所述的装置,其特征在于,所述装置还包括:
    训练模块,用于在所述基于各个第三像素值,生成所述第一图像对应的,环境光亮度小于所述第二预设亮度阈值的目标图像的原始像素值之后,将所述目标图像的原始像素值作为输入数据,以及将所述第一像素值作为对应的输出数据,对待训练的图像增强网络模型进行训练,直至收敛,得到图像增强网络模型。
  15. 根据权利要求14所述的装置,其特征在于,所述图像增强网络模型包含编码部分和解码部分,所述解码部分的每一网络层的输入数据为:该网络层的前一网络层输出的第一特征图,与所述编码部分中与该网络层对应的网络层输出的第二特征图叠加得到的。
  16. 根据权利要求9所述的装置,其特征在于,所述第一像素值获取模块,包括:
    待处理像素值获取子模块，用于获取环境光亮度大于第一预设亮度阈值时，图像采集设备采集的第一图像的RAW数据中各像素点的像素值，作为待处理像素值；
    第一像素值获取子模块,用于将每一待处理像素值分别减去所述图像采集设备的黑电平值,得到所述第一图像中每一像素点的第一像素值。
  17. 一种电子设备,其特征在于,包括处理器、通信接口、存储器和通信总线,其中,处理器,通信接口,存储器通过通信总线完成相互间的通信;
    存储器,用于存放计算机程序;
    处理器,用于执行存储器上所存放的程序时,实现权利要求1-8任一所述的方法步骤。
  18. 一种计算机可读存储介质,其特征在于,所述计算机可读存储介质内存储有计算机程序,所述计算机程序被处理器执行时实现权利要求1-8任一所述的方法步骤。
  19. 一种包含指令的计算机程序产品,其特征在于,当其在计算机上运行时,使得计算机执行权利要求1-8任一所述的方法步骤。
PCT/CN2022/076725 2021-06-22 2022-02-18 一种图像数据生成方法和装置 WO2022267494A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US17/791,126 US20240179421A1 (en) 2021-06-22 2022-02-18 Method and apparatus for generating image data

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110690725.8 2021-06-22
CN202110690725.8A CN113256537B (zh) 2021-06-22 2021-06-22 一种图像数据生成方法和装置

Publications (1)

Publication Number Publication Date
WO2022267494A1 true WO2022267494A1 (zh) 2022-12-29

Family

ID=77189121

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/076725 WO2022267494A1 (zh) 2021-06-22 2022-02-18 一种图像数据生成方法和装置

Country Status (3)

Country Link
US (1) US20240179421A1 (zh)
CN (1) CN113256537B (zh)
WO (1) WO2022267494A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117218113A (zh) * 2023-11-06 2023-12-12 铸新科技(苏州)有限责任公司 一种炉管的氧化程度分析方法、装置、计算机设备及介质

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113256537B (zh) * 2021-06-22 2022-01-07 英特灵达信息技术(深圳)有限公司 一种图像数据生成方法和装置
CN114494080A (zh) * 2022-03-28 2022-05-13 英特灵达信息技术(深圳)有限公司 一种图像生成方法、装置、电子设备及存储介质

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090040516A1 (en) * 2007-08-10 2009-02-12 Honeywell International Inc. Spectroscopic system
CN108986050A (zh) * 2018-07-20 2018-12-11 北京航空航天大学 一种基于多分支卷积神经网络的图像和视频增强方法
CN110610463A (zh) * 2019-08-07 2019-12-24 深圳大学 一种图像增强方法及装置
CN111260579A (zh) * 2020-01-17 2020-06-09 北京理工大学 一种基于物理噪声生成模型的微光图像去噪增强方法
CN113256537A (zh) * 2021-06-22 2021-08-13 英特灵达信息技术(深圳)有限公司 一种图像数据生成方法和装置

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101052096A (zh) * 2006-04-04 2007-10-10 广达电脑股份有限公司 用以调整图像的对比度的方法及装置
JP2009290827A (ja) * 2008-06-02 2009-12-10 Sony Corp 画像処理装置および画像処理方法
US9600887B2 (en) * 2013-12-09 2017-03-21 Intel Corporation Techniques for disparity estimation using camera arrays for high dynamic range imaging
CN104050645B (zh) * 2014-06-23 2017-01-11 小米科技有限责任公司 图像处理方法及装置
JP6578454B2 (ja) * 2017-01-10 2019-09-18 富士フイルム株式会社 ノイズ処理装置及びノイズ処理方法
CN108055486B (zh) * 2017-12-07 2020-02-14 浙江华睿科技有限公司 一种亮度校正方法及装置
US11189017B1 (en) * 2018-09-11 2021-11-30 Apple Inc. Generalized fusion techniques based on minimizing variance and asymmetric distance measures
US10852123B2 (en) * 2018-10-25 2020-12-01 Government Of The United States Of America, As Represented By The Secretary Of Commerce Apparatus for critical-dimension localization microscopy
CN113168671A (zh) * 2019-03-21 2021-07-23 华为技术有限公司 噪点估计
CN109949353A (zh) * 2019-03-25 2019-06-28 北京理工大学 一种低照度图像自然感彩色化方法
CN111401411B (zh) * 2020-02-28 2023-09-29 北京小米松果电子有限公司 获取样本图像集的方法及装置

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090040516A1 (en) * 2007-08-10 2009-02-12 Honeywell International Inc. Spectroscopic system
CN108986050A (zh) * 2018-07-20 2018-12-11 北京航空航天大学 一种基于多分支卷积神经网络的图像和视频增强方法
CN110610463A (zh) * 2019-08-07 2019-12-24 深圳大学 一种图像增强方法及装置
CN111260579A (zh) * 2020-01-17 2020-06-09 北京理工大学 一种基于物理噪声生成模型的微光图像去噪增强方法
CN113256537A (zh) * 2021-06-22 2021-08-13 英特灵达信息技术(深圳)有限公司 一种图像数据生成方法和装置

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117218113A (zh) * 2023-11-06 2023-12-12 铸新科技(苏州)有限责任公司 一种炉管的氧化程度分析方法、装置、计算机设备及介质
CN117218113B (zh) * 2023-11-06 2024-05-24 铸新科技(苏州)有限责任公司 一种炉管的氧化程度分析方法、装置、计算机设备及介质

Also Published As

Publication number Publication date
US20240179421A1 (en) 2024-05-30
CN113256537B (zh) 2022-01-07
CN113256537A (zh) 2021-08-13

Similar Documents

Publication Publication Date Title
WO2022267494A1 (zh) 一种图像数据生成方法和装置
CN109064396B (zh) 一种基于深度成分学习网络的单幅图像超分辨率重建方法
CN110428366B (zh) 图像处理方法和装置、电子设备、计算机可读存储介质
US20170256036A1 (en) Automatic microlens array artifact correction for light-field images
US20180007289A1 (en) Image sensor, imaging device, mobile terminal and imaging method
WO2019223594A1 (zh) 神经网络模型处理方法和装置、图像处理方法、移动终端
CN114972085B (zh) 一种基于对比学习的细粒度噪声估计方法和系统
WO2021237732A1 (zh) 图像对齐方法及装置、电子设备、存储介质
MX2011009714A (es) Metodo y aparato para realizar autenticacion de video sobre un usuario.
JP2021197144A (ja) 画像ノイズ除去モデルの訓練方法、画像ノイズ除去方法、装置及び媒体
CN107465903B (zh) 图像白平衡方法、装置和计算机可读存储介质
WO2020215180A1 (zh) 图像处理方法、装置和电子设备
CN113962859B (zh) 一种全景图生成方法、装置、设备及介质
WO2019029573A1 (zh) 图像虚化方法、计算机可读存储介质和计算机设备
WO2023010750A1 (zh) 一种图像颜色映射方法、装置、终端设备及存储介质
WO2023125440A1 (zh) 一种降噪方法、装置、电子设备及介质
WO2022247232A1 (zh) 一种图像增强方法、装置、终端设备及存储介质
CN102449662B (zh) 用于处理数字图像的自适应方法和图像处理设备
CN111369557A (zh) 图像处理方法、装置、计算设备和存储介质
CN114494080A (zh) 一种图像生成方法、装置、电子设备及存储介质
Lian et al. [Retracted] Film and Television Animation Sensing and Visual Image by Computer Digital Image Technology
CN111681191B (zh) 一种基于fpga的彩色图像去马赛克方法、系统及存储介质
US20230245276A1 (en) Method and apparatus for acquiring raw image, and electronic device
CN114331893A (zh) 一种获取图像噪声的方法、介质和电子设备
CN114240794A (zh) 图像处理方法、系统、设备及存储介质

Legal Events

Date Code Title Description
WWE Wipo information: entry into national phase

Ref document number: 17791126

Country of ref document: US

121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22827003

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 22827003

Country of ref document: EP

Kind code of ref document: A1