WO2020207261A1 - Image processing method and apparatus based on multi-frame images, and electronic device - Google Patents

Image processing method and apparatus based on multi-frame images, and electronic device

Info

Publication number
WO2020207261A1
WO2020207261A1 · PCT/CN2020/081433 · CN2020081433W
Authority
WO
WIPO (PCT)
Prior art keywords
image, original, images, frame, noise
Prior art date
Application number
PCT/CN2020/081433
Other languages
English (en)
French (fr)
Inventor
黄杰文
Original Assignee
Oppo广东移动通信有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Oppo广东移动通信有限公司
Publication of WO2020207261A1

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60: Control of cameras or camera modules
    • H04N23/62: Control of parameters via user interfaces
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/70: Circuitry for compensating brightness variation in the scene
    • H04N23/73: Circuitry for compensating brightness variation in the scene by influencing the exposure time
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N25/00: Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N25/60: Noise processing, e.g. detecting, correcting, reducing or removing noise
    • H04N25/62: Detection or reduction of noise due to excess charges produced by the exposure, e.g. smear, blooming, ghost image, crosstalk or leakage between pixels

Definitions

  • This application relates to the field of imaging technology, and in particular to an image processing method, device, and electronic equipment based on multi-frame images.
  • Mobile terminal devices, such as smart phones and tablet computers, have built-in cameras, and with the growth of mobile processing power and the development of camera technology, the performance of these built-in cameras keeps improving and the quality of the images they capture keeps rising.
  • Mobile terminal devices are simple to operate and easy to carry, so in daily life more and more users take pictures with smart phones, tablet computers, and other mobile terminals.
  • This application aims to solve, at least to some extent, one of the technical problems in the related art.
  • The purpose of this application is therefore to propose an image processing method, apparatus, and electronic device based on multi-frame images that can more accurately distinguish picture noise from effective detail in high dynamic range images. This helps reduce the number of original image frames that must be captured, shortening the total time required for the overall shooting process, avoiding blur caused by an overly long capture, and making it possible to shoot dynamic night scenes clearly.
  • The image processing method based on multi-frame images proposed in the embodiment of the first aspect of the present application includes: obtaining multiple frames of original images; denoising a partial set of the original images based on artificial intelligence to obtain a denoised image, where the partial set contains at least two of the multiple frames; and synthesizing a high dynamic range image from the denoised image and the remaining original images, where the partial set and the remaining images together constitute the multiple frames of original images.
  • With the multi-frame image processing method of the first aspect, multiple frames of original images are obtained; at least two of them are denoised based on artificial intelligence to obtain a denoised image; and a high dynamic range image is synthesized from the denoised image and the remaining original images. Picture noise can thus be distinguished from effective detail more accurately, which helps reduce the number of original frames acquired, shortens the total shooting time, avoids blur from an overly long capture, and makes it easier to shoot dynamic night scenes clearly.
  • The image processing apparatus based on multi-frame images proposed in the embodiment of the second aspect of the present application includes: an acquisition module for acquiring multiple frames of original images; a noise reduction module for denoising a partial set of the original images based on artificial intelligence to obtain a denoised image, where the partial set contains at least two of the multiple frames; and a synthesis module for synthesizing a high dynamic range image from the denoised image and the remaining original images, where the partial set and the remaining images together constitute the multiple frames of original images.
  • With the image processing apparatus of the second aspect, multiple frames of original images are obtained; at least two of them are denoised based on artificial intelligence to obtain a denoised image; and a high dynamic range image is synthesized from the denoised image and the remaining original images. Picture noise can thus be distinguished from effective detail more accurately, which helps reduce the number of original frames acquired, shortens the total shooting time, avoids blur from an overly long capture, and makes it easier to shoot dynamic night scenes clearly.
  • The electronic device proposed in the embodiment of the third aspect of the present application includes: an image sensor, a memory, a processor, and a computer program stored in the memory and runnable on the processor. The image sensor is electrically connected to the processor, and when the processor executes the program, it implements the image processing method based on multi-frame images proposed in the embodiment of the first aspect of the present application.
  • With the electronic device of the third aspect, multiple frames of original images are obtained; at least two of them are denoised based on artificial intelligence to obtain a denoised image; and a high dynamic range image is synthesized from the denoised image and the remaining original images. Picture noise can thus be distinguished from effective detail more accurately, which helps reduce the number of original frames acquired, shortens the total shooting time, avoids blur caused by an overly long capture, and makes it easier to shoot dynamic night scenes clearly.
  • The image processing circuit proposed in the embodiment of the fourth aspect of the present application includes an image signal processing (ISP) processor and a graphics processing unit (GPU). The ISP processor is electrically connected to the image sensor and controls it to acquire multiple frames of original images. The GPU is electrically connected to the ISP processor and denoises a partial set of the original images based on artificial intelligence to obtain a denoised image, where the partial set contains at least two of the multiple frames. The ISP processor then synthesizes a high dynamic range image from the denoised image and the remaining original images, where the partial set and the remaining images together constitute the multiple frames of original images.
  • With the image processing circuit of the fourth aspect, multiple frames of original images are obtained; at least two of them are denoised based on artificial intelligence to obtain a denoised image; and a high dynamic range image is synthesized from the denoised image and the remaining original images. Picture noise can thus be distinguished from effective detail more accurately, which helps reduce the number of original frames acquired, shortens the total shooting time, avoids blur caused by an overly long capture, and makes it easier to shoot dynamic night scenes clearly.
  • The computer-readable storage medium provided by the embodiment of the fifth aspect of the present application stores a computer program which, when executed by a processor, implements the image processing method based on multi-frame images described above.
  • With the computer-readable storage medium of the fifth aspect, multiple frames of original images are obtained; at least two of them are denoised based on artificial intelligence to obtain a denoised image; and a high dynamic range image is synthesized from the denoised image and the remaining original images. Picture noise can thus be distinguished from effective detail more accurately, which helps reduce the number of original frames acquired, shortens the total shooting time, avoids blur caused by an overly long capture, and makes it easier to shoot dynamic night scenes clearly.
  • FIG. 1 is a schematic flowchart of the first image processing method based on multi-frame images provided by an embodiment of this application;
  • FIG. 2 is a schematic flowchart of a second image processing method based on multi-frame images provided by an embodiment of the application;
  • FIG. 3 is a schematic flowchart of a third image processing method based on multi-frame images provided by an embodiment of the application;
  • FIG. 4 is a schematic flowchart of a fourth image processing method based on multi-frame images provided by an embodiment of the application;
  • FIG. 5 is a schematic diagram of an application process in this application.
  • FIG. 6 is a schematic structural diagram of the first image processing device based on multi-frame images provided by an embodiment of the application;
  • FIG. 7 is a schematic structural diagram of a second image processing device based on multi-frame images provided by an embodiment of the application.
  • FIG. 8 is a schematic structural diagram of an electronic device provided by an embodiment of the application.
  • FIG. 9 is a schematic diagram of the principle of an electronic device provided by an embodiment of the application.
  • FIG. 10 is a schematic diagram of the principle of an image processing circuit provided by an embodiment of the application.
  • To this end, this application proposes an image processing method based on multi-frame images: obtain multiple frames of original images; denoise a partial set of them (at least two frames) based on artificial intelligence to obtain a denoised image; and synthesize a high dynamic range image from the denoised image and the remaining original images, where the partial set and the remaining images together constitute the multiple frames of original images.
  • FIG. 1 is a schematic flowchart of the first image processing method based on multi-frame images provided by an embodiment of the application.
  • the image processing method based on multi-frame images in the embodiments of the present application is applied to an electronic device.
  • the electronic device may be a mobile phone, a tablet computer, a personal digital assistant, a wearable device, and other hardware devices with various operating systems and imaging devices.
  • the image processing method based on multi-frame images includes the following steps:
  • Step 101 Obtain multiple frames of original images.
  • The original image may be, for example, an unprocessed RAW-format image collected by the image sensor of the electronic device; this is not limited here.
  • A RAW-format image is the original data produced when the image sensor converts the captured light signal into a digital signal.
  • the RAW format image records the original information of the digital camera sensor, as well as some metadata generated by the camera, such as the sensitivity setting, shutter speed, aperture value, white balance, etc.
  • A preview image of the current shooting scene can be obtained to determine whether the scene is a night scene: ambient brightness differs between scenes, so the preview content differs as well. After the current scene is judged to be a night scene from the preview content and the ambient brightness of each area, the night-scene shooting mode can be started and multiple frames of original images acquired under the corresponding exposures.
  • If the screen content of the preview image includes the night sky or night-scene light sources, or if the ambient brightness values across the areas of the preview image match the brightness distribution characteristic of images taken in a night-scene environment, the current shooting scene can be determined to be a night scene.
  • By shooting multiple frames of original images, the electronic device can perform image synthesis, and can also select the clear frames among them for synthesis and imaging.
  • The image sensor of the electronic device can be controlled to shoot multiple frames of original images under different exposures, for example using low-exposure shots to image bright areas clearly and high-exposure shots to image dark areas clearly.
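  • As an illustrative sketch (the helper name and EV offsets are assumptions, not values from this application), the exposure times for such a bracket can be derived from a reference exposure, since each EV step of -1 halves the exposure time:

```python
def bracketed_exposure_times(reference_time_s, evs=(-2, 0)):
    """Exposure times for a small exposure bracket.

    Each EV step of -1 halves the exposure time: EV -2 gives a quarter
    of the reference exposure (clean highlights), while EV 0 keeps the
    reference exposure (detail in dark areas).
    """
    return [reference_time_s * (2.0 ** ev) for ev in evs]

# A 0.2 s reference exposure bracketed at EV -2 and EV 0.
times = bracketed_exposure_times(0.2)
```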
  • Step 102 Denoise a partial set of the original images based on artificial intelligence to obtain a denoised image, where the partial set contains at least two of the multiple original images.
  • The partial set of original images comprises some of the multiple frames collected in the step above. More specifically, it may be at least two frames of a first image shot at the same exposure; correspondingly, the remaining original images may be at least one frame of a second image shot at a lower exposure than the first image.
  • Because the image sensor in the electronic device is subject to varying degrees of light and electromagnetic interference from its peripheral circuits and its own pixels during shooting, the captured original images inevitably contain noise, and different degrees of interference yield images of different sharpness. The collected multiple frames of original images therefore also contain noise, and a partial set of them can be further denoised based on artificial intelligence to obtain a denoised image.
  • Night-scene images are usually captured with a larger aperture and a longer exposure time; if a higher sensitivity is instead selected to shorten the exposure time, the captured images will inevitably contain noise.
  • As a possible implementation, multi-frame fusion noise reduction may first be performed on the partial set of original images to obtain a first denoised image.
  • That is, the partial set of original images is aligned and synthesized into a multi-frame fusion image (which may be called the first denoised image); this is equivalent to temporal denoising and initially improves the signal-to-noise ratio of the picture.
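  • The temporal fusion described above can be sketched as a plain average of aligned same-exposure frames; in the sketch below the frames are assumed to be already registered (a real pipeline adds alignment and ghost rejection), and averaging N frames cuts the noise standard deviation by roughly the square root of N:

```python
import numpy as np

def fuse_frames(frames):
    """Average a stack of aligned, same-exposure RAW frames.

    Temporal fusion: the signal is identical across frames while the
    noise is independent, so averaging N frames reduces the noise
    standard deviation by about sqrt(N).
    """
    stack = np.stack([f.astype(np.float64) for f in frames])
    return stack.mean(axis=0)

# Demo: a clean gradient plus Gaussian noise, fused over 4 frames.
rng = np.random.default_rng(0)
clean = np.tile(np.linspace(0.2, 0.8, 64), (64, 1))
frames = [clean + rng.normal(0.0, 0.05, clean.shape) for _ in range(4)]
fused = fuse_frames(frames)

noise_single = np.std(frames[0] - clean)  # ~0.05
noise_fused = np.std(fused - clean)       # ~0.025 for 4 frames
```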
  • Then, based on artificial intelligence, the first denoised image is further processed to obtain the final denoised image; this can simultaneously reduce the noise in both the highlight and the dark areas of the first denoised image, thereby achieving a better denoising effect.
  • The denoising of the first denoised image is based on artificial intelligence, and the resulting denoised image is still an unprocessed RAW image.
  • Specifically, a neural network model can be used to identify the noise characteristics of the first denoised image, where the model has learned the mapping between the sensitivity of such images and their noise characteristics; the first denoised image is then denoised according to the identified noise characteristics to obtain the denoised image.
  • Because the neural network model has learned the mapping between the sensitivity of the first denoised image and its noise characteristics, the first denoised image can be input into the model to identify those characteristics, and the image can then be denoised accordingly, achieving noise reduction and improving the signal-to-noise ratio of the image.
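  • A minimal sketch of this idea, with an invented ISO-to-noise-variance table standing in for the trained neural network model (the table values and function name are assumptions, not from this application), is a Wiener-style shrinkage filter that smooths deviations the noise profile can explain and keeps the ones it cannot:

```python
import numpy as np

# Hypothetical learned mapping from sensitivity (ISO) to noise variance,
# standing in for the trained neural network model described above.
ISO_TO_NOISE_VAR = {100: 0.0004, 200: 0.0016, 400: 0.0064}

def denoise_by_noise_profile(img, iso):
    """Wiener-style shrinkage: where the local deviation is explained
    by the noise variance predicted for this ISO, pull the pixel toward
    its local mean; where it exceeds it, keep the detail."""
    noise_var = ISO_TO_NOISE_VAR[iso]
    h, w = img.shape
    p = np.pad(img, 1, mode="edge")
    # Local mean via a 3x3 box filter.
    local_mean = sum(
        p[i:i + h, j:j + w] for i in range(3) for j in range(3)
    ) / 9.0
    local_var = np.maximum((img - local_mean) ** 2, 1e-12)
    # Gain -> 0 where the deviation looks like pure noise,
    # gain -> 1 where it exceeds the predicted noise level.
    gain = np.maximum(local_var - noise_var, 0.0) / local_var
    return local_mean + gain * (img - local_mean)

# Demo: a flat gray patch with ISO-200-level Gaussian noise.
rng = np.random.default_rng(1)
noisy = 0.5 + rng.normal(0.0, 0.04, (64, 64))
denoised = denoise_by_noise_profile(noisy, iso=200)
```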
  • The neural network model is only one possible way to achieve noise reduction based on artificial intelligence; in actual implementation, any other feasible way can be used, for example traditional programming techniques (such as simulation and engineering methods), or a genetic algorithm combined with an artificial neural network.
  • Sensitivity, also known as the ISO value, is an index measuring a film's sensitivity to light. A low-sensitivity film requires a longer exposure time to achieve the same image as a high-sensitivity film.
  • the sensitivity of a digital camera is an indicator similar to the sensitivity of a film.
  • The ISO of a digital camera can be adjusted by adjusting the sensitivity of the photosensitive device or by combining photosensitive points; that is, ISO is raised by increasing the light sensitivity of the photosensitive device or by combining several adjacent photosensitive points.
  • the noise characteristic may be the statistical characteristic of random noise caused by the image sensor.
  • The noise mentioned here mainly includes thermal noise and shot noise: thermal noise follows a Gaussian distribution, and shot noise follows a Poisson distribution. The statistical characteristic in the embodiments of this application may be the variance of the noise, though other feasible values are possible; this is not limited here.
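  • These two noise models can be checked numerically. In the simulation below (electron counts and the read-noise level are illustrative, not from this application), the measured variance of a flat patch is approximately the signal level (shot noise, Poisson: variance equals mean) plus the squared read-noise level (thermal noise, Gaussian):

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_sensor_noise(signal_electrons, read_noise_std=2.0):
    """Simulate one RAW exposure: shot noise is Poisson in the photon
    count (variance == mean), thermal/read noise is Gaussian."""
    shot = rng.poisson(signal_electrons).astype(np.float64)
    thermal = rng.normal(0.0, read_noise_std, size=shot.shape)
    return shot + thermal

# Flat patches at two brightness levels (in electrons).
dim = simulate_sensor_noise(np.full(100_000, 50.0))
bright = simulate_sensor_noise(np.full(100_000, 500.0))

# Total variance ~= signal (shot) + read_noise_std**2 (thermal).
var_dim = dim.var()        # expected near 50 + 4 = 54
var_bright = bright.var()  # expected near 500 + 4 = 504
```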
  • Step 103 Synthesize a high dynamic range image from the denoised image and the remaining original images, where the partial set of original images and the remaining original images together constitute the multiple frames of original images.
  • some original frames may be at least two frames of first images with the same exposure.
  • other frames of original images may be at least one frame of second images with lower exposure than the first image.
  • a high dynamic range image can be obtained by synthesizing the noise-reduced image with at least one frame of the second image.
  • High-Dynamic Range (HDR) images provide a greater dynamic range and more image detail than ordinary images.
  • A high dynamic range image is synthesized from low dynamic range (LDR, Low-Dynamic Range) images, and can better reflect the visual effect of the real environment.
  • Because the denoised image and the remaining original images were taken under different exposure conditions (the former also having undergone noise reduction), they contain picture information at different brightness levels.
  • the noise-reduced image and other original images may be overexposed, underexposed, or properly exposed.
  • Through synthesis, the scenes in the high dynamic range image can be properly exposed as much as possible, making it closer to the actual scene.
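  • A minimal sketch of such a merge (the saturation threshold and blending window are assumptions, not values from this application) scales the short exposure onto the radiance scale of the denoised frame and trusts the short exposure only where the denoised frame clips:

```python
import numpy as np

def merge_hdr(denoised, low_exp, exposure_ratio):
    """Merge a denoised normal-exposure frame with a shorter-exposure
    frame. `exposure_ratio` = normal_exposure / low_exposure (> 1).

    Both frames are brought to a common radiance scale, then blended
    with a weight that trusts the normal exposure except near clipping.
    """
    rad_normal = denoised.astype(np.float64)
    rad_low = low_exp.astype(np.float64) * exposure_ratio
    # Trust the normal exposure until it approaches saturation (1.0);
    # the 0.95/0.15 window is an illustrative choice.
    w = np.clip((0.95 - rad_normal) / 0.15, 0.0, 1.0)
    return w * rad_normal + (1.0 - w) * rad_low

# Demo: a highlight clipped at 1.0 in the normal exposure is recovered
# from the low exposure (true radiance 1.6, captured as 0.4 at 1/4 exp).
denoised = np.array([[0.5, 1.0]])
low_exp = np.array([[0.125, 0.4]])
hdr = merge_hdr(denoised, low_exp, exposure_ratio=4.0)
```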
  • the image format that can be processed by the display of the electronic device is the YUV format.
  • the luminance signal of the image is called Y
  • the chrominance signal is composed of two mutually independent signals.
  • the two chrominance signals are often called U and V.
  • The high dynamic range image can be format-converted by the image signal processor (ISP, Image Signal Processing): the RAW-format high dynamic range image is converted into a YUV-format image. Because the display interface of the display is limited in size, the converted YUV image can be compressed to the preview size for preview display, achieving a better preview effect.
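  • For illustration, this conversion can be sketched with the common BT.601 matrix (a given ISP may use different coefficients) plus a naive box downscale to preview size:

```python
import numpy as np

# BT.601 RGB -> YUV matrix (one common YUV definition; the exact
# coefficients used by a particular ISP may differ).
RGB_TO_YUV = np.array([
    [ 0.299,  0.587,  0.114],   # Y: luminance
    [-0.147, -0.289,  0.436],   # U: blue-difference chrominance
    [ 0.615, -0.515, -0.100],   # V: red-difference chrominance
])

def rgb_to_yuv(rgb):
    """Convert an (H, W, 3) RGB image in [0, 1] to YUV."""
    return rgb @ RGB_TO_YUV.T

def downscale_for_preview(img, factor):
    """Naive box downscale by averaging factor x factor blocks."""
    h = img.shape[0] // factor * factor
    w = img.shape[1] // factor * factor
    img = img[:h, :w]
    return img.reshape(h // factor, factor, w // factor, factor, -1).mean(axis=(1, 3))

# Demo: a white image has Y = 1 and zero chrominance; then halve it
# to preview size.
yuv = rgb_to_yuv(np.ones((4, 4, 3)))
preview = downscale_for_preview(yuv, 2)
```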
  • In the embodiments of this application, multiple frames of original images are obtained; at least two of them are denoised based on artificial intelligence to obtain a denoised image; and a high dynamic range image is synthesized from the denoised image and the remaining original images. Picture noise can thus be distinguished from effective detail more accurately, which helps reduce the number of original frames acquired and, for each frame, allows a higher acquisition sensitivity that shortens the shooting time. The total time required for the overall shooting process is therefore shortened, blur caused by an overly long capture is avoided, and dynamic night scenes can be shot clearly.
  • In addition, denoising only the partial set of original images based on artificial intelligence and then synthesizing the denoised image with the remaining original images into a high dynamic range image guarantees the denoising effect while reducing the amount of denoising computation, so that imaging efficiency is improved along with a better imaging result.
  • FIG. 2 is a schematic flowchart of the second image processing method based on multi-frame images provided by an embodiment of the application, which specifically includes the following steps:
  • Step 201 Obtain sample images at each of a range of sensitivities.
  • the noise characteristics of the image have been marked in the sample image.
  • the sample image may be an image obtained by shooting with different sensitivity settings under different environmental brightness.
  • There should be a variety of ambient brightness levels, and for each ambient brightness, multiple frames of images are taken under different sensitivity conditions as sample images.
  • To improve the denoising effect, the ambient brightness and ISO ranges may be subdivided and the number of sample frames increased, so that after the first denoised image is input into the neural network model, the model can accurately identify its statistical characteristics.
  • Step 202 Train the neural network model using the sample images at each sensitivity.
  • Specifically, the statistical characteristics marked in the sample images are used as the training features: the marked sample images are input into the neural network model to train it to identify the statistical characteristics of an image.
  • The neural network model is only one possible way to achieve noise reduction based on artificial intelligence; in actual implementation, any other feasible way can be used, for example traditional programming techniques (such as simulation and engineering methods), or a genetic algorithm combined with an artificial neural network.
  • The statistical characteristics of the sample images are marked for training because marked samples clearly indicate the location and type of noise in the image, so the marked characteristics serve as the training features; after the first denoised image is input into the model, the statistical characteristics in the image can then be identified.
  • Step 203 The neural network model training is complete once the noise characteristics identified by the model match the noise characteristics marked in the corresponding sample images.
  • the neural network model is trained using sample images of each sensitivity until the noise characteristics identified by the neural network model match the statistical characteristics marked in the corresponding sample images.
  • In this way, sample images at each sensitivity are acquired and used to train the neural network model until the statistical characteristics it recognizes match those marked in the corresponding sample images, at which point training is complete. Because the model is trained on samples labeled with statistical characteristics at each sensitivity, it can accurately identify the statistical characteristics of the first denoised image once that image is input, achieving image noise reduction and improving shooting quality.
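  • The training loop can be sketched with a deliberately tiny stand-in for the neural network: a linear model of noise variance versus sensitivity, fitted by gradient descent until its predictions match the marked labels. The ISO/variance pairs below are invented for illustration:

```python
import numpy as np

# Labeled training data: (sensitivity, marked noise variance) pairs,
# as would be extracted from sample images shot at each ISO.
# The linear trend here is illustrative, not measured.
isos = np.array([100.0, 200.0, 400.0, 800.0])
labeled_var = np.array([0.0004, 0.0008, 0.0016, 0.0032])

# Tiny stand-in for the neural network: var = a * iso + b, trained by
# gradient descent on mean squared error until predictions match labels.
a, b = 0.0, 0.0
x = isos / 1000.0                    # scale inputs for stable steps
for _ in range(20000):
    pred = a * x + b
    err = pred - labeled_var
    a -= 0.01 * (err * x).mean()     # d(MSE)/da
    b -= 0.01 * err.mean()           # d(MSE)/db

predicted = a * (isos / 1000.0) + b  # should now match the labels
```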
  • FIG. 3 is a schematic flowchart of a third image processing method based on multi-frame images provided by an embodiment of the application.
  • Step 101 may also include:
  • Step 301 Determine the number of image frames n of the reference exposure according to the imaging quality of the preview image.
  • the preview image is obtained in advance, for example, it may be a preview image taken by turning on a camera, or it may be read from a memory, which is not limited.
  • n is a natural number greater than or equal to 2.
  • Optionally, the image frame number n may take a value of 3 or 4, to reduce the shooting time while still obtaining higher-quality images.
  • The imaging quality of the preview image can be measured by, for example, signal-to-noise ratio and/or imaging speed, and imaging quality is generally positively related to the number of captured frames: the better the imaging quality, the more frames are captured.
  • If the preview image is taken in tripod mode, the picture is relatively stable and a larger number of frames can be collected for subsequent synthesis; if it is taken in handheld mode, inevitable hand shake causes image jitter, so in the embodiments of this application fewer frames are collected for subsequent synthesis in order to avoid blurring the high dynamic range image.
  • The more frames that are acquired, the more picture information the synthesized image obtained during high dynamic synthesis contains and the closer it is to the actual scene; imaging quality is therefore positively related to the number of acquired frames, and the image frame number n of the reference exposure can be determined from the imaging quality of the preview image.
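  • As an illustrative sketch only (the SNR thresholds and the handheld cap are assumptions, not values from this application), the frame count n could be chosen as:

```python
def reference_frame_count(preview_snr_db, handheld):
    """Choose the number n (>= 2) of reference-exposure frames.

    Illustrative policy: a noisier preview asks for more frames so the
    fusion can recover detail, while handheld capture caps n lower to
    keep the total shooting time (and hence shake) down.
    """
    if preview_snr_db >= 30:
        n = 2              # clean preview: few frames suffice
    elif preview_snr_db >= 20:
        n = 3
    else:
        n = 4              # dark, noisy preview: more frames to fuse
    if handheld:
        n = min(n, 3)      # shorter burst for handheld shooting
    return n
```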
  • Step 302 Collect n frames of original images that meet the reference exposure.
  • n frames of original images meeting the reference exposure are further collected.
  • Optionally, the reference exposure duration of each frame to be collected can be determined so as to obtain images with different dynamic ranges, so that the synthesized image has a higher dynamic range, improving its overall brightness and quality.
  • FIG. 4 is a schematic flowchart of a fourth image processing method based on multi-frame images provided by an embodiment of this application. As shown in FIG. 4, step 302 may further include the following sub-steps:
  • the reference exposure is determined according to the illuminance of the shooting scene.
  • the exposure amount refers to how much light the photosensitive device in the electronic device receives during the exposure time.
  • the exposure amount is related to the aperture, the exposure time and the sensitivity.
  • The aperture is the light-passing opening, which determines the amount of light admitted per unit time;
  • the exposure time refers to the time for the light to pass through the lens;
  • Sensitivity, also known as the ISO value, is a measure of a film's sensitivity to light.
  • With the aperture fixed, the exposure amount is related to the exposure time and the sensitivity; for example, it can be taken as the product of the exposure time and the sensitivity.
  • the reference exposure in the related technology is defined as the exposure compensation level of zero, that is, EV0.
  • the preview image of the current shooting scene is acquired by the image sensor, and the ambient light brightness of each area of the preview image is further measured by the photosensitive device, and then the reference exposure is determined according to the brightness information of the preview image.
  • the reference exposure amount may specifically include the reference exposure duration and the reference sensitivity.
  • The reference exposure refers to the exposure determined to match the brightness of the current environment after the brightness information of the current shooting scene is obtained by metering the preview image.
  • The value of the reference exposure can be the product of the reference sensitivity and the reference exposure time.
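  • Under that product definition, fixing the reference sensitivity determines the per-frame exposure time; a one-line sketch (the numbers and units are illustrative, not from this application):

```python
def reference_exposure_time(reference_exposure, reference_iso):
    """Solve exposure = sensitivity x time for the time, once the
    reference sensitivity has been fixed from the jitter degree."""
    return reference_exposure / reference_iso

# Metering gives a reference exposure of 20 (ISO x seconds, units
# illustrative); at a reference sensitivity of ISO 100 each frame is
# exposed for 0.2 s, and at ISO 200 for 0.1 s.
t100 = reference_exposure_time(20.0, 100.0)
t200 = reference_exposure_time(20.0, 200.0)
```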
  • A reference sensitivity is set according to the degree of jitter of the preview image, or according to the degree of jitter of the image sensor that collects the preview image, so as to suit the current degree of shake; this is not limited here. The value range of the reference sensitivity can be ISO 100 to ISO 200.
  • the sensitivity of the captured image will affect the overall shooting time. If the shooting time is too long, it may increase the jitter of the image sensor during handheld shooting, thereby affecting the image quality. Therefore, the reference sensitivity corresponding to the collected preview image can be determined according to the screen shake degree of the preview image, or according to the shake degree of the image sensor for collecting the preview, so that the shooting time can be controlled within a proper range.
  • in order to determine the degree of jitter, displacement information may be collected by a displacement sensor provided in the electronic device; the picture jitter degree of the preview image, or the jitter degree of the image sensor that collects the preview image, can then be determined from the collected displacement information of the electronic device.
  • the current gyro-sensor information of the electronic device can be obtained to determine the current jitter degree of the electronic device, that is, the jitter degree of the image sensor that collects the preview image.
  • the gyroscope, also called the angular velocity sensor, can measure the rotational angular velocity when an object deflects or tilts.
  • the gyroscope can measure the rotation and deflection movements, so as to accurately analyze and judge the actual movements of the user.
  • the gyroscope information (gyro information) of the electronic device can include the movement information of the mobile phone in three dimensions in the three-dimensional space.
  • the three dimensions of the three-dimensional space can be expressed as the X-axis, Y-axis, and Z-axis directions respectively.
  • the X-axis, Y-axis, and Z-axis are in a pairwise vertical relationship.
  • the jitter degree of the image sensor collecting the preview image can be determined according to the current gyro information of the electronic device. The greater the absolute value of the gyro movement of the electronic device in the three directions, the greater the jitter of the image sensor that collects the preview image.
  • absolute-value thresholds of the gyro motion in the three directions can be preset, and the current jitter degree of the image sensor that collects the preview image is determined based on the relationship between the sum of the acquired absolute values of the current gyro motion in the three directions and the preset thresholds.
  • for example, the preset thresholds are the first threshold A, the second threshold B, and the third threshold C, with A<B<C, and the sum of the absolute values of the currently acquired gyro motion in the three directions is S. If S<A, it is determined that the jitter degree of the image sensor collecting the preview image is "no jitter"; if A≤S<B, it can be determined that the jitter degree is "slight jitter"; if B≤S≤C, it can be determined that the jitter degree is "small jitter"; if S>C, it can be determined that the jitter degree is "large jitter".
  • the number of thresholds and the specific value of each threshold can be preset according to actual needs, and the mapping relationship between gyro information and the jitter degree of the image sensor that collects the preview image can be preset according to the relationship between gyro information and each threshold.
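The threshold comparison described above can be sketched as follows. The threshold values and the gyro readings are illustrative assumptions only, since the application states that the number and value of the thresholds are preset according to actual needs:

```python
# Hypothetical sketch of the jitter classification: the sum of the absolute
# gyro motion in the X/Y/Z directions is compared with preset thresholds
# A < B < C. The concrete values 0.5/1.0/2.0 are assumptions.

def classify_jitter(gyro_xyz, a=0.5, b=1.0, c=2.0):
    """Map the sum of absolute gyro motion in three directions to a jitter level."""
    s = sum(abs(v) for v in gyro_xyz)
    if s < a:
        return "no jitter"
    if s < b:
        return "slight jitter"
    if s <= c:
        return "small jitter"
    return "large jitter"
```

A real implementation would read the gyro values from the device's Gyro-sensor interface rather than take them as a tuple.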
  • if the image sensor that collects the preview image has a small degree of jitter, the reference sensitivity corresponding to each frame of the image to be collected can be appropriately compressed to a small value to effectively suppress the noise of each frame of image and improve the quality of the captured image. If the image sensor that collects the preview image has a large degree of jitter, the reference sensitivity corresponding to each frame of the image to be collected can be appropriately increased to a larger value to shorten the shooting duration.
  • for example, if the jitter degree of the image sensor collecting the preview image is determined to be "no jitter", the reference sensitivity can be set to a smaller value to try to obtain a higher-quality image, for example, 100; if the jitter degree is determined to be "slight jitter", the reference sensitivity can be set to a larger value to reduce the shooting duration, for example, 120; if the jitter degree is "small jitter", the reference sensitivity can be further increased to reduce the shooting duration, for example, to 180; if the jitter degree is determined to be "large jitter", the current degree of jitter is too large, and the reference sensitivity can be further increased to reduce the shooting duration, for example, to 200.
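The example mapping above can be sketched as a simple lookup. Only the four example ISO values (100/120/180/200) come from the text; the function and table names are illustrative:

```python
# Illustrative mapping from jitter level to reference sensitivity (ISO),
# mirroring the example values given in the text. Higher jitter -> higher
# ISO, which shortens the total shooting time at the cost of more noise.

JITTER_TO_ISO = {
    "no jitter": 100,
    "slight jitter": 120,
    "small jitter": 180,
    "large jitter": 200,
}

def reference_sensitivity(jitter_level):
    return JITTER_TO_ISO[jitter_level]
```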
  • the above examples are only exemplary, and should not be regarded as limitations on the application.
  • the reference sensitivity can be changed to obtain the optimal solution.
  • the mapping relationship between the jitter degree of the image sensor that collects the preview image and the reference sensitivity corresponding to each frame of the image to be collected can be preset according to actual needs.
  • the jitter degree of the preview image is positively correlated with the jitter degree of the image sensor that collects the preview image.
  • the implementation process of setting the reference sensitivity refers to the above process, and will not be repeated here.
  • the reference exposure duration is determined according to the reference exposure amount and the set reference sensitivity.
  • the reference exposure amount includes the reference exposure duration and the reference sensitivity. Therefore, after the reference exposure amount is determined according to the illuminance of the shooting scene, and the reference sensitivity is determined according to the jitter degree of the preview image or of the image sensor that collects the preview image, the reference exposure duration can be determined according to the reference exposure amount and the reference sensitivity.
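Assuming, as stated above, that the exposure amount is the product of the exposure duration and the sensitivity, the reference exposure duration follows as a simple quotient. The numeric values below are illustrative, not from the application:

```python
# Minimal sketch under the assumption exposure_amount = duration * ISO.
# Units are arbitrary (ISO x seconds); the metered EV0 value is assumed.

def reference_exposure_duration(reference_exposure, reference_iso):
    """Derive the reference exposure duration from the metered reference
    exposure amount (EV0) and the reference sensitivity set for jitter."""
    return reference_exposure / reference_iso
```

For example, a metered exposure amount of 20 at a reference sensitivity of 100 ISO would yield a 0.2 s reference exposure duration under this assumption.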
  • n frames of original images are collected according to the reference exposure duration and the reference sensitivity.
  • specifically, the image sensor is controlled to perform image collection according to the exposure duration and reference sensitivity of each frame of original image to be collected.
  • Step 303: collect at least one frame of original image with an exposure lower than the reference exposure amount.
  • the reference exposure duration can be compensated according to the set exposure compensation level to obtain a compensated exposure duration shorter than the reference exposure duration; at least one frame of original image is then collected according to the compensated exposure duration and the reference sensitivity.
  • the exposure compensation level is a parameter for adjusting the exposure amount, so that some images are underexposed, some are overexposed, and some are properly exposed.
  • the exposure compensation level corresponding to at least one frame of the second image ranges from EV-5 to EV-1.
  • the at least one frame of original image may be called at least one frame of second image, specifically two frames of second images.
  • the two frames of second images correspond to different exposure compensation levels, and the exposure compensation levels of the two frames of second images are both less than EV0.
  • the reference exposure duration is compensated to obtain a compensated exposure duration shorter than the reference exposure duration; according to the compensated exposure duration and the reference sensitivity, the two frames of second images are collected.
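Treating each negative exposure compensation step as halving the exposure relative to EV0 (the usual EV convention; the application itself only gives the EV-5 to EV-1 range), the compensated durations for the two second images can be sketched as:

```python
# Hedged sketch: EV-compensated exposure durations. The choice of EV-3 and
# EV-1 for the two second images is an assumption, picked from the
# EV-5..EV-1 range stated above.

def compensated_duration(reference_duration, ev):
    # Each EV step doubles/halves the exposure relative to EV0.
    return reference_duration * (2.0 ** ev)

def capture_plan(reference_duration, evs=(-3, -1)):
    """Exposure durations for two underexposed second images."""
    return [compensated_duration(reference_duration, ev) for ev in evs]
```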
  • n frames of original images conforming to the reference exposure amount are collected, and at least one frame of original image lower than the reference exposure amount is collected.
  • the collected multiple frames of original images are thereby determined, improving image quality and obtaining a clearer imaging effect.
  • FIG. 6 is a schematic structural diagram of the first image processing device based on multi-frame images provided by an embodiment of the application.
  • the image processing device 600 based on multi-frame images includes: an acquisition module 610, a noise reduction module 620 and a synthesis module 630.
  • the obtaining module 610 is used to obtain multiple frames of original images
  • the noise reduction module 620 is configured to perform artificial-intelligence-based noise reduction on partial frames of the original images to obtain a noise-reduced image, where the partial frames of original images are at least two frames of the multiple frames of original images;
  • the synthesis module 630 is configured to synthesize a high dynamic range image according to the noise-reduced image and the other frames of original images, where the partial frames of original images and the other frames of original images together constitute the multiple frames of original images.
  • the noise reduction module 620 is specifically configured to:
  • a neural network model is used to identify the noise characteristics of the first noise-reduced image, where the neural network model has learned the mapping relationship between the sensitivity of the first noise-reduced image and the noise characteristics;
  • the neural network model is trained with sample images of various sensitivities until the noise characteristics identified by the neural network model match the noise characteristics annotated in the corresponding sample images, at which point the training of the neural network model is complete.
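As a loose illustration of the ISO-to-noise-characteristic mapping (not the actual neural network, whose architecture the application does not specify), the sketch below replaces the learned model with an assumed ISO-to-variance lookup and applies a Wiener-style shrinkage as a stand-in denoiser:

```python
import numpy as np

# Stand-in for the trained model: a lookup from ISO to a noise variance
# (the "noise characteristic"). The variance values are assumed annotations,
# like those marked in the sample images during training.
ISO_TO_NOISE_VAR = {100: 2.0, 200: 5.0, 400: 12.0}

def identify_noise_characteristic(iso):
    # Nearest annotated ISO stands in for the learned ISO->noise mapping.
    return ISO_TO_NOISE_VAR[min(ISO_TO_NOISE_VAR, key=lambda k: abs(k - iso))]

def denoise(image, iso):
    """Shrink pixel deviations from the mean according to the identified
    noise variance: stronger shrinkage when the noise variance is larger."""
    noise_var = identify_noise_characteristic(iso)
    mean = image.mean()
    signal_var = max(image.var() - noise_var, 0.0)
    gain = signal_var / (signal_var + noise_var)  # Wiener-style factor
    return mean + gain * (image - mean)
```

A real implementation would train a denoising network on (noisy, clean) pairs captured at each ISO, as described in the training steps above.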
  • the partial frames of original images are at least two frames of first images with the same exposure amount, and the other frames of original images are at least one frame of second image with an exposure amount lower than that of the first images;
  • the synthesis module 630 is specifically configured to:
  • synthesize a high dynamic range image according to the noise-reduced image and the at least one frame of second image.
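A minimal sketch of such a fusion, assuming a simple highlight-weighted blend between the noise-reduced (normal-exposure) image and one brightness-normalized second image; the weighting formula and the 8-bit value scale are assumptions, since the application does not specify the synthesis formula:

```python
import numpy as np

# Hypothetical HDR fusion sketch: trust the short-exposure (EV < 0) second
# image in highlights, where the normal exposure clips, and the denoised
# image everywhere else.

def synthesize_hdr(denoised, second, ev, max_val=255.0):
    # Highlight weight ramps from 0 to 1 above ~75% of full scale (assumed).
    w = np.clip((denoised / max_val - 0.75) * 4.0, 0.0, 1.0)
    # Bring the underexposed frame up to EV0 brightness (ev is negative).
    second_lin = second * (2.0 ** -ev)
    return (1.0 - w) * denoised + w * second_lin
```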
  • the obtaining module 610 is specifically configured to:
  • determine, according to the imaging quality of the preview image, the number n of image frames at the reference exposure amount, where n is a natural number greater than or equal to 2;
  • the imaging quality has a positive relationship with the number of image frames;
  • the imaging quality includes at least one of signal-to-noise ratio and imaging speed.
  • the obtaining module 610 is specifically configured to:
  • collect n frames of original images conforming to the reference exposure amount, and collect at least one frame of original image lower than the reference exposure amount.
  • At least one frame of the second image is specifically two frames of second images
  • the two frames of second images correspond to different exposure compensation levels, and the exposure compensation levels of the two frames of second images are less than EV0.
  • the exposure compensation level corresponding to at least one frame of the second image ranges from EV-5 to EV-1.
  • FIG. 7 is a schematic structural diagram of a second image processing apparatus based on multi-frame images provided by an embodiment of the application; on the basis of FIG. 6, the apparatus further includes:
  • the conversion module 640 is used to convert the high dynamic range image into a YUV image.
  • multiple frames of original images are obtained; artificial-intelligence-based noise reduction is performed on partial frames of the original images to obtain a noise-reduced image, the partial frames being at least two frames of the multiple frames of original images; and a high dynamic range image is synthesized according to the noise-reduced image and the other frames of original images, which makes it possible to distinguish more accurately between picture noise and effective details in the high dynamic range image.
  • to a certain extent, this application helps to reduce the number of original image frames to be collected; for each frame of original image, it helps to increase the sensitivity during collection so as to reduce the shooting duration, so that the total time required for the overall shooting process is shortened, avoiding picture blur caused by an excessively long shooting duration, which is conducive to clearly shooting dynamic night scenes.
  • artificial-intelligence-based noise reduction is performed on partial frames of the original images to obtain the noise-reduced image, and high-dynamic synthesis is performed on the noise-reduced image and the other frames of original images to obtain a high dynamic range image, which can reduce the amount of calculation for image noise reduction while guaranteeing the effectiveness of noise reduction, so that imaging efficiency is improved while a better imaging effect is obtained.
  • FIG. 8 is a schematic structural diagram of an electronic device provided by an embodiment of the application. The electronic device includes: an image sensor 210, a processor 220, a memory 230, and a computer program stored in the memory 230 and executable on the processor 220.
  • the image sensor 210 is electrically connected to the processor 220.
  • when the processor 220 executes the program, the image processing method based on multi-frame images as in the above embodiments is implemented.
  • the processor 220 may include: an image signal processing (ISP) processor.
  • the ISP processor is used to control the image sensor to obtain multiple frames of original images.
  • the processor 220 may further include: a graphics processing unit (Graphics Processing Unit, GPU for short) connected to the ISP processor.
  • the GPU is used to perform artificial-intelligence-based noise reduction on partial frames of the original images to obtain a noise-reduced image,
  • where the partial frames of original images are at least two frames of the multiple frames of original images.
  • the GPU is also used to encode the target noise-reduced image.
  • the ISP processor is also used to synthesize a high dynamic range image according to the noise-reduced image and the other frames of original images, where the partial frames of original images and the other frames of original images together constitute the multiple frames of original images.
  • FIG. 9 is a schematic diagram of an example of an electronic device provided by an embodiment of the application.
  • the memory 230 of the electronic device 200 includes a non-volatile memory 80 and an internal memory 82.
  • computer-readable instructions are stored in the memory 230.
  • when the computer-readable instructions are executed by the processor 220, the processor 220 is caused to execute the image processing method based on multi-frame images in any of the foregoing embodiments.
  • the electronic device 200 includes a processor 220, a non-volatile memory 80, an internal memory 82, a display screen 83 and an input device 84 connected through a system bus 81.
  • the non-volatile memory 80 of the electronic device 200 stores an operating system and computer readable instructions.
  • the computer-readable instructions may be executed by the processor 220 to implement the image processing method based on multi-frame images in the embodiments of the present application.
  • the processor 220 is used to provide calculation and control capabilities, and support the operation of the entire electronic device 200.
  • the internal memory 82 of the electronic device 200 provides an environment for the operation of computer readable instructions in the non-volatile memory 80.
  • the display screen 83 of the electronic device 200 may be a liquid crystal display screen or an electronic ink display screen, etc.
  • the input device 84 may be a touch layer covering the display screen 83, may be a button, trackball, or touchpad provided on the housing of the electronic device 200,
  • or may be an external keyboard, touchpad, or mouse.
  • the electronic device 200 may be a mobile phone, a tablet computer, a notebook computer, a personal digital assistant, or a wearable device (such as a smart bracelet, a smart watch, a smart helmet, or smart glasses). Those skilled in the art can understand that the structure shown in FIG. 9 is only schematic;
  • the specific electronic device 200 may include more or fewer components than shown in the figure, or combine certain components, or have a different component arrangement.
  • FIG. 10 is a schematic diagram of the principle of an image processing circuit provided by an embodiment of this application.
  • the image processing circuit 90 includes an image signal processing (ISP) processor 91 (the ISP processor 91 serves as the processor 220) and a graphics processor (GPU).
  • the ISP processor is electrically connected to the image sensor and is used to control the image sensor to obtain multiple frames of original images
  • the GPU is electrically connected to the ISP processor, and is used to perform artificial-intelligence-based noise reduction on partial frames of the original images to obtain a noise-reduced image, where the partial frames of original images are at least two frames of the multiple frames of original images.
  • the ISP processor is also used to synthesize a high dynamic range image according to the noise-reduced image and the other frames of original images, where the partial frames of original images and the other frames of original images together constitute the multiple frames of original images.
  • the image data captured by the camera 93 is first processed by the ISP processor 91, and the ISP processor 91 analyzes the image data to capture image statistical information that can be used to determine one or more control parameters of the camera 93.
  • the camera 93 may include one or more lenses 932 and an image sensor 934.
  • the image sensor 934 may include a color filter array (such as a Bayer filter), and the image sensor 934 may obtain the light intensity and wavelength information captured by each imaging pixel, and provide a set of raw image data that can be processed by the ISP processor 91.
  • the sensor 94 (such as a gyroscope) can provide the collected image processing parameters (such as anti-shake parameters) to the ISP processor 91 based on the interface type of the sensor 94.
  • the sensor 94 interface may be an SMIA (Standard Mobile Imaging Architecture) interface, another serial or parallel camera interface, or a combination of the foregoing interfaces.
  • the image sensor 934 may also send raw image data to the sensor 94, and the sensor 94 may provide the raw image data to the ISP processor 91 based on the interface type of the sensor 94, or the sensor 94 may store the raw image data in the image memory 95.
  • the ISP processor 91 processes the original image data pixel by pixel in various formats.
  • each image pixel may have a bit depth of 8, 10, 12, or 14 bits, and the ISP processor 91 may perform one or more image processing operations on the original image data and collect statistical information about the image data. Among them, the image processing operations can be performed with the same or different bit depth accuracy.
  • the ISP processor 91 may also receive image data from the image memory 95.
  • the sensor 94 interface sends the original image data to the image memory 95, and the original image data in the image memory 95 is then provided to the ISP processor 91 for processing.
  • the image memory 95 may be the memory 330, a part of the memory 330, a storage device, or an independent dedicated memory in the electronic device, and may include DMA (Direct Memory Access) features.
  • the ISP processor 91 may perform one or more image processing operations, such as temporal filtering.
  • the processed image data can be sent to the image memory 95 for additional processing before being displayed.
  • the ISP processor 91 receives the processed data from the image memory 95, and performs image data processing in the original domain and in the RGB and YCbCr color spaces on the processed data.
  • the image data processed by the ISP processor 91 may be output to a display 97 (the display 97 may include a display screen 83) for viewing by the user and/or further processing by a graphics engine or GPU.
  • the output of the ISP processor 91 can also be sent to the image memory 95, and the display 97 can read image data from the image memory 95.
  • the image memory 95 may be configured to implement one or more frame buffers.
  • the output of the ISP processor 91 may be sent to the encoder/decoder 96 in order to encode/decode image data.
  • the encoded image data can be saved and decompressed before being displayed on the display 97 device.
  • the encoder/decoder 96 may be implemented by a CPU or GPU or a coprocessor.
  • the statistical data determined by the ISP processor 91 may be sent to the control logic 92 unit.
  • the statistical data may include image sensor 934 statistical information such as automatic exposure, automatic white balance, automatic focus, flicker detection, black level compensation, and lens 932 shading correction.
  • the control logic 92 may include a processor and/or a microcontroller that executes one or more routines (such as firmware), and the one or more routines can determine the control parameters of the camera 93 and the control parameters of the ISP processor 91 based on the received statistical data.
  • the control parameters of the camera 93 may include sensor 94 control parameters (such as gain, integration time for exposure control, and anti-shake parameters), camera flash control parameters, lens 932 control parameters (such as focus or zoom focal length), or a combination of these parameters.
  • the ISP control parameters may include gain levels and color correction matrices for automatic white balance and color adjustment (for example, during RGB processing), and lens 932 shading correction parameters.
  • the ISP processor controls the image sensor to obtain multiple frames of original images; the GPU performs artificial-intelligence-based noise reduction on partial frames of the original images to obtain a noise-reduced image.
  • the partial frames of original images are at least two frames of the multiple frames of original images.
  • the ISP processor is also used to synthesize a high dynamic range image according to the noise-reduced image and the other frames of original images, where the partial frames of original images and the other frames of original images together constitute the multiple frames of original images.
  • the embodiments of the present application also provide a storage medium.
  • when the instructions stored on the storage medium are executed by a processor, the processor is caused to perform the following steps: obtain multiple frames of original images; perform artificial-intelligence-based noise reduction on partial frames of the original images to obtain a noise-reduced image, where the partial frames of original images are at least two frames of the multiple frames of original images; and synthesize a high dynamic range image according to the noise-reduced image and the other frames of original images, where the partial frames of original images and the other frames of original images together constitute the multiple frames of original images.
  • the program can be stored in a non-volatile computer readable storage medium.
  • the storage medium may be a magnetic disk, an optical disc, a read-only memory (Read-Only Memory, ROM), etc.


Abstract

This application provides an image processing method and apparatus based on multi-frame images, and an electronic device. The method includes: obtaining multiple frames of original images; performing artificial-intelligence-based noise reduction on partial frames of the original images to obtain a noise-reduced image, where the partial frames of original images are at least two frames of the multiple frames of original images; and synthesizing a high dynamic range image according to the noise-reduced image and the other frames of original images, where the partial frames of original images and the other frames of original images together constitute the multiple frames of original images. This application can more accurately distinguish picture noise from effective details in the high dynamic range image, helps to reduce the number of original image frames to be collected, shortens the total time required for the overall shooting process, avoids picture blur caused by an excessively long shooting duration, and is conducive to clearly shooting dynamic night scenes.

Description

Image processing method and apparatus based on multi-frame images, and electronic device
Cross-reference to related applications
This application claims priority to the Chinese patent application No. 201910279858.9, titled "Image processing method and apparatus based on multi-frame images, and electronic device", filed with the Chinese Patent Office by OPPO广东移动通信有限公司 on April 9, 2019, the entire contents of which are incorporated herein by reference.
Technical field
This application relates to the field of imaging technologies, and in particular to an image processing method and apparatus based on multi-frame images, and an electronic device.
Background
With the development of smart terminal technologies, mobile terminal devices (such as smartphones and tablet computers) have become increasingly popular. Most mobile terminal devices have built-in cameras, and with the enhancement of the processing capabilities of mobile terminals and the development of camera technologies, the performance of built-in cameras has become increasingly powerful and the quality of captured images increasingly high. Mobile terminal devices are simple to operate and easy to carry, and in daily life more and more users take photos with smartphones, tablet computers, and other mobile terminal devices.
While smart mobile terminals bring convenience to people's daily photography, people's requirements for the quality of captured images are also becoming higher; in particular, in the special scenario of night scenes, image quality is relatively low.
At present, multiple frames of original images are usually acquired for high-dynamic synthesis, but noise is introduced in the process of acquiring the multiple frames of original images, so that the finally synthesized image is not clear. Therefore, reducing image noise while retaining image details to the greatest extent is a problem to be solved urgently.
Summary
This application aims to solve, at least to a certain extent, one of the technical problems in the related art.
To this end, an object of this application is to provide an image processing method and apparatus based on multi-frame images, and an electronic device, which can more accurately distinguish picture noise from effective details in a high dynamic range image, help to reduce the number of original image frames to be collected, shorten the total time required for the overall shooting process, avoid picture blur caused by an excessively long shooting duration, and facilitate clear shooting of dynamic night scenes.
To achieve the above object, an embodiment of the first aspect of this application provides an image processing method based on multi-frame images, including: obtaining multiple frames of original images; performing artificial-intelligence-based noise reduction on partial frames of the original images to obtain a noise-reduced image, where the partial frames of original images are at least two frames of the multiple frames of original images; and synthesizing a high dynamic range image according to the noise-reduced image and the other frames of original images, where the partial frames of original images and the other frames of original images together constitute the multiple frames of original images.
According to the image processing method based on multi-frame images provided by the embodiment of the first aspect of this application, multiple frames of original images are obtained; artificial-intelligence-based noise reduction is performed on partial frames of the original images to obtain a noise-reduced image, the partial frames being at least two frames of the multiple frames of original images; and a high dynamic range image is synthesized according to the noise-reduced image and the other frames of original images, the partial frames and the other frames together constituting the multiple frames of original images. In this way, picture noise and effective details of the high dynamic range image can be distinguished more accurately, which helps to reduce the number of original image frames to be collected, shortens the total time required for the overall shooting process, avoids picture blur caused by an excessively long shooting duration, and facilitates clear shooting of dynamic night scenes.
To achieve the above object, an embodiment of the second aspect of this application provides an image processing apparatus based on multi-frame images, including: an obtaining module, configured to obtain multiple frames of original images; a noise reduction module, configured to perform artificial-intelligence-based noise reduction on partial frames of the original images to obtain a noise-reduced image, where the partial frames of original images are at least two frames of the multiple frames of original images; and a synthesis module, configured to synthesize a high dynamic range image according to the noise-reduced image and the other frames of original images, where the partial frames of original images and the other frames of original images together constitute the multiple frames of original images.
According to the image processing apparatus based on multi-frame images provided by the embodiment of the second aspect of this application, multiple frames of original images are obtained; artificial-intelligence-based noise reduction is performed on partial frames of the original images to obtain a noise-reduced image, the partial frames being at least two frames of the multiple frames of original images; and a high dynamic range image is synthesized according to the noise-reduced image and the other frames of original images, the partial frames and the other frames together constituting the multiple frames of original images. In this way, picture noise and effective details of the high dynamic range image can be distinguished more accurately, which helps to reduce the number of original image frames to be collected, shortens the total time required for the overall shooting process, avoids picture blur caused by an excessively long shooting duration, and facilitates clear shooting of dynamic night scenes.
To achieve the above object, an embodiment of the third aspect of this application provides an electronic device, including: an image sensor, a memory, a processor, and a computer program stored on the memory and executable on the processor, where the image sensor is electrically connected to the processor, and when the processor executes the program, the image processing method based on multi-frame images provided by the embodiment of the first aspect of this application is implemented.
According to the electronic device provided by the embodiment of the third aspect of this application, multiple frames of original images are obtained; artificial-intelligence-based noise reduction is performed on partial frames of the original images to obtain a noise-reduced image, the partial frames being at least two frames of the multiple frames of original images; and a high dynamic range image is synthesized according to the noise-reduced image and the other frames of original images, the partial frames and the other frames together constituting the multiple frames of original images. In this way, picture noise and effective details of the high dynamic range image can be distinguished more accurately, which helps to reduce the number of original image frames to be collected, shortens the total time required for the overall shooting process, avoids picture blur caused by an excessively long shooting duration, and facilitates clear shooting of dynamic night scenes.
To achieve the above object, an embodiment of the fourth aspect of this application provides an image processing circuit, including: an image signal processing (ISP) processor and a graphics processor (GPU); the ISP processor is electrically connected to an image sensor and is configured to control the image sensor to obtain multiple frames of original images; the GPU is electrically connected to the ISP processor and is configured to perform artificial-intelligence-based noise reduction on partial frames of the original images to obtain a noise-reduced image, where the partial frames of original images are at least two frames of the multiple frames of original images; and the ISP processor is further configured to synthesize a high dynamic range image according to the noise-reduced image and the other frames of original images, where the partial frames of original images and the other frames of original images together constitute the multiple frames of original images.
According to the image processing circuit provided by the embodiment of the fourth aspect of this application, multiple frames of original images are obtained; artificial-intelligence-based noise reduction is performed on partial frames of the original images to obtain a noise-reduced image, the partial frames being at least two frames of the multiple frames of original images; and a high dynamic range image is synthesized according to the noise-reduced image and the other frames of original images, the partial frames and the other frames together constituting the multiple frames of original images. In this way, picture noise and effective details of the high dynamic range image can be distinguished more accurately, which helps to reduce the number of original image frames to be collected, shortens the total time required for the overall shooting process, avoids picture blur caused by an excessively long shooting duration, and facilitates clear shooting of dynamic night scenes.
To achieve the above object, an embodiment of the fifth aspect of this application provides a computer-readable storage medium on which a computer program is stored, where the program, when executed by a processor, implements the image processing method based on multi-frame images provided by the embodiment of the first aspect of this application.
According to the computer-readable storage medium provided by the embodiment of the fifth aspect of this application, multiple frames of original images are obtained; artificial-intelligence-based noise reduction is performed on partial frames of the original images to obtain a noise-reduced image, the partial frames being at least two frames of the multiple frames of original images; and a high dynamic range image is synthesized according to the noise-reduced image and the other frames of original images, the partial frames and the other frames together constituting the multiple frames of original images. In this way, picture noise and effective details of the high dynamic range image can be distinguished more accurately, which helps to reduce the number of original image frames to be collected, shortens the total time required for the overall shooting process, avoids picture blur caused by an excessively long shooting duration, and facilitates clear shooting of dynamic night scenes.
Additional aspects and advantages of this application will be given in part in the following description, and in part will become apparent from the following description or be learned through practice of this application.
Brief description of the drawings
The above and/or additional aspects and advantages of this application will become apparent and easy to understand from the following description of the embodiments in conjunction with the accompanying drawings, in which:
FIG. 1 is a schematic flowchart of a first image processing method based on multi-frame images provided by an embodiment of this application;
FIG. 2 is a schematic flowchart of a second image processing method based on multi-frame images provided by an embodiment of this application;
FIG. 3 is a schematic flowchart of a third image processing method based on multi-frame images provided by an embodiment of this application;
FIG. 4 is a schematic flowchart of a fourth image processing method based on multi-frame images provided by an embodiment of this application;
FIG. 5 is a schematic diagram of an application flow in this application;
FIG. 6 is a schematic structural diagram of a first image processing apparatus based on multi-frame images provided by an embodiment of this application;
FIG. 7 is a schematic structural diagram of a second image processing apparatus based on multi-frame images provided by an embodiment of this application;
FIG. 8 is a schematic structural diagram of an electronic device provided by an embodiment of this application;
FIG. 9 is a schematic diagram of the principle of an electronic device provided by an embodiment of this application;
FIG. 10 is a schematic diagram of the principle of an image processing circuit provided by an embodiment of this application.
Detailed description
Embodiments of this application are described in detail below, and examples of the embodiments are shown in the accompanying drawings, in which the same or similar reference numerals throughout denote the same or similar elements or elements having the same or similar functions. The embodiments described below with reference to the accompanying drawings are exemplary and are only used to explain this application, and should not be construed as limiting this application. On the contrary, the embodiments of this application include all changes, modifications, and equivalents falling within the spirit and scope of the appended claims.
In view of the problems in the related art that, during high-dynamic synthesis, a large number of frames are shot and frame collection takes a long time, so that the captured images may have smearing due to jitter, or noise may be introduced during shooting, resulting in blurred pictures, this application proposes an image processing method based on multi-frame images: multiple frames of original images are obtained; artificial-intelligence-based noise reduction is performed on partial frames of the original images to obtain a noise-reduced image, the partial frames being at least two frames of the multiple frames of original images; and a high dynamic range image is synthesized according to the noise-reduced image and the other frames of original images, the partial frames and the other frames together constituting the multiple frames of original images.
The image processing method and apparatus based on multi-frame images according to the embodiments of this application are described below with reference to the accompanying drawings.
FIG. 1 is a schematic flowchart of a first image processing method based on multi-frame images provided by an embodiment of this application.
The image processing method based on multi-frame images of the embodiments of this application is applied to an electronic device, which may be a hardware device having various operating systems and imaging devices, such as a mobile phone, a tablet computer, a personal digital assistant, or a wearable device.
As shown in FIG. 1, the image processing method based on multi-frame images includes the following steps:
Step 101: obtain multiple frames of original images.
The original images may be, for example, unprocessed RAW-format images collected by an image sensor of the electronic device, which is not limited here.
A RAW-format image is an original image in which the image sensor converts the captured light signal into a digital signal. A RAW-format image records the original information of the digital camera sensor, as well as some metadata generated by the shooting, such as the sensitivity setting, shutter speed, aperture value, and white balance.
A preview image of the current shooting scene may be obtained to determine whether the current shooting scene is a night scene. Since the ambient brightness values differ in different scenes and the preview image contents also differ, it can be determined, according to the picture content of the preview image of the current shooting scene and the ambient brightness values of each area, whether the current shooting scene is a night scene; after it is determined that the current shooting scene is a night scene, the night-scene shooting mode is started, and multiple frames of original images are obtained under different exposure amounts.
For example, if the picture content of the preview image includes a night sky or a night-scene light source, or the ambient brightness values in each area of the preview image conform to the brightness distribution characteristics of images in a night-scene environment, it can be determined that the current shooting scene is a night scene.
Due to the limitation of environmental factors such as light intensity in the shooting scene during night-scene shooting, if the electronic device collects only a single frame of original image, it cannot well take into account both the highlight areas such as lights in the night scene and the low-brightness areas in the night scene.
Therefore, the electronic device can shoot multiple frames of original images for image synthesis, and can also select images with clear pictures for synthesis and imaging.
In order to take into account both the highlight areas such as lights in the night scene and the low-brightness areas in the night scene, the image sensor of the electronic device can be controlled to shoot multiple frames of original images under different exposure amounts, for example, shooting with a low exposure amount to clearly image the highlight areas, and shooting with a high exposure amount to clearly image the low-brightness areas.
Step 102: perform artificial-intelligence-based noise reduction on partial frames of the original images to obtain a noise-reduced image, where the partial frames of original images are at least two frames of the multiple frames of original images.
In the embodiments of this application, the partial frames of original images are partial frames of the multiple frames of original images collected in the above step; more specifically, the partial frames of original images may be at least two frames of first images with the same exposure amount, and correspondingly, the other frames of original images may be at least one frame of second image with an exposure amount lower than that of the first images.
Since the image sensor in the electronic device is subject to varying degrees of photoelectromagnetic interference from surrounding circuits and between its own pixels during shooting, the captured original images inevitably contain noise, and the clarity of the captured images differs with the degree of interference. Therefore, the collected multiple frames of original images also inevitably contain noise, and artificial-intelligence-based noise reduction can further be performed on the partial frames of original images to obtain the noise-reduced image.
For example, in a night-scene shooting scenario, an image is usually captured with a larger aperture and a longer exposure time; if a higher sensitivity is selected to reduce the exposure time, the captured image will inevitably contain noise.
In the embodiments of this application, multi-frame fusion noise reduction may first be performed on the partial frames of original images to obtain a first noise-reduced image.
For example, image alignment is performed on the partial frames of original images and they are synthesized into one multi-frame fused image (which may be called the first noise-reduced image), which is equivalent to temporal noise reduction and preliminarily improves the signal-to-noise ratio of the picture.
Then, artificial-intelligence-based noise reduction is performed on the first noise-reduced image to obtain the noise-reduced image, so that noise reduction can be performed on both the highlight areas and the dark areas of the first noise-reduced image, and a noise-reduced image with a better noise reduction effect can be obtained.
It should be noted that the noise-reduced image obtained by artificial-intelligence-based noise reduction of the first noise-reduced image is an unprocessed RAW image.
In the embodiments of this application, when performing artificial-intelligence-based noise reduction on the first noise-reduced image, a neural network model may be used to identify the noise characteristics of the first noise-reduced image, where the neural network model has learned the mapping relationship between the sensitivity of the first noise-reduced image and the noise characteristics; the first noise-reduced image is then denoised according to the identified noise characteristics to obtain the noise-reduced image.
As a possible implementation, since the neural network model has learned the mapping relationship between the sensitivity of the first noise-reduced image and the noise characteristics, the first noise-reduced image can be input into the neural network model so that the neural network model identifies the noise characteristics of the first noise-reduced image; according to the identified noise characteristics, the first noise-reduced image is denoised to obtain the noise-reduced image, thereby achieving the purpose of noise reduction and improving the signal-to-noise ratio of the image.
Of course, the neural network model is only one possible implementation of artificial-intelligence-based noise reduction. In actual execution, artificial-intelligence-based noise reduction can be implemented in any other possible manner, for example, by traditional programming techniques (such as simulation methods and engineering methods), or by genetic algorithms and artificial neural networks.
Sensitivity, also known as the ISO value, is an index measuring the sensitivity of a film to light. For a film with a lower sensitivity, a longer exposure time is needed to achieve the same imaging as a film with a higher sensitivity. The sensitivity of a digital camera is an index similar to film sensitivity; the ISO of a digital camera can be adjusted by adjusting the sensitivity of the photosensitive device or by merging photosensitive points, that is, the ISO can be raised by increasing the light sensitivity of the photosensitive device or by merging several adjacent photosensitive points.
It should be noted that, whether for digital or film photography, the lower the ISO value, the higher the quality of the collected image and the finer the image details; the higher the ISO value, the stronger the light-sensing performance, the more light can be received, and the more heat is generated. Therefore, using a relatively high sensitivity usually introduces more noise, resulting in reduced image quality.
In the embodiments of this application, the noise characteristics may be statistical characteristics of the random noise caused by the image sensor. The noise mentioned here mainly includes thermal noise and shot noise, where thermal noise conforms to a Gaussian distribution and shot noise conforms to a Poisson distribution. The statistical characteristics in the embodiments of this application may refer to the variance value of the noise, and of course may also be values of other possible cases, which is not limited here.
Step 103: synthesize a high dynamic range image according to the noise-reduced image and the other frames of original images, where the partial frames of original images and the other frames of original images together constitute the multiple frames of original images.
The partial frames of original images in the embodiments of this application may be at least two frames of first images with the same exposure amount, and correspondingly, the other frames of original images may be at least one frame of second image with an exposure amount lower than that of the first images.
Therefore, in the embodiments of this application, the high dynamic range image can be synthesized according to the noise-reduced image and the at least one frame of second image.
A high dynamic range (HDR) image can provide more dynamic range and image details than an ordinary image.
In the embodiments of this application, the low dynamic range (LDR) images with the best details corresponding to different exposure time points can be determined from the noise-reduced image and each frame of second image, and the high dynamic range image is then synthesized according to the LDR images with the best details, which can better reflect the visual effects in a real environment.
It should be noted that, since the noise-reduced image and the other frames of original images are obtained by shooting under different exposure conditions and by noise reduction processing, the noise-reduced image and the other frames of original images contain picture information of different brightness. For the same scene, the noise-reduced image and the other frames of original images may be overexposed, underexposed, or properly exposed. After high-dynamic synthesis of the noise-reduced image and the other frames of original images, each scene in the high dynamic range image can be properly exposed as far as possible, and the result is closer to the actual scene.
In the specific execution of the embodiments of this application, after the high dynamic range image is synthesized according to the noise-reduced image and the other frames of original images, the method may further include: converting the high dynamic range image into a YUV image.
Optionally, the image format that the display of the electronic device can process is the YUV format.
The luminance signal of an image is called Y, and the chrominance signal is composed of two mutually independent signals; depending on the color system and format, the two chrominance signals are often called U and V. In this case, after the RAW-format high dynamic range image is obtained, the format of the high dynamic range image can be converted by an image signal processor (Image Signal Processing, ISP), converting the RAW-format high dynamic range image into a YUV-format image. Since the display interface size of the display is limited, in order to achieve a better preview effect, the converted YUV-format image can be compressed to the preview size for preview display.
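The RGB-to-YUV step mentioned above can be illustrated with the BT.601 full-range matrix; the actual ISP conversion, and the demosaicing that precedes it, are hardware-specific and not specified here:

```python
import numpy as np

# Sketch of an RGB -> YUV conversion using BT.601 full-range coefficients.
# The ISP hardware path described in the text is assumed to perform an
# equivalent (but hardware-optimized) transform.

def rgb_to_yuv(rgb):
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y = 0.299 * r + 0.587 * g + 0.114 * b        # luminance (Y)
    u = -0.169 * r - 0.331 * g + 0.5 * b         # chrominance (U)
    v = 0.5 * r - 0.419 * g - 0.081 * b          # chrominance (V)
    return np.stack([y, u, v], axis=-1)
```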
In this embodiment, multiple frames of original images are obtained; artificial-intelligence-based noise reduction is performed on partial frames of the original images to obtain a noise-reduced image, the partial frames being at least two frames of the multiple frames of original images; and a high dynamic range image is synthesized according to the noise-reduced image and the other frames of original images, so that picture noise and effective details of the high dynamic range image can be distinguished more accurately. Compared with performing no artificial-intelligence noise reduction, this application can, to a certain extent, help to reduce the number of original image frames to be collected; for each frame of original image, it helps to increase the sensitivity during collection so as to reduce the shooting duration, so that the total time required for the overall shooting process is shortened, avoiding picture blur caused by an excessively long shooting duration, which is conducive to clearly shooting dynamic night scenes. In addition, in this application, artificial-intelligence-based noise reduction is performed on partial frames of the original images to obtain the noise-reduced image, and high-dynamic synthesis is performed on the noise-reduced image and the other frames of original images to obtain the high dynamic range image, so that the amount of calculation for image noise reduction can be reduced while the effectiveness of noise reduction is guaranteed, and imaging efficiency is improved while a clearer imaging effect is obtained.
In order to obtain a better artificial-intelligence noise reduction effect, a neural network model can be selected for noise reduction, and sample images of various sensitivities can be used to train the neural network model to improve its ability to identify noise characteristics. For the specific training process, see FIG. 2, which is a schematic flowchart of a second image processing method based on multi-frame images provided by an embodiment of this application; it specifically includes the following steps:
Step 201: obtain sample images of various sensitivities.
The noise characteristics of the images have been annotated in the sample images.
In the embodiments of this application, the sample images may be images captured with different sensitivities set under different ambient brightness levels.
That is, there should be multiple ambient brightness levels, and under each ambient brightness level, multiple frames of images are shot at different sensitivities as sample images.
In order to obtain more accurate noise characteristic identification results, the ambient brightness and ISO can be further subdivided in the embodiments of this application, and the number of frames of sample images can be increased, so that after the first noise-reduced image is input into the neural network model, the neural network can accurately identify the statistical characteristics of the first noise-reduced image.
Step 202: train the neural network model with the sample images of various sensitivities.
In the embodiments of this application, after the sample images of various sensitivities captured under different ambient light brightness levels are obtained, the sample images are used to train the neural network model. The statistical characteristics annotated in the sample images are taken as the characteristics for model training, and the sample images annotated with statistical characteristics are input into the neural network model to train the neural network model, so that the statistical characteristics of images can be identified.
Of course, the neural network model is only one possible implementation of artificial-intelligence-based noise reduction. In actual execution, artificial-intelligence-based noise reduction can be implemented in any other possible manner, for example, by traditional programming techniques (such as simulation methods and engineering methods), or by genetic algorithms and artificial neural networks.
It should be noted that the neural network model is trained with statistical characteristics annotated in the sample images because the annotated sample images can clearly indicate the noise positions and noise types of the images; thus, with the annotated statistical characteristics as the characteristics for model training, after the first noise-reduced image is input into the neural network model, the statistical characteristics in the image can be identified.
Step 203: the training of the neural network model is completed when the noise characteristics identified by the neural network model match the noise characteristics annotated in the corresponding sample images.
In the embodiments of this application, the neural network model is trained with the sample images of various sensitivities until the noise characteristics identified by the neural network model match the statistical characteristics annotated in the corresponding sample images.
In the embodiments of this application, sample images of various sensitivities are obtained, and the neural network model is trained with them until the statistical characteristics identified by the neural network model match the statistical characteristics annotated in the corresponding sample images, at which point training is complete. Since the neural network model is trained with sample images annotated with statistical characteristics at various sensitivities, after the first noise-reduced image is input into the neural network model, the statistical characteristics of the image can be accurately identified, so as to perform noise reduction on the image and thereby improve the shooting quality of images.
在图1实施例的基础上,作为一种可能的实现方式,在步骤101中获取多帧原始图像时,可以根据预览图像的成像质量,确定基准曝光量的图像帧数n,以采集符合基准曝光量的n帧原始图像,并采集低于基准曝光量的至少一帧原始图像。下面结合图3对上述过程进行详细介绍,如图3所示,图3为本申请实施例所提供的第三种基于多帧图像的图像处理方法的流程示意图,步骤101还可以包括:
步骤301,根据预览图像的成像质量,确定基准曝光量的图像帧数n。
其中的预览图像是预先获取得到的,例如,可以是开启摄像头拍摄得到的预览图像,或者,也可以是从存储器中读取的,对此不作限制。
其中,n为大于或等于2的自然数。
需要说明的是,采集的图像帧数较多时,整个拍摄时长会过长,在拍摄过程中可能会引入较多的噪声,因此本申请实施例中,图像帧数n的取值可以为3或4,以降低拍摄时长,获得较高质量的图像。
本申请实施例中,预览图像的成像质量可以例如采用信噪比和/或成像速度进行衡量,并且成像质量一般与采集图像帧数呈正向关系,即,成像质量越好,则可以采集越多帧的图像。
本申请实施例在具体执行的过程中,若基于脚架模式拍摄预览图像,考虑到画面较稳定,则可采集较多帧数的预览图像进行后续合成;而基于手持模式拍摄预览图像时,由于不可避免的人手抖动所造成的画面抖动,为了避免高动态范围图像模糊,可以采集较少帧的预览图像进行后续的合成。
可以理解的是,采集的原始图像帧数越多,包含的不同画面信息越多,高动态合成时得到的合成图像中包含的画面信息也更多,与实际场景更加相近,因此成像质量与采集图像帧数为正向关系,进而可以根据预览图像的成像质量,确定基准曝光量的图像帧数n。
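根据预览图像成像质量确定帧数n的策略,可以用如下草图示意;其中以信噪比(dB)衡量成像质量,各阈值、n的取值(2至4)以及脚架模式的处理均为示例假设:

```python
def choose_frame_count(snr_db, tripod_mode=False):
    """根据预览图像信噪比确定基准曝光量的图像帧数n(n为不小于2的自然数)。

    成像质量与帧数呈正向关系:信噪比越高,可采集的帧数越多;
    脚架模式画面稳定,直接取较多帧。阈值均为示例假设值。
    """
    if tripod_mode:
        return 4
    if snr_db >= 30:
        return 4
    if snr_db >= 20:
        return 3
    return 2
```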
步骤302,采集符合基准曝光量的n帧原始图像。
本申请实施例中,根据预览图像的成像质量,确定基准曝光量的图像帧数n后,进一步采集符合基准曝光量的n帧原始图像。
在一种可能的场景下,可以基于拍摄场景的光照度确定的基准曝光量和设定的基准感光度,确定各帧待采集原始图像的基准曝光时长,以获得不同动态范围的图像,使得合成后的图像具有更高的动态范围,提高图像的整体亮度和质量。
下面结合图4对上述过程进行详细介绍,图4为本申请实施例提供的第四种基于多帧图像的图像处理方法的流程示意图,如图4所示,步骤302还可以包括如下子步骤:
子步骤3021,根据拍摄场景的光照度,确定基准曝光量。
其中,曝光量,是指电子设备中的感光器件在曝光时长内接受到光的多少,曝光量与光圈、曝光时长和感光度有关。其中,光圈也就是通光口径,决定单位时间内光线通过的数量;曝光时长,是指光线通过镜头的时间;感光度,又称为ISO值,是衡量底片对于光的灵敏程度的指标,用于表示感光元件的感光速度,ISO数值越高就说明该感光元器件的感光能力越强。
其中,曝光量与曝光时长、感光度和光圈相关,例如,可以是曝光时长和感光度的乘积。相关技术中的基准曝光量,定义为曝光补偿等级为零,即EV0。
具体地,通过图像传感器获取当前拍摄场景的预览图像,进一步的通过感光器件测量得到预览图像各区域的环境光亮度,进而根据预览图像的亮度信息,确定基准曝光量。其中,在光圈固定的情况下,基准曝光量具体可以包括基准曝光时长和基准感光度。
本申请实施例中,基准曝光量,是指通过对预览图像进行测光获取的当前拍摄场景的亮度信息后,确定的与当前环境的亮度信息相适应的曝光量,基准曝光量的取值可以是基准感光度与基准曝光时长之间的乘积。
子步骤3022,根据预览图像的画面抖动程度,或者根据采集预览图像的图像传感器的抖动程度,设定基准感光度。
本申请实施例中,基准感光度,可以是根据预览图像的画面抖动程度,设定与当前的抖动程度相适应的感光度;也可以是根据采集预览图像的图像传感器当前的抖动程度,设定与当前的抖动程度相适应的感光度,在此不做限定。其中,基准感光度的取值范围可以为ISO100至ISO200。
可以理解的是,采集图像的感光度会影响到整体的拍摄时长,拍摄时长过长,可能会导致手持拍摄时图像传感器的抖动程度加剧,从而影响图像质量。因此,可以根据预览图像的画面抖动程度,或者根据采集预览图像的图像传感器的抖动程度,确定采集预览图像对应的基准感光度,以使得拍摄时长控制在合适的范围内。
本申请实施例中,为了确定抖动程度,可以根据电子设备中设置的位移传感器,采集位移信息,进而,根据采集到的电子设备的位移信息,确定预览图像的画面抖动程度或者采集预览图像的图像传感器的抖动程度。
作为一种示例,可以通过获取电子设备当前的陀螺仪(Gyro-sensor)信息,确定电子设备当前的抖动程度,即采集预览图像的图像传感器的抖动程度。
其中,陀螺仪又叫角速度传感器,可以测量物体偏转、倾斜时的转动角速度。在电子设备中,陀螺仪可以很好地测量转动、偏转的动作,从而可以精确分析判断出使用者的实际动作。电子设备的陀螺仪信息(gyro信息)可以包括电子设备在三维空间中三个维度方向上的运动信息,三维空间的三个维度可以分别表示为X轴、Y轴、Z轴三个方向,其中,X轴、Y轴、Z轴两两垂直。
需要说明的是,可以根据电子设备当前的gyro信息,确定采集预览图像的图像传感器的抖动程度。电子设备在三个方向上的gyro运动的绝对值越大,则采集预览图像的图像传感器的抖动程度越大。
具体的,可以预设在三个方向上gyro运动的绝对值阈值,并根据获取到的当前在三个方向上的gyro运动的绝对值之和,与预设的阈值的关系,确定采集预览图像的图像传感器的当前的抖动程度。
举例来说,假设预设的阈值为第一阈值A、第二阈值B、第三阈值C,且A<B<C,当前获取到的在三个方向上gyro运动的绝对值之和为S。若S<A,则确定采集预览图像的图像传感器的抖动程度为"无抖动";若A≤S<B,则可以确定抖动程度为"轻微抖动";若B≤S<C,则可以确定抖动程度为"小抖动";若S≥C,则可以确定抖动程度为"大抖动"。
需要说明的是,上述举例仅为示例性的,不能视为对本申请的限制。实际使用时,可以根据实际需要预设阈值的数量和各阈值的具体数值,以及根据gyro信息与各阈值的关系,预设gyro信息与采集预览图像的图像传感器抖动程度的映射关系。
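上述由gyro信息判定抖动程度的过程,可以用如下Python草图示意;阈值取值与区间划分均为示例假设,边界取左闭右开,实际映射关系可按需要预设:

```python
def shake_level(gx, gy, gz, thresholds=(0.2, 0.5, 1.0)):
    """由三个方向gyro运动的绝对值之和S判定抖动程度。

    thresholds为阈值(A, B, C),满足A<B<C,具体数值为示例假设。
    """
    s = abs(gx) + abs(gy) + abs(gz)
    a, b, c = thresholds
    if s < a:
        return "无抖动"
    if s < b:
        return "轻微抖动"
    if s < c:
        return "小抖动"
    return "大抖动"
```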
具体的,若采集预览图像的图像传感器的抖动程度较小,则可以将每帧待采集图像对应的基准感光度适当压缩为较小的值,以有效抑制每帧图像的噪声、提高拍摄图像的质量;若采集预览图像的图像传感器的抖动程度较大,则可以将每帧待采集图像对应的基准感光度适当提高为较大的值,以缩短拍摄时长。
举例来说,若确定采集预览图像的图像传感器的抖动程度为"无抖动",则可以将基准感光度确定为较小的值,以尽量获得更高质量的图像,比如确定基准感光度为100;若抖动程度为"轻微抖动",则可以适当增大基准感光度,以降低拍摄时长,比如确定基准感光度为120;若抖动程度为"小抖动",则可以进一步增大基准感光度,以降低拍摄时长,比如确定基准感光度为180;若抖动程度为"大抖动",则说明当前的抖动程度过大,可以再进一步增大基准感光度,以降低拍摄时长,比如确定基准感光度为200。
需要说明的是,上述举例仅为示例性的,不能视为对本申请的限制。实际使用时,当采集预览图像的图像传感器的抖动程度变化时,即可相应改变基准感光度,以获得最优的方案。其中,采集预览图像的图像传感器的抖动程度与每帧待采集图像对应的基准感光度的映射关系,可以根据实际需要预设。
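抖动程度到基准感光度的映射,可以沿用上文示例中的取值写成简单的查表;这些取值仅为示例,映射关系可按实际需要预设:

```python
def base_iso_for_shake(level):
    """抖动程度到基准感光度的映射(取值沿用上文示例:抖动越大,感光度越高)。"""
    table = {"无抖动": 100, "轻微抖动": 120, "小抖动": 180, "大抖动": 200}
    return table[level]
```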
本申请实施例中,预览图像的画面抖动程度与采集预览图像的图像传感器的抖动程度呈正相关关系,根据预览图像的画面抖动程度,设定基准感光度的实现过程参见上述过程,在此不再赘述。
子步骤3023,根据基准曝光量和设定的基准感光度,确定基准曝光时长。
本申请实施例中,基准曝光量,包括基准曝光时长和基准感光度,因此,在根据拍摄场景的光照度,确定基准曝光量,以及根据预览图像的画面抖动程度或者采集预览图像的图像传感器的抖动程度确定基准感光度后,即可根据基准曝光量及基准感光度,确定基准曝光时长。
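该子步骤的计算可以用如下草图示意,采用"曝光量 = 感光度 × 曝光时长"的乘积假设(光圈固定),单位与取值仅作说明之用:

```python
def base_exposure_time(base_exposure, base_iso):
    """由基准曝光量与设定的基准感光度反求基准曝光时长。

    基于"曝光量 = 感光度 × 曝光时长"的乘积假设;抖动越大、感光度设得越高,
    反求出的曝光时长越短,从而控制整体拍摄时长。
    """
    return base_exposure / base_iso
```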
子步骤3024,根据基准曝光时长和基准感光度,采集n帧原始图像。
本申请实施例中,在确定各帧待采集原始图像的基准曝光时长和基准感光度后,根据各帧待采集原始图像的基准曝光时长和基准感光度控制图像传感器进行图像采集,在此不再赘述。
步骤303,采集低于基准曝光量的至少一帧原始图像。
本申请实施例中,在采集低于基准曝光量的至少一帧原始图像时,可以根据设定的曝光补偿等级,对基准曝光时长进行补偿,得到短于基准曝光时长的补偿曝光时长;根据补偿曝光时长和基准感光度,采集至少一帧原始图像。
可以理解为,通过曝光补偿等级,对至少一帧原始图像分别采取不同的曝光补偿策略,使得待采集图像对应于不同的曝光量,以获得具有不同动态范围的图像。
需要说明的是,在曝光量最初的定义中,曝光量并不是指一个准确的数值,而是指“能够给出相同的曝光量的所有相机光圈与曝光时长的组合”。感光度、光圈和曝光时长确定了相机的曝光量,不同的参数组合可以产生相等的曝光量。曝光补偿等级是对曝光量进行调整的参数,使得某些图像欠曝光,某些图像过曝光,还可以使得某些图像恰当曝光。本申请实施例中,至少一帧第二图像对应的曝光补偿等级取值范围为EV-5至EV-1。
作为一种示例,采集低于基准曝光量的至少一帧原始图像,具体可以为两帧原始图像。该至少一帧原始图像可以被称为至少一帧第二图像,即具体为两帧第二图像,两帧第二图像对应不同的曝光补偿等级,且两帧第二图像的曝光补偿等级均小于EV0。
具体地,根据设定的曝光补偿等级,对基准曝光时长进行补偿,得到短于基准曝光时长的补偿曝光时长;根据补偿曝光时长和基准感光度,采集两帧第二图像。
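曝光补偿等级对基准曝光时长的补偿,可以按"每级EV对应2倍曝光量"的常见约定写成如下草图(该约定为本示例的假设,感光度保持为基准感光度不变):

```python
def compensated_exposure_time(base_time, ev_level):
    """按曝光补偿等级对基准曝光时长进行补偿:t = 基准时长 × 2**EV。

    EV取EV-5至EV-1(负值)时,补偿后的曝光时长短于基准曝光时长,
    对应欠曝光的第二图像。
    """
    return base_time * (2.0 ** ev_level)
```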
参见图5,图5为本申请中的一种应用流程示意图。
本申请实施例中,根据预览图像的成像质量,确定基准曝光量的图像帧数n,采集符合基准曝光量的n帧原始图像,同时采集低于基准曝光量的至少一帧原始图像,由此确定采集的多帧原始图像,进而提高了图像的成像质量,得到清晰度较高的成像效果。
图6为本申请实施例提供的第一种基于多帧图像的图像处理装置的结构示意图。
如图6所示,该基于多帧图像的图像处理装置600包括:获取模块610、降噪模块620以及合成模块630。
获取模块610,用于获取多帧原始图像;
降噪模块620,用于对部分帧原始图像基于人工智能降噪,得到降噪图像,部分帧原始图像为多帧原始图像中的至少两帧的原始图像;
合成模块630,用于根据降噪图像以及其它帧原始图像,合成得到高动态范围图像,部分帧原始图像和其它帧原始图像共同组成多帧原始图像。
可选地,一些实施例中,降噪模块620,具体用于:
对部分帧原始图像进行多帧融合降噪,得到第一降噪图像;
采用神经网络模型,对第一降噪图像进行噪声特性识别;其中,神经网络模型,已学习得到第一降噪图像的感光度与噪声特性之间的映射关系;
根据识别出的噪声特性对第一降噪图像降噪,以得到降噪图像。
可选地,一些实施例中,神经网络模型,是采用各感光度的样本图像对神经网络模型进行训练,直至神经网络模型识别出的噪声特性与相应样本图像中标注的噪声特性匹配时,神经网络模型训练完成。
可选地,一些实施例中,部分帧原始图像为至少两帧相同曝光量的第一图像,其它帧原始图像为曝光量低于第一图像的至少一帧第二图像;
合成模块630,具体用于:
根据降噪图像和至少一帧第二图像,合成得到高动态范围图像。
可选地,一些实施例中,获取模块610,具体用于:
获取预览图像;
根据预览图像的成像质量,确定基准曝光量的图像帧数n;其中,n为大于或等于2的自然数;
采集符合基准曝光量的n帧原始图像;
采集低于基准曝光量的至少一帧原始图像。
可选地,一些实施例中,成像质量与图像帧数为正向关系;
成像质量包括信噪比和成像速度中的至少一个。
可选地,一些实施例中,获取模块610,具体用于:
根据拍摄场景的光照度,确定基准曝光量;
根据基准曝光量和设定的基准感光度,确定基准曝光时长;
根据基准曝光时长和基准感光度,采集n帧原始图像。
可选地,一些实施例中,至少一帧第二图像具体为两帧第二图像;
两帧第二图像对应不同的曝光补偿等级,且两帧第二图像的曝光补偿等级小于EV0。
可选地,一些实施例中,至少一帧第二图像对应的曝光补偿等级取值范围为EV-5至EV-1。
可选地,一些实施例中,参见图7,图7为本申请实施例提供的第二种基于多帧图像的图像处理装置的结构示意图,还包括:
转换模块640,用于将高动态范围图像转为YUV图像。
需要说明的是,前述对基于多帧图像的图像处理方法实施例的解释说明也适用于该实施例的基于多帧图像的图像处理装置600,此处不再赘述。
本实施例中,通过获取多帧原始图像;对部分帧原始图像基于人工智能降噪,得到降噪图像,部分帧原始图像为多帧原始图像中的至少两帧的原始图像;根据降噪图像以及其它帧原始图像,合成得到高动态范围图像,能够更加精确地区分出高动态范围图像的画面噪声和有效细节。相较于未进行人工智能降噪处理的方案,本申请能够在一定程度上减少原始图像的采集帧数,并有助于增大每帧原始图像采集时的感光度以减小单帧拍摄时长,从而缩短整体拍摄过程所需的总时长,避免拍摄时长过长导致画面模糊,有利于清晰拍摄动态夜景。另外,本申请对部分帧原始图像基于人工智能降噪得到降噪图像后,再对降噪图像以及其它帧原始图像进行高动态合成得到高动态范围图像,从而能够在保障降噪有效性的同时降低图像降噪的运算量,在得到清晰度更好的成像效果的同时提升成像效率。
为了实现上述实施例,本申请还提出一种电子设备200,参见图8,图8为本申请实施例提供的一种电子设备的结构示意图,包括:图像传感器210、处理器220、存储器230及存储在存储器230上并可在处理器220上运行的计算机程序,图像传感器210与处理器220电连接,处理器220执行程序时,实现如上述实施例中的基于多帧图像的图像处理方法。
作为一种可能的情况,处理器220可以包括:图像信号处理ISP处理器。
其中,ISP处理器,用于控制图像传感器获取多帧原始图像。
作为另一种可能的情况,处理器220还可以包括:与ISP处理器连接的图形处理器(Graphics Processing Unit,简称GPU)。
其中,GPU,用于对部分帧原始图像基于人工智能降噪,得到降噪图像,部分帧原始图像为多帧原始图像中的至少两帧的原始图像。
GPU,还用于对高动态范围图像进行编码处理。
ISP处理器,还用于根据降噪图像以及其它帧原始图像,合成得到高动态范围图像,部分帧原始图像和其它帧原始图像共同组成多帧原始图像。
作为一种示例,请参阅图9,在图8电子设备的基础上,图9为本申请实施例提供的一种电子设备的原理示例图。电子设备200的存储器230包括非易失性存储器80和内存储器82,存储器230中存储有计算机可读指令。计算机可读指令被处理器220执行时,使得处理器220执行上述任一实施方式的基于多帧图像的图像处理方法。
如图9所示,该电子设备200包括通过系统总线81连接的处理器220、非易失性存储器80、内存储器82、显示屏83和输入装置84。其中,电子设备200的非易失性存储器80存储有操作系统和计算机可读指令。该计算机可读指令可被处理器220执行,以实现本申请实施方式的基于多帧图像的图像处理方法。该处理器220用于提供计算和控制能力,支撑整个电子设备200的运行。电子设备200的内存储器82为非易失性存储器80中的计算机可读指令的运行提供环境。电子设备200的显示屏83可以是液晶显示屏或者电子墨水显示屏等,输入装置84可以是显示屏83上覆盖的触摸层,也可以是电子设备200外壳上设置的按键、轨迹球或触控板,也可以是外接的键盘、触控板或鼠标等。该电子设备200可以是手机、平板电脑、笔记本电脑、个人数字助理或穿戴式设备(例如智能手环、智能手表、智能头盔、智能眼镜)等。本领域技术人员可以理解,图9中示出的结构,仅仅是与本申请方案相关的部分结构的示意图,并不构成对本申请方案所应用于其上的电子设备200的限定,具体的电子设备200可以包括比图中所示更多或更少的部件,或者组合某些部件,或者具有不同的部件布置。
为了实现上述实施例,本申请还提出一种图像处理电路,请参阅图10,图10为本申请实施例提供的一种图像处理电路的原理示意图,如图10所示,图像处理电路90包括图像信号处理ISP处理器91(ISP处理器91作为处理器220)和图形处理器GPU。
ISP处理器,与图像传感器电连接,用于控制图像传感器获取多帧原始图像;
GPU,与ISP处理器电连接,用于对部分帧原始图像基于人工智能降噪,得到降噪图像,部分帧原始图像为多帧原始图像中的至少两帧的原始图像。
ISP处理器,还用于根据降噪图像以及其它帧原始图像,合成得到高动态范围图像,部分帧原始图像和其它帧原始图像共同组成多帧原始图像。
摄像头93捕捉的图像数据首先由ISP处理器91处理,ISP处理器91对图像数据进行分析以捕捉可用于确定摄像头93的一个或多个控制参数的图像统计信息。摄像头93可包括一个或多个透镜932和图像传感器934。图像传感器934可包括色彩滤镜阵列(如Bayer滤镜),图像传感器934可获取每个成像像素捕捉的光强度和波长信息,并提供可由ISP处理器91处理的一组原始图像数据。传感器94(如陀螺仪)可基于传感器94接口类型把采集的图像处理的参数(如防抖参数)提供给ISP处理器91。传感器94接口可以为SMIA(Standard Mobile Imaging Architecture,标准移动成像架构)接口、其它串行或并行照相机接口或上述接口的组合。
此外,图像传感器934也可将原始图像数据发送给传感器94,传感器94可基于传感器94接口类型把原始图像数据提供给ISP处理器91,或者传感器94将原始图像数据存储到图像存储器95中。
ISP处理器91按多种格式逐个像素地处理原始图像数据。例如,每个图像像素可具有8、10、12或14比特的位深度,ISP处理器91可对原始图像数据进行一个或多个图像处理操作、收集关于图像数据的统计信息。其中,图像处理操作可按相同或不同的位深度精度进行。
ISP处理器91还可从图像存储器95接收图像数据。例如,传感器94接口将原始图像数据发送给图像存储器95,图像存储器95中的原始图像数据再提供给ISP处理器91以供处理。图像存储器95可为存储器230、存储器230的一部分、存储设备、或电子设备内的独立的专用存储器,并可包括DMA(Direct Memory Access,直接存储器存取)特征。
当接收到来自图像传感器934接口或来自传感器94接口或来自图像存储器95的原始图像数据时,ISP处理器91可进行一个或多个图像处理操作,如时域滤波。处理后的图像数据可发送给图像存储器95,以便在被显示之前进行另外的处理。ISP处理器91从图像存储器95接收处理数据,并对处理数据进行原始域中以及RGB和YCbCr颜色空间中的图像数据处理。ISP处理器91处理后的图像数据可输出给显示器97(显示器97可包括显示屏83),以供用户观看和/或由图形引擎或GPU进一步处理。此外,ISP处理器91的输出还可发送给图像存储器95,且显示器97可从图像存储器95读取图像数据。在一个实施例中,图像存储器95可被配置为实现一个或多个帧缓冲器。此外,ISP处理器91的输出可发送给编码器/解码器96,以便编码/解码图像数据。编码的图像数据可被保存,并在显示于显示器97设备上之前解压缩。编码器/解码器96可由CPU或GPU或协处理器实现。
ISP处理器91确定的统计数据可发送给控制逻辑器92单元。例如,统计数据可包括自动曝光、自动白平衡、自动聚焦、闪烁检测、黑电平补偿、透镜932阴影校正等图像传感器934统计信息。控制逻辑器92可包括执行一个或多个例程(如固件)的处理元件和/或微控制器,一个或多个例程可根据接收的统计数据,确定摄像头93的控制参数及ISP处理器91的控制参数。例如,摄像头93的控制参数可包括传感器94控制参数(例如增益、曝光控制的积分时间、防抖参数等)、照相机闪光控制参数、透镜932控制参数(例如聚焦或变焦用焦距)、或这些参数的组合。ISP控制参数可包括用于自动白平衡和颜色调整(例如,在RGB处理期间)的增益水平和色彩校正矩阵,以及透镜932阴影校正参数。
以下为运用图10中图像处理技术实现基于多帧图像的图像处理方法的步骤:ISP处理器控制图像传感器获取多帧原始图像;GPU对部分帧原始图像基于人工智能降噪,得到降噪图像,部分帧原始图像为多帧原始图像中的至少两帧的原始图像;ISP处理器再根据降噪图像以及其它帧原始图像,合成得到高动态范围图像,部分帧原始图像和其它帧原始图像共同组成多帧原始图像。
为了实现上述实施例,本申请实施例还提供了一种存储介质,当存储介质中的指令由处理器执行时,使得处理器执行以下步骤:获取多帧原始图像;对部分帧原始图像基于人工智能降噪,得到降噪图像,部分帧原始图像为多帧原始图像中的至少两帧的原始图像;根据降噪图像以及其它帧原始图像,合成得到高动态范围图像,部分帧原始图像和其它帧原始图像共同组成多帧原始图像。
本领域普通技术人员可以理解实现上述实施例方法中的全部或部分流程,是可以通过计算机程序来指令相关的硬件来完成,所述的程序可存储于一非易失性计算机可读取存储介质中,该程序在执行时,可包括如上述各方法的实施例的流程。其中,所述的存储介质可为磁碟、光盘、只读存储记忆体(Read-Only Memory,ROM)等。
以上所述实施例仅表达了本申请的几种实施方式,其描述较为具体和详细,但并不能因此而理解为对本申请专利范围的限制。应当指出的是,对于本领域的普通技术人员来说,在不脱离本申请构思的前提下,还可以做出若干变形和改进,这些都属于本申请的保护范围。因此,本申请专利的保护范围应以所附权利要求为准。

Claims (19)

  1. 一种基于多帧图像的图像处理方法,其特征在于,所述方法包括以下步骤:
    获取多帧原始图像;
    对部分帧原始图像基于人工智能降噪,得到降噪图像,所述部分帧原始图像为所述多帧原始图像中的至少两帧的原始图像;
    根据所述降噪图像以及其它帧原始图像,合成得到高动态范围图像,所述部分帧原始图像和所述其它帧原始图像共同组成所述多帧原始图像。
  2. 根据权利要求1所述的基于多帧图像的图像处理方法,其特征在于,所述对部分帧原始图像基于人工智能降噪,得到降噪图像,包括:
    对所述部分帧原始图像进行多帧融合降噪,得到第一降噪图像;
    采用神经网络模型,对所述第一降噪图像进行噪声特性识别;其中,所述神经网络模型,已学习得到所述第一降噪图像的感光度与噪声特性之间的映射关系;
    根据识别出的噪声特性对所述第一降噪图像降噪,以得到所述降噪图像。
  3. 根据权利要求2所述的基于多帧图像的图像处理方法,其特征在于,所述神经网络模型,是采用各感光度的样本图像对所述神经网络模型进行训练,直至所述神经网络模型识别出的噪声特性与相应样本图像中标注的噪声特性匹配时,所述神经网络模型训练完成。
  4. 根据权利要求1-3任一项所述的基于多帧图像的图像处理方法,其特征在于,所述部分帧原始图像为至少两帧相同曝光量的第一图像,所述其它帧原始图像为曝光量低于所述第一图像的至少一帧第二图像;
    所述根据所述降噪图像以及其它帧原始图像,合成得到高动态范围图像,包括:
    根据所述降噪图像和所述至少一帧第二图像,合成得到所述高动态范围图像。
  5. 根据权利要求1-4任一项所述的基于多帧图像的图像处理方法,其特征在于,所述获取多帧原始图像之前,还包括:
    获取预览图像;
    所述获取多帧原始图像,包括:
    根据所述预览图像的成像质量,确定基准曝光量的图像帧数n;其中,n为大于或等于2的自然数;
    采集符合所述基准曝光量的n帧原始图像;
    采集低于所述基准曝光量的至少一帧原始图像。
  6. 根据权利要求5所述的基于多帧图像的图像处理方法,其特征在于,所述采集符合所述基准曝光量的n帧原始图像,包括:
    根据拍摄场景的光照度,确定基准曝光量;
    根据所述基准曝光量和设定的基准感光度,确定基准曝光时长;
    根据所述基准曝光时长和所述基准感光度,采集所述n帧原始图像。
  7. 根据权利要求4-6任一项所述的基于多帧图像的图像处理方法,其特征在于,所述至少一帧第二图像具体为两帧第二图像;
    所述两帧第二图像对应不同的曝光补偿等级,且所述两帧第二图像的曝光补偿等级小于EV0。
  8. 根据权利要求7所述的基于多帧图像的图像处理方法,其特征在于,所述至少一帧第二图像对应的曝光补偿等级取值范围为EV-5至EV-1。
  9. 根据权利要求1-8任一项所述的基于多帧图像的图像处理方法,其特征在于,所述根据所述降噪图像以及其它帧原始图像,合成得到高动态范围图像之后,还包括:
    将所述高动态范围图像转为YUV图像。
  10. 一种基于多帧图像的图像处理装置,其特征在于,所述装置包括:
    获取模块,用于获取多帧原始图像;
    降噪模块,用于对部分帧原始图像基于人工智能降噪,得到降噪图像,所述部分帧原始图像为所述多帧原始图像中的至少两帧的原始图像;
    合成模块,用于根据所述降噪图像以及其它帧原始图像,合成得到高动态范围图像,所述部分帧原始图像和所述其它帧原始图像共同组成所述多帧原始图像。
  11. 根据权利要求10所述的基于多帧图像的图像处理装置,其特征在于,所述降噪模块,具体用于:
    对所述部分帧原始图像进行多帧融合降噪,得到第一降噪图像;
    采用神经网络模型,对所述第一降噪图像进行噪声特性识别;其中,所述神经网络模型,已学习得到所述第一降噪图像的感光度与噪声特性之间的映射关系;
    根据识别出的噪声特性对所述第一降噪图像降噪,以得到所述降噪图像。
  12. 根据权利要求11所述的基于多帧图像的图像处理装置,其特征在于,所述神经网络模型,是采用各感光度的样本图像对所述神经网络模型进行训练,直至所述神经网络模型识别出的噪声特性与相应样本图像中标注的噪声特性匹配时,所述神经网络模型训练完成。
  13. 根据权利要求10-12任一项所述的基于多帧图像的图像处理装置,其特征在于,所述部分帧原始图像为至少两帧相同曝光量的第一图像,所述其它帧原始图像为曝光量低于所述第一图像的至少一帧第二图像;
    所述合成模块,具体用于:
    根据所述降噪图像和所述至少一帧第二图像,合成得到所述高动态范围图像。
  14. 一种电子设备,其特征在于,包括:图像传感器、存储器、处理器及存储在存储器上并可在处理器上运行的计算机程序,所述图像传感器与所述处理器电连接,所述处理器执行所述程序时,实现如权利要求1-9中任一所述的基于多帧图像的图像处理方法。
  15. 根据权利要求14所述的电子设备,其特征在于,所述处理器包括图像信号处理ISP处理器;
    所述ISP处理器,用于控制所述图像传感器获取多帧原始图像。
  16. 根据权利要求15所述的电子设备,其特征在于,所述处理器包括与所述ISP处理器连接的图形处理器GPU;
    其中,所述GPU,用于对部分帧原始图像基于人工智能降噪,得到降噪图像,所述部分帧原始图像为所述多帧原始图像中的至少两帧的原始图像;
    所述ISP处理器,还用于根据所述降噪图像以及其它帧原始图像,合成得到高动态范围图像,所述部分帧原始图像和所述其它帧原始图像共同组成所述多帧原始图像。
  17. 根据权利要求16所述的电子设备,其特征在于,
    所述GPU,还用于对所述高动态范围图像进行编码处理。
  18. 一种图像处理电路,其特征在于,所述图像处理电路包括图像信号处理ISP处理器和图形处理器GPU;
    所述ISP处理器,与图像传感器电连接,用于控制所述图像传感器获取多帧原始图像;
    所述GPU,与所述ISP处理器电连接,用于对部分帧原始图像基于人工智能降噪,得到降噪图像,所述部分帧原始图像为所述多帧原始图像中的至少两帧的原始图像;
    所述ISP处理器,还用于根据所述降噪图像以及其它帧原始图像,合成得到高动态范围图像,所述部分帧原始图像和所述其它帧原始图像共同组成所述多帧原始图像。
  19. 一种计算机可读存储介质,其上存储有计算机程序,其特征在于,该程序被处理器执行时实现如权利要求1-9中任一所述的基于多帧图像的图像处理方法。
PCT/CN2020/081433 2019-04-09 2020-03-26 基于多帧图像的图像处理方法、装置、电子设备 WO2020207261A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910279858.9 2019-04-09
CN201910279858.9A CN110062159A (zh) 2019-04-09 2019-04-09 基于多帧图像的图像处理方法、装置、电子设备
