WO2020207262A1 - Image processing method and apparatus based on multiple frames of images, and electronic device - Google Patents
Image processing method and apparatus based on multiple frames of images, and electronic device
- Publication number
- WO2020207262A1 (PCT/CN2020/081471)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- original
- images
- frames
- noise
- Prior art date
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/80—Camera processing pipelines; Components thereof
- H04N23/81—Camera processing pipelines; Components thereof for suppressing or minimising disturbance in the image signal generation
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/95—Computational photography systems, e.g. light-field imaging systems
- H04N23/951—Computational photography systems, e.g. light-field imaging systems by using two or more images to influence resolution, frame rate or aspect ratio
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/222—Studio circuitry; Studio devices; Studio equipment
- H04N5/262—Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
- H04N5/265—Mixing
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N9/00—Details of colour television systems
- H04N9/64—Circuits for processing colour signals
- H04N9/646—Circuits for processing colour signals for image enhancement, e.g. vertical detail restoration, cross-colour elimination, contour correction, chrominance trapping filters
Definitions
- This application relates to the field of imaging technology, and in particular to an image processing method, device, and electronic equipment based on multi-frame images.
- Mobile terminal devices, such as smart phones and tablet computers, have built-in cameras, and with the enhancement of mobile terminal processing capabilities and the development of camera technology, the performance of built-in cameras is getting stronger and the quality of captured images is getting higher and higher.
- Mobile terminal devices are simple to operate and easy to carry, and in daily life more and more users use smart phones, tablet computers, and other mobile terminal devices to take pictures.
- This application aims to solve, at least to a certain extent, one of the technical problems in the related art.
- The purpose of this application is to propose an image processing method, device, and electronic device based on multi-frame images, which can more accurately distinguish the picture noise from the effective details of a high dynamic range image and help reduce the number of original image acquisition frames, so that the total time required for the overall shooting process is shortened, blurring caused by an overly long shooting time is avoided, and dynamic night scenes can be shot clearly.
- The multi-frame image-based image processing method proposed by the embodiment of the first aspect of this application obtains multiple frames of original images; denoises some of the original images based on artificial intelligence to obtain a first noise-reduced image, and denoises the other original images based on artificial intelligence to obtain a second noise-reduced image; converts the first noise-reduced image into a first YUV image and the second noise-reduced image into a second YUV image; and synthesizes a high dynamic range image according to the first YUV image and the second YUV image.
- In this way, the picture noise and effective details of the high dynamic range image can be distinguished more accurately, which helps to reduce the number of original image acquisition frames, so that the total time required for the overall shooting process is shortened, blurring caused by an overly long shooting time is avoided, and dynamic night scenes can be shot clearly.
- The image processing device based on multi-frame images proposed in the embodiment of the second aspect of the present application includes: an acquisition module for acquiring multiple frames of original images; a noise reduction module for denoising some of the original images based on artificial intelligence to obtain a first noise-reduced image and denoising the other original images based on artificial intelligence to obtain a second noise-reduced image, where the partial original images are at least two frames of the multiple original images; a conversion module for converting the first noise-reduced image into a first YUV image and the second noise-reduced image into a second YUV image; and a synthesis module for synthesizing a high dynamic range image according to the first YUV image and the second YUV image.
- The image processing device based on multi-frame images proposed in the embodiment of the second aspect of the present application obtains multiple frames of original images; denoises some of the original images based on artificial intelligence to obtain a first noise-reduced image, and denoises the other original images based on artificial intelligence to obtain a second noise-reduced image; converts the first noise-reduced image into a first YUV image and the second noise-reduced image into a second YUV image; and synthesizes a high dynamic range image according to the first YUV image and the second YUV image.
- In this way, the picture noise and effective details of the high dynamic range image can be distinguished more accurately, which helps to reduce the number of original image acquisition frames, so that the total time required for the overall shooting process is shortened, blurring caused by an overly long shooting time is avoided, and dynamic night scenes can be shot clearly.
- The electronic device proposed in the embodiment of the third aspect of the present application includes: an image sensor, a memory, a processor, and a computer program stored in the memory and executable on the processor.
- The image sensor is electrically connected to the processor, and when the processor executes the program, the image processing method based on multi-frame images proposed in the embodiment of the first aspect of the present application is implemented.
- The electronic device proposed in the embodiment of the third aspect of the present application obtains multiple frames of original images; denoises some of the original images based on artificial intelligence to obtain a first noise-reduced image, and denoises the other original images based on artificial intelligence to obtain a second noise-reduced image; converts the first noise-reduced image into a first YUV image and the second noise-reduced image into a second YUV image; and synthesizes a high dynamic range image according to the first YUV image and the second YUV image. It can thereby more accurately distinguish the picture noise from the effective details of the high dynamic range image, which helps to reduce the number of original image acquisition frames, so that the total time required for the overall shooting process is shortened, blurring caused by an overly long shooting time is avoided, and dynamic night scenes can be shot clearly.
- The image processing circuit proposed in the embodiment of the fourth aspect of the present application includes: an image signal processing (ISP) processor and a graphics processor (GPU). The ISP processor is electrically connected to the image sensor and is used to control the image sensor to acquire multiple frames of original images. The GPU, which is electrically connected to the ISP processor, is used for denoising some of the original frames based on artificial intelligence to obtain a first noise-reduced image, and denoising the other frames of original images based on artificial intelligence to obtain a second noise-reduced image.
- The ISP processor is also used to convert the first noise-reduced image into a first YUV image and the second noise-reduced image into a second YUV image, and to synthesize a high dynamic range image according to the first YUV image and the second YUV image.
- The image processing circuit proposed in the embodiment of the fourth aspect of the present application obtains multiple frames of original images; denoises some of the original images based on artificial intelligence to obtain a first noise-reduced image, and denoises the other original images based on artificial intelligence to obtain a second noise-reduced image; converts the first noise-reduced image into a first YUV image and the second noise-reduced image into a second YUV image; and synthesizes a high dynamic range image according to the first YUV image and the second YUV image. It can thereby more accurately distinguish the picture noise from the effective details of the high dynamic range image, which helps to reduce the number of original image acquisition frames, so that the total time required for the overall shooting process is shortened, blurring caused by an overly long shooting time is avoided, and dynamic night scenes can be shot clearly.
- The computer-readable storage medium provided by the embodiment of the fifth aspect of the present application stores a computer program thereon, and when the program is executed by a processor, the multi-frame image-based image processing method proposed in the embodiment of the first aspect is implemented.
- The computer-readable storage medium proposed in the embodiment of the fifth aspect of the present application acquires multiple frames of original images; denoises some of the original images based on artificial intelligence to obtain a first noise-reduced image, and denoises the other original images based on artificial intelligence to obtain a second noise-reduced image; converts the first noise-reduced image into a first YUV image and the second noise-reduced image into a second YUV image; and synthesizes a high dynamic range image according to the first YUV image and the second YUV image. It can thereby more accurately distinguish the picture noise from the effective details of the high dynamic range image, which helps to reduce the number of original image acquisition frames, so that the total time required for the overall shooting process is shortened, blurring caused by an overly long shooting time is avoided, and dynamic night scenes can be shot clearly.
- FIG. 1 is a schematic flowchart of the first image processing method based on multi-frame images provided by an embodiment of this application;
- FIG. 2 is a schematic diagram of an application process in this application.
- FIG. 3 is a schematic flowchart of a second image processing method based on multi-frame images provided by an embodiment of the application;
- FIG. 4 is a schematic flowchart of a third image processing method based on multi-frame images provided by an embodiment of the application;
- FIG. 5 is a schematic flowchart of a fourth image processing method based on multi-frame images provided by an embodiment of the application.
- FIG. 6 is a schematic structural diagram of the first image processing device based on multi-frame images provided by an embodiment of the application;
- FIG. 7 is a schematic structural diagram of a second image processing device based on multi-frame images provided by an embodiment of the application.
- FIG. 8 is a schematic structural diagram of an electronic device provided by an embodiment of the application.
- FIG. 9 is a schematic diagram of the principle of an electronic device provided by an embodiment of the application.
- FIG. 10 is a schematic diagram of the principle of an image processing circuit provided by an embodiment of the application.
- This application proposes an image processing method based on multi-frame images, which obtains multiple frames of original images; denoises some of the original images based on artificial intelligence to obtain a first noise-reduced image, and denoises the other original images based on artificial intelligence to obtain a second noise-reduced image, where the partial original images are at least two frames of the multi-frame original images; converts the first noise-reduced image into a first YUV image and the second noise-reduced image into a second YUV image; and synthesizes a high dynamic range image according to the first YUV image and the second YUV image.
- FIG. 1 is a schematic flowchart of the first image processing method based on multi-frame images provided by an embodiment of the application.
- the image processing method based on multi-frame images in the embodiments of the present application is applied to an electronic device.
- the electronic device may be a mobile phone, a tablet computer, a personal digital assistant, a wearable device, and other hardware devices with various operating systems and imaging devices.
- the image processing method based on multi-frame images includes the following steps:
- Step 101 Obtain multiple frames of original images.
- The original image may be, for example, a RAW-format image collected by an image sensor of the electronic device without any processing, which is not limited here.
- The RAW-format image is the original image obtained by the image sensor converting the captured light signal into a digital signal.
- The RAW-format image records the original information from the digital camera sensor, as well as some metadata generated by the camera, such as the sensitivity setting, shutter speed, aperture value, and white balance.
- The preview image of the current shooting scene can be obtained to determine whether the current shooting scene is a night scene. Because the environmental brightness values differ between scenes, the preview image content also differs. After determining that the current shooting scene is a night scene according to the picture content of the preview image and the ambient brightness value of each area, the night-scene shooting mode can be started and multiple frames of original images can be acquired under different exposures.
- For example, if the picture content of the preview image includes a night sky or night-scene light sources, or the ambient brightness value in each area of the preview image matches the brightness distribution characteristics of images captured in a night-scene environment, it can be determined that the current shooting scene is a night scene.
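- As a purely illustrative sketch (not part of the claimed method), such a night-scene check on the preview image could be approximated as follows; the threshold values and the helper name are assumptions introduced only for this example.

```python
import numpy as np

def looks_like_night_scene(preview_rgb, dark_thresh=60, highlight_thresh=220, highlight_ratio=0.002):
    """Rough night-scene heuristic: a mostly dark frame with a few bright light sources.

    preview_rgb: HxWx3 uint8 preview image.
    The thresholds are illustrative assumptions, not values from the application.
    """
    luma = preview_rgb.astype(np.float32).mean(axis=2)   # simple brightness proxy
    mostly_dark = luma.mean() < dark_thresh               # overall ambient brightness is low
    bright_fraction = (luma > highlight_thresh).mean()    # small bright regions such as lamps
    has_light_sources = bright_fraction > highlight_ratio
    return mostly_dark and has_light_sources
```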
- The electronic device can shoot multiple frames of original images for image synthesis, and can also select clear images among them for synthesis and imaging.
- Specifically, the image sensor of the electronic device can be controlled to shoot multiple frames of original images under different exposures; for example, low-exposure shooting is used to clearly image highlight areas, and high-exposure shooting is used to clearly image dark areas.
- Step 102 Denoise some of the original frames based on artificial intelligence to obtain a first noise-reduced image, and denoise the other original frames based on artificial intelligence to obtain a second noise-reduced image, where the partial original frames are at least two frames of the multiple original images.
- In this embodiment, the partial original frames are at least two frames of first images with the same exposure, and the other original frames are at least one frame of second images with a lower exposure than the first images.
- Since the first noise-reduced image is obtained by denoising some of the original images based on artificial intelligence, the second noise-reduced image is obtained by denoising the other original images based on artificial intelligence, and the noise characteristics of the partial original images and the other original images are not exactly the same, the noise reduction is more targeted and can effectively improve the noise reduction effect.
- Because the image sensor in the electronic device is subject to varying degrees of photoelectric and electromagnetic interference from peripheral circuits and its own pixels during shooting, the captured original image inevitably contains noise, and the sharpness of the captured images differs with the degree of interference. Therefore, the collected multi-frame original images also contain noise, and the partial original frames can be further denoised based on artificial intelligence to obtain the first noise-reduced image, while the other original images are denoised based on artificial intelligence to obtain the second noise-reduced image.
- In night scenes, images are usually captured with a larger aperture and a longer exposure time; if a higher sensitivity is instead selected to reduce the exposure time, the captured images will inevitably contain noise.
- In a possible implementation, multi-frame fusion noise reduction may first be performed on the partial original images to obtain an initial noise-reduced image.
- Performing image alignment processing on the partial original images and synthesizing them into a multi-frame fusion image is equivalent to temporal noise reduction, which initially improves the signal-to-noise ratio of the picture.
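- A minimal sketch of the multi-frame fusion step described above, assuming the partial original frames have already been aligned (a real pipeline would first register the frames); averaging N equally exposed RAW frames is the temporal noise reduction that raises the signal-to-noise ratio.

```python
import numpy as np

def fuse_frames_temporal(raw_frames):
    """Temporal noise reduction: average N aligned, equally exposed RAW frames.

    raw_frames: list of HxW float32 arrays (same exposure, already aligned).
    Returns the initial noise-reduced image used as input to the AI denoiser.
    """
    stack = np.stack(raw_frames, axis=0).astype(np.float32)
    return stack.mean(axis=0)   # noise variance drops roughly as 1/N for independent noise
```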
- The first neural network model is used to identify the noise characteristics of the initial noise-reduced image, and the second neural network model is used to identify the noise characteristics of each frame of the other original images, so that the highlight areas and dark areas in the initial noise-reduced image and in each frame of the other original images can be denoised at the same time, and noise-reduced images with a better noise reduction effect can be obtained.
- In this way, the first noise-reduced image is obtained by denoising the partial original frames based on artificial intelligence, and the second noise-reduced image is obtained by denoising the other original frames based on artificial intelligence, where the first noise-reduced image and the second noise-reduced image are still unprocessed RAW images.
- When denoising the partial original images based on artificial intelligence, the first neural network model can be used to identify the noise characteristics of the initial noise-reduced image, where the first neural network model has learned the mapping relationship between the sensitivity of the initial noise-reduced image and its noise characteristics.
- Because this mapping relationship between sensitivity and noise characteristics has been learned, the initial noise-reduced image can be input into the first neural network model, which identifies its noise characteristics; the initial noise-reduced image is then denoised according to the identified noise characteristics to obtain the first noise-reduced image, thereby achieving noise reduction and improving the signal-to-noise ratio of the image.
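- The following is a hedged, classical stand-in for the learned denoiser, not the neural network of this application: given a noise variance predicted from the learned sensitivity-to-noise mapping, a simple Wiener-style filter shrinks noisy flat regions while preserving structure.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def adaptive_denoise(image, noise_variance, window=5):
    """Noise-adaptive (Wiener-style) filtering as an illustrative stand-in for the AI denoiser.

    image: HxW float32 RAW image; noise_variance: scalar or HxW map predicted from the
    learned ISO-to-noise-characteristic mapping. Window size is an illustrative choice.
    """
    local_mean = uniform_filter(image, window)
    local_var = uniform_filter(image * image, window) - local_mean ** 2
    signal_var = np.maximum(local_var - noise_variance, 0.0)
    gain = signal_var / np.maximum(local_var, 1e-6)   # shrink noisy flat areas more strongly
    return local_mean + gain * (image - local_mean)
```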
- The sensitivity, also known as the ISO value, is an index that measures the sensitivity of film to light. A low-sensitivity film requires a longer exposure time to achieve the same imaging as a high-sensitivity film.
- The sensitivity of a digital camera is an indicator similar to film sensitivity.
- The ISO of a digital camera can be adjusted by adjusting the sensitivity of the photosensitive device or by combining photosensitive points, that is, the ISO can be increased by raising the light sensitivity of the photosensitive device or by combining several adjacent photosensitive points.
- The noise characteristic may be the statistical characteristic of the random noise caused by the image sensor.
- The noise mentioned here mainly includes thermal noise and shot noise, where thermal noise follows a Gaussian distribution and shot noise follows a Poisson distribution.
- The statistical characteristic in the embodiments of this application may refer to the variance of the noise, but it may also be another possible value, which is not limited here.
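- A small illustrative sketch of such a statistical characteristic: the signal-dependent variance model sigma^2(I) = a*I + b combines Poisson-like shot noise (variance growing with the signal) with Gaussian thermal/read noise (constant variance). Estimating it from flat patches is an assumption made only for this example.

```python
import numpy as np

def fit_noise_variance_model(patch_means, patch_variances):
    """Fit sigma^2(I) = a*I + b: 'a' captures Poisson-distributed shot noise
    (variance grows with signal), 'b' captures Gaussian thermal/read noise.

    patch_means / patch_variances: 1-D arrays measured on flat image patches.
    Returns (a, b). Using flat patches is an assumption for this sketch.
    """
    A = np.stack([patch_means, np.ones_like(patch_means)], axis=1)
    (a, b), *_ = np.linalg.lstsq(A, patch_variances, rcond=None)
    return a, b
```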
- Step 103 Convert the first noise-reduced image into a first YUV image, and convert the second noise-reduced image into a second YUV image.
- The image format that can be processed by the display of the electronic device is the YUV format. The luminance signal of an image is called Y, and the chrominance signal is composed of two mutually independent signals, often called U and V.
- Specifically, the format conversion can be performed through the image signal processor (Image Signal Processing, ISP), which converts the RAW-format image into a YUV-format image. Due to the limited size of the display interface of the display, in order to achieve a better preview effect, the converted YUV-format image can be compressed to the preview size for preview display.
- The number of first noise-reduced images obtained above is one, and the number of second noise-reduced images corresponds to the number of original images contained in the other original frames.
- The first noise-reduced image can be converted into a first YUV image, and each second noise-reduced image can be converted into a second YUV image respectively to obtain multiple second YUV images, so that when the high dynamic range image is synthesized, each input frame used for synthesis has already been accurately noise-reduced.
- This effectively ensures that the synthesis at each brightness level will not exhibit excessive noise discontinuity and better protects the details of the image at each brightness.
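- For reference, a RAW image that has been demosaiced to RGB (an assumption made here; the application delegates the conversion to the ISP) can be converted to YUV with the standard BT.601 coefficients, as sketched below.

```python
import numpy as np

def rgb_to_yuv_bt601(rgb):
    """Convert an HxWx3 float RGB image (values in 0..1) to YUV using BT.601 coefficients.

    The application performs this conversion inside the ISP; demosaicing the RAW data
    to RGB beforehand is assumed only for this illustration.
    """
    m = np.array([[ 0.299,    0.587,    0.114  ],   # Y
                  [-0.14713, -0.28886,  0.436  ],   # U
                  [ 0.615,   -0.51499, -0.10001]])  # V
    return rgb @ m.T
```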
- Converting the first noise-reduced image into the first YUV image may include: performing detail enhancement processing on the first noise-reduced image according to the partial original frames, and converting the processed first noise-reduced image into the first YUV image.
- The original multi-frame EV0 RAW images can be retained, and the multi-frame EV0 RAW images can be used to perform detail enhancement processing on the first noise-reduced image, so that image details that may have been lost during the artificial intelligence noise reduction are superimposed again during fusion, effectively guaranteeing the completeness of the image details.
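- A minimal sketch of this detail-enhancement idea: a high-frequency detail layer extracted from the retained EV0 RAW frames is added back to the AI-denoised image. The blur radius and gain are illustrative assumptions, not values from the application.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def enhance_details(denoised, ev0_frames, sigma=2.0, gain=0.5):
    """Superimpose detail possibly lost during AI noise reduction, using the original EV0 frames.

    denoised:   first noise-reduced image (HxW float32).
    ev0_frames: list of retained EV0 RAW frames, aligned with `denoised`.
    sigma and gain are illustrative parameters.
    """
    reference = np.mean(np.stack(ev0_frames), axis=0)
    detail_layer = reference - gaussian_filter(reference, sigma)   # high-frequency content
    return denoised + gain * detail_layer
```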
- Step 104 Synthesize a high dynamic range image according to the first YUV image and the second YUV image.
- Specifically, high-dynamic-range synthesis can be performed on the first YUV image and the second YUV images to obtain a high dynamic range image.
- High dynamic range (High-Dynamic Range, HDR for short) images can provide more dynamic range and image details than ordinary low dynamic range (Low-Dynamic Range, LDR for short) images.
- Since the first YUV image and each frame of the second YUV image are taken under different exposure conditions and obtained through noise reduction processing, they contain picture information of different brightness.
- The first YUV image and each second YUV image may be over-exposed, under-exposed, or properly exposed. After the first YUV image and each frame of the second YUV image are synthesized into a high dynamic range image, each area of the synthesized high dynamic range image can be properly exposed, and the result is also closer to the actual scene.
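- One common way to realize such a synthesis is exposure-fusion-style weighting by "well-exposedness"; the sketch below is an illustrative assumption, not necessarily the synthesis algorithm of this application.

```python
import numpy as np

def fuse_exposures(yuv_images, sigma=0.2):
    """Exposure-fusion style HDR synthesis driven by the luma (Y) channel.

    yuv_images: list of HxWx3 float YUV images (Y in 0..1) taken at different exposures.
    Pixels whose brightness is close to mid-grey receive larger weights, so the properly
    exposed regions of each frame dominate the result. This Mertens-style weighting is
    only one possible strategy.
    """
    stack = np.stack(yuv_images, axis=0).astype(np.float32)
    y = stack[..., 0]
    weights = np.exp(-0.5 * ((y - 0.5) / sigma) ** 2)        # well-exposedness weight
    weights /= weights.sum(axis=0, keepdims=True) + 1e-8
    return (weights[..., None] * stack).sum(axis=0)           # weighted blend of Y, U, V
```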
- In this way, a high dynamic range image is synthesized, and the picture noise and effective details of the high dynamic range image can be distinguished more accurately, which helps to reduce the number of original image acquisition frames to a certain extent and allows the sensitivity used for acquiring each frame of the original image to be increased to reduce the shooting time. The total time required for the overall shooting process is thus shortened, blurring caused by an overly long shooting time is avoided, and dynamic night scenes can be shot clearly.
- Since the first noise-reduced image is obtained by denoising some of the original frames based on artificial intelligence, the second noise-reduced image is obtained by denoising the other original frames based on artificial intelligence, and the noise characteristics of the partial original images and the other original images are not exactly the same, the noise reduction is more targeted and the noise reduction effect can be effectively improved.
- In the embodiments of this application, a neural network model can be used for noise reduction, and the neural network model can be trained with sample images of various sensitivities to improve its ability to recognize noise characteristics.
- The neural network model includes a first neural network model and a second neural network model. The specific training process for one of them is shown in FIG. 3; the training process of the other neural network model is similar and can be deduced by analogy.
- FIG. 3 is a schematic flowchart of the second image processing method based on multi-frame images provided by an embodiment of the application, which specifically includes the following steps:
- Step 301 Obtain sample images of each sensitivity.
- the noise characteristics of the image have been marked in the sample image.
- the sample image may be an image obtained by shooting with different sensitivity settings under different environmental brightness.
- The environmental brightness and ISO may be subdivided, and the number of frames of sample images may be increased, so that after the initial noise-reduced image is input into the first neural network model, the first neural network model can accurately identify the statistical characteristics of the initial noise-reduced image.
- Step 302 Train the first neural network model using the sample images.
- The statistical characteristics marked in the sample images are used as the features for model training, and the sample images marked with statistical characteristics are input into the first neural network model to train it, so that the statistical characteristics of an image can subsequently be identified.
- It should be noted that the neural network model is only one possible way to achieve noise reduction based on artificial intelligence; any other possible way can also be used in actual implementation.
- For example, it can also be realized by traditional programming techniques (such as simulation methods and engineering methods), or by genetic algorithms and artificial neural networks.
- The first neural network model is trained with the statistical characteristics labeled in the sample images because the labeled sample images can clearly indicate the noise locations and noise types of the images; with the labeled statistical characteristics used as the training features, the statistical characteristics of an image can be identified after the initial noise-reduced image is input into the first neural network model.
- Step 303 Train until the noise characteristics identified by the first neural network model match the noise characteristics marked in the corresponding sample images, at which point the training of the first neural network model is completed.
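- A hedged sketch of such a training loop in PyTorch; the tiny network architecture, the ISO-as-extra-channel input, and the mean-squared-error criterion used to judge whether the predicted noise characteristics "match" the labelled ones are all assumptions for illustration.

```python
import torch
import torch.nn as nn

class NoiseCharacteristicNet(nn.Module):
    """Tiny CNN mapping a noisy RAW patch (plus its ISO) to a noise-variance map.
    The architecture is an assumption for illustration, not the application's network."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1),
        )

    def forward(self, patch, iso):
        # patch: (B, 1, H, W); iso: (B,) sensitivities broadcast to a constant channel
        iso_plane = iso.float().view(-1, 1, 1, 1).expand_as(patch)
        return self.body(torch.cat([patch, iso_plane], dim=1))

def train(model, loader, epochs=10, lr=1e-3):
    """Train until predicted noise characteristics match the labelled ones (here: MSE loss)."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        for patch, iso, labelled_noise in loader:   # sample images labelled per sensitivity
            opt.zero_grad()
            loss = loss_fn(model(patch, iso), labelled_noise)
            loss.backward()
            opt.step()
```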
- In a possible implementation, when the multiple frames of original images are collected in step 101, the number n of image frames at the reference exposure can be determined according to the imaging quality of the preview image, so that n frames of original images meeting the reference exposure are collected, and at least one frame of original image below the reference exposure is collected.
- FIG. 4 is a schematic flowchart of a third image processing method based on multi-frame images provided by an embodiment of the application.
- Step 101 may also include:
- Step 401 Determine the image frame number n of the reference exposure according to the imaging quality of the preview image.
- the preview image is obtained in advance, for example, it may be a preview image taken by turning on a camera, or it may be read from a memory, which is not limited.
- n is a natural number greater than or equal to 2.
- The value of the image frame number n may be 3 or 4, so as to reduce the shooting time and obtain higher-quality images.
- The imaging quality of the preview image can be measured by, for example, the signal-to-noise ratio and/or the imaging speed, and the imaging quality is generally positively related to the number of captured image frames, that is, the better the imaging quality, the more frames of images are captured.
- The more frames are acquired, the more picture information the high dynamic range image obtained during high-dynamic synthesis contains, and the closer it is to the actual scene. Therefore, the imaging quality has a positive relationship with the number of acquired image frames, and the image frame number n of the reference exposure can be determined according to the imaging quality of the preview image.
- Step 402 Collect n frames of original images that meet the reference exposure.
- n frames of original images meeting the reference exposure are further collected.
- The reference exposure duration of each frame of the original image to be collected can be determined to obtain images with different dynamic ranges, so that the combined image has a higher dynamic range, improving the overall brightness and quality of the image.
- FIG. 5 is a schematic flowchart of the fourth image processing method based on multi-frame images provided by an embodiment of the application. As shown in FIG. 5, step 402 may further include the following sub-steps:
- the reference exposure is determined according to the illuminance of the shooting scene.
- The exposure amount refers to how much light the photosensitive device in the electronic device receives during the exposure time.
- The exposure amount is related to the aperture, the exposure time, and the sensitivity.
- The aperture, that is, the light-passing opening, determines the amount of light passing through per unit time; the exposure time refers to the time during which light passes through the lens; and the sensitivity, also known as the ISO value, is a measure of the sensitivity of the film to light.
- In the embodiments of this application, the exposure amount is related to the exposure time and the sensitivity; for example, it can be the product of the exposure time and the sensitivity.
- In the related art, the reference exposure is defined as the exposure at an exposure compensation level of zero, that is, EV0.
- In the embodiments of this application, the reference exposure refers to the exposure amount determined to match the brightness information of the current environment after the brightness information of the current shooting scene is obtained by metering the preview image.
- The value of the reference exposure can be the product of the reference sensitivity and the reference exposure duration.
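- A worked example of this relation (the numbers are illustrative, not taken from the application): since the reference exposure is the product of the reference sensitivity and the reference exposure duration, the duration follows by division.

```python
def reference_exposure_time(reference_exposure, reference_iso):
    """Reference exposure = reference ISO x reference exposure duration, so the
    duration follows by division. The example numbers below are illustrative."""
    return reference_exposure / reference_iso

# Example: if metering yields a reference exposure of 10 (ISO x seconds) and the
# reference sensitivity is set to ISO 100, each EV0 frame is exposed for 0.1 s.
print(reference_exposure_time(10.0, 100))   # -> 0.1
```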
- a reference sensitivity is set according to the jitter degree of the preview image, or according to the jitter degree of the image sensor that collects the preview image.
- The reference sensitivity can be set according to the jitter degree of the preview image to suit the current jitter degree, or it can be set according to the current jitter degree of the image sensor that collects the preview image; this is not limited here.
- The value range of the reference sensitivity can be 100 ISO to 200 ISO.
- The sensitivity used to capture the images affects the overall shooting time. If the shooting time is too long, the jitter of the image sensor during handheld shooting may increase, thereby affecting image quality. Therefore, the reference sensitivity can be determined according to the screen shake degree of the preview image, or according to the shake degree of the image sensor that collects the preview image, so that the shooting time can be controlled within a proper range.
- In order to determine the degree of jitter, displacement information may be collected by a displacement sensor provided in the electronic device, and the screen jitter degree of the preview image or the jitter degree of the image sensor that collects the preview image may then be determined according to the collected displacement information of the electronic device.
- the current gyro-sensor information of the electronic device can be obtained to determine the current jitter degree of the electronic device, that is, the jitter degree of the image sensor that collects the preview image.
- The gyroscope, also called an angular velocity sensor, can measure the rotational angular velocity when a physical body deflects or tilts.
- the gyroscope can measure the rotation and deflection movements, so as to accurately analyze and judge the actual movements of the user.
- the gyroscope information (gyro information) of the electronic device can include the movement information of the mobile phone in three dimensions in the three-dimensional space.
- the three dimensions of the three-dimensional space can be expressed as the X-axis, Y-axis, and Z-axis directions respectively.
- the X-axis, Y-axis, and Z-axis are in a pairwise vertical relationship.
- the jitter degree of the image sensor collecting the preview image can be determined according to the current gyro information of the electronic device. The greater the absolute value of the gyro movement of the electronic device in the three directions, the greater the jitter of the image sensor that collects the preview image.
- The absolute-value thresholds of the gyro movement in the three directions can be preset, and the current jitter degree of the image sensor that collects the preview image is determined based on the sum of the absolute values of the currently acquired gyro movement in the three directions and the preset thresholds.
- For example, the preset thresholds are the first threshold A, the second threshold B, and the third threshold C, with A < B < C, and the sum of the absolute values of the currently acquired gyro motion in the three directions is S. If S ≤ A, it is determined that the jitter degree of the image sensor collecting the preview image is "no jitter"; if A < S ≤ B, it can be determined that the jitter degree is "slight jitter"; if B < S ≤ C, it can be determined that the jitter degree is "small jitter"; if S > C, it can be determined that the jitter degree is "large jitter".
- the number of thresholds and the specific value of each threshold can be preset according to actual needs, and the mapping relationship between gyro information and the jitter degree of the image sensor that collects the preview image can be preset according to the relationship between gyro information and each threshold.
- If the image sensor that collects the preview image has a small degree of jitter, the reference sensitivity corresponding to each frame of the image to be collected can be appropriately reduced to a smaller value, so as to effectively suppress the noise of each frame and improve the quality of the captured image; if the image sensor has a large degree of jitter, the reference sensitivity corresponding to each frame of the image to be collected can be appropriately increased to a larger value to shorten the shooting time.
- Specifically, if it is determined that the jitter degree of the image sensor collecting the preview image is "no jitter", the reference sensitivity can be set to a smaller value to obtain a higher-quality image, for example, the reference sensitivity is determined to be 100; if the jitter degree is "slight jitter", the reference sensitivity can be set to a larger value to reduce the shooting time, for example, the reference sensitivity is determined to be 120; if the jitter degree is "small jitter", the reference sensitivity can be further increased to reduce the shooting time, for example, the reference sensitivity is determined to be 180; if the jitter degree is "large jitter", it can be determined that the current degree of jitter is too large, and the reference sensitivity can be further increased to reduce the shooting time, for example, the reference sensitivity is determined to be 200.
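- The threshold comparison and the example sensitivity values above can be summarized in a small sketch; the numeric thresholds below are illustrative placeholders for the preset thresholds A, B, and C.

```python
def reference_iso_from_gyro(gyro_xyz, a=0.2, b=0.6, c=1.2):
    """Map the summed absolute gyro motion to a jitter level and a reference ISO.

    gyro_xyz: (x, y, z) angular-velocity readings. The thresholds a < b < c stand in
    for the preset thresholds A, B, C; the ISO values 100/120/180/200 follow the
    example in the description.
    """
    s = sum(abs(v) for v in gyro_xyz)
    if s <= a:
        return "no jitter", 100
    elif s <= b:
        return "slight jitter", 120
    elif s <= c:
        return "small jitter", 180
    else:
        return "large jitter", 200
```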
- the above examples are only exemplary, and should not be regarded as limitations on the application.
- the reference sensitivity can be changed to obtain the optimal solution.
- the mapping relationship between the jitter degree of the image sensor that collects the preview image and the reference sensitivity corresponding to each frame of the image to be collected can be preset according to actual needs.
- the jitter degree of the preview image is positively correlated with the jitter degree of the image sensor that collects the preview image.
- For the implementation process of setting the reference sensitivity according to the jitter degree of the preview image, refer to the above process, which will not be repeated here.
- the reference exposure duration is determined according to the reference exposure amount and the set reference sensitivity.
- The image sensor is then controlled to perform image collection according to the exposure duration and reference sensitivity of each frame of the original image to be collected.
- Step 403 Collect at least one frame of original image below the reference exposure.
- Exposure compensation level is a parameter for adjusting the amount of exposure, so that some images are underexposed, some images are overexposed, and some images can be properly exposed.
- the exposure compensation level corresponding to at least one frame of the second image ranges from EV-5 to EV-1.
- The at least one frame of original image below the reference exposure may be called at least one frame of second image; specifically, it may be two frames of second images.
- the two frames of second images correspond to different exposure compensation levels, and the exposure compensation levels of the two frames of second images are less than EV0.
- Specifically, the reference exposure duration is compensated to obtain compensated exposure durations shorter than the reference exposure duration, and the two frames of second images are collected according to the compensated exposure durations and the reference sensitivity.
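- A small sketch of how the compensated exposure durations could be derived; the application only states that they are shorter than the reference duration, so the per-EV halving convention and the example EV levels are assumptions for illustration.

```python
def compensated_exposure_times(reference_time, ev_levels=(-2, -4)):
    """Shorten the reference exposure duration according to negative EV compensation levels.

    Follows the usual photographic convention that one EV step halves the exposure;
    the 2**ev scaling and the example EV levels are assumptions, not values from the
    application (which only requires the levels to lie between EV-5 and EV-1).
    """
    return [reference_time * (2.0 ** ev) for ev in ev_levels]

# Example: with a 0.1 s reference exposure, EV-2 and EV-4 frames use 0.025 s and 0.00625 s.
print(compensated_exposure_times(0.1))
```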
- In this way, n frames of original images conforming to the reference exposure are collected, and at least one frame of original image below the reference exposure is collected, so that the imaging quality is improved and a clearer imaging effect is obtained.
- FIG. 6 is a schematic structural diagram of the first image processing device based on multi-frame images provided by an embodiment of the application.
- the image processing apparatus 600 based on multi-frame images includes: an acquisition module 610, a noise reduction module 620, a conversion module 630, and a synthesis module 640.
- the obtaining module 610 is used to obtain multiple frames of original images
- the noise reduction module 620 is used for denoising some of the original frames based on artificial intelligence to obtain a first noise-reduced image, and denoising the other original frames based on artificial intelligence to obtain a second noise-reduced image, where the partial original images are at least two frames of the multiple original images;
- the conversion module 630 is configured to convert the first noise-reduction image into a first YUV image, and convert the second noise-reduction image into a second YUV image;
- the synthesis module 640 is configured to synthesize a high dynamic range image according to the first YUV image and the second YUV image.
- the noise reduction module 620 is specifically configured to:
- use the first neural network model to identify the noise characteristics of the initial noise-reduced image, and use the second neural network model to identify the noise characteristics of each frame of the other original images, where the first neural network model has learned the mapping relationship between the sensitivity and the noise characteristics of the initial noise-reduced image, and the second neural network model has learned the mapping relationship between the sensitivity and the noise characteristics of each frame of the original image;
- denoise the initial noise-reduced image according to the noise characteristics identified by the first neural network model to obtain the first noise-reduced image; and
- denoise each frame of the other original images according to the noise characteristics identified by the second neural network model to obtain multiple frames of the second noise-reduced image.
- FIG. 7 is a schematic structural diagram of a second image processing apparatus based on multi-frame images provided by an embodiment of the application, and further includes:
- The training module 650 is used to train the neural network model using sample images of each sensitivity; when the noise characteristics identified by the neural network model match the noise characteristics marked in the corresponding sample images, the training of the neural network model is completed. The neural network model includes the first neural network model and the second neural network model.
- some of the original frames are at least two frames of the first image with the same exposure, and the other frames of original images are at least one frame of the second image with lower exposure than the first image;
- the conversion module 630 is specifically used for:
- performing detail enhancement processing on the first noise-reduced image according to the partial original frames, and converting the processed first noise-reduced image into the first YUV image.
- the obtaining module 610 is specifically configured to:
- determine the image frame number n of the reference exposure according to the imaging quality of the preview image, where n is a natural number greater than or equal to 2;
- the obtaining module 610 is specifically configured to:
- collect n frames of original images that meet the reference exposure, and collect at least one frame of original image below the reference exposure.
- In a possible implementation, the at least one frame of second image is specifically two frames of second images.
- the two frames of second images correspond to different exposure compensation levels, and the exposure compensation levels of the two frames of second images are less than EV0.
- the exposure compensation level corresponding to at least one frame of the second image ranges from EV-5 to EV-1.
- In this way, a high dynamic range image is synthesized, and the picture noise and effective details of the high dynamic range image can be distinguished more accurately, which helps to reduce the number of original image acquisition frames to a certain extent and allows the sensitivity used for acquiring each frame of the original image to be increased to reduce the shooting time. The total time required for the overall shooting process is thus shortened, blurring caused by an overly long shooting time is avoided, and dynamic night scenes can be shot clearly.
- Since the first noise-reduced image is obtained by denoising some of the original frames based on artificial intelligence, the second noise-reduced image is obtained by denoising the other original frames based on artificial intelligence, and the noise characteristics of the partial original images and the other original images are not exactly the same, the noise reduction is more targeted and the noise reduction effect can be effectively improved.
- FIG. 8 is a schematic structural diagram of an electronic device provided by an embodiment of the application, including: an image sensor 210, a processor 220, a memory 230, and a computer program stored in the memory 230 and executable on the processor 220.
- the image sensor 210 is electrically connected to the processor 220.
- When the processor 220 executes the program, the image processing method based on multi-frame images in the above embodiments is implemented.
- the processor 220 may include: an image signal processing ISP processor.
- the ISP processor is used to control the image sensor to obtain multiple frames of original images.
- the processor 220 may further include: a graphics processing unit (Graphics Processing Unit, GPU for short) connected to the ISP processor.
- The GPU is used to denoise some of the original frames based on artificial intelligence to obtain the first noise-reduced image, and to denoise the other original frames based on artificial intelligence to obtain the second noise-reduced image, where the partial original frames are at least two frames of the multiple original images.
- GPU is also used to encode high dynamic range images.
- The ISP processor is also used to convert the first noise-reduced image into a first YUV image and the second noise-reduced image into a second YUV image, and to synthesize a high dynamic range image according to the first YUV image and the second YUV image.
- FIG. 9 is a schematic diagram of an example of an electronic device provided in an embodiment of the application.
- The memory 230 of the electronic device 200 includes a non-volatile memory 80 and an internal memory 82, and the electronic device 200 further includes a processor 220.
- Computer readable instructions are stored in the memory 230.
- When the computer-readable instructions are executed by the processor 220, the processor 220 is caused to execute the image processing method based on multi-frame images in any of the foregoing embodiments.
- the electronic device 200 includes a processor 220, a non-volatile memory 80, an internal memory 82, a display screen 83 and an input device 84 connected through a system bus 81.
- the non-volatile memory 80 of the electronic device 200 stores an operating system and computer readable instructions.
- the computer-readable instructions may be executed by the processor 220 to implement the image processing method based on multi-frame images in the embodiments of the present application.
- the processor 220 is used to provide calculation and control capabilities, and support the operation of the entire electronic device 200.
- the internal memory 82 of the electronic device 200 provides an environment for the operation of computer readable instructions in the non-volatile memory 80.
- the specific electronic device 200 may include more or fewer components than shown in the figure, or combine certain components, or have a different component arrangement.
- the ISP processor is electrically connected to the image sensor and is used to control the image sensor to obtain multiple frames of original images
- The GPU, which is electrically connected to the ISP processor, is used to denoise some of the original frames based on artificial intelligence to obtain the first noise-reduced image, and to denoise the other original frames based on artificial intelligence to obtain the second noise-reduced image, where the partial original frames are at least two frames of the multiple original images.
- The ISP processor is also used to convert the first noise-reduced image into a first YUV image and the second noise-reduced image into a second YUV image, and to synthesize a high dynamic range image according to the first YUV image and the second YUV image.
- the image data captured by the camera 93 is first processed by the ISP processor 91, and the ISP processor 91 analyzes the image data to capture image statistical information that can be used to determine one or more control parameters of the camera 93.
- the camera module 310 may include one or more lenses 932 and an image sensor 934.
- the image sensor 934 may include a color filter array (such as a Bayer filter), and the image sensor 934 may obtain the light intensity and wavelength information captured by each imaging pixel, and provide a set of raw image data that can be processed by the ISP processor 91.
- the sensor 94 (such as a gyroscope) can provide the collected image processing parameters (such as anti-shake parameters) to the ISP processor 91 based on the interface type of the sensor 94.
- The sensor 94 interface may be an SMIA (Standard Mobile Imaging Architecture) interface, other serial or parallel camera interfaces, or a combination of the foregoing interfaces.
- the image sensor 934 may also send raw image data to the sensor 94, and the sensor 94 may provide the raw image data to the ISP processor 91 based on the interface type of the sensor 94, or the sensor 94 may store the raw image data in the image memory 95.
- the ISP processor 91 may also receive image data from the image memory 95.
- the sensor 94 interface sends the original image data to the image memory 95, and the original image data in the image memory 95 is then provided to the ISP processor 91 for processing.
- the image memory 95 may be the memory 330, a part of the memory 330, a storage device, or an independent dedicated memory in an electronic device, and may include DMA (Direct Memory Access) features.
- the output of the ISP processor 91 can also be sent to the image memory 95, and the display 97 can read image data from the image memory 95.
- the statistical data determined by the ISP processor 91 may be sent to the control logic 92 unit.
- the statistical data may include image sensor 934 statistical information such as automatic exposure, automatic white balance, automatic focus, flicker detection, black level compensation, and lens 932 shading correction.
- The control logic 92 may include processing elements and/or microcontrollers that execute one or more routines (such as firmware), and the one or more routines can determine the control parameters of the camera 93 and the control parameters of the ISP processor 91 based on the received statistical data.
- The control parameters of the camera 93 may include sensor 94 control parameters (such as gain, integration time for exposure control, anti-shake parameters, etc.), camera flash control parameters, lens 932 control parameters (such as focus or zoom focal length), or a combination of these parameters.
- the ISP control parameters may include gain levels and color correction matrices for automatic white balance and color adjustment (for example, during RGB processing), and lens 932 shading correction parameters.
- In this embodiment, the ISP processor controls the image sensor to obtain multiple frames of original images; the GPU denoises some of the original images based on artificial intelligence to obtain the first noise-reduced image, and denoises the other original images based on artificial intelligence to obtain the second noise-reduced image, where the partial original images are at least two frames of the multi-frame original images; the ISP processor is also used to convert the first noise-reduced image into a first YUV image and the second noise-reduced image into a second YUV image, and a high dynamic range image is synthesized according to the first YUV image and the second YUV image.
- the embodiments of the present application also provide a storage medium.
- When the computer program stored in the storage medium is executed by a processor, the processor is caused to perform the following steps: obtain multiple frames of original images; denoise some of the original images based on artificial intelligence to obtain a first noise-reduced image, and denoise the other original images based on artificial intelligence to obtain a second noise-reduced image, where the partial original images are at least two frames of the multi-frame original images; convert the first noise-reduced image into a first YUV image, and convert the second noise-reduced image into a second YUV image; and synthesize a high dynamic range image according to the first YUV image and the second YUV image.
- the program can be stored in a non-volatile computer readable storage medium.
- the storage medium may be a magnetic disk, an optical disc, a read-only memory (Read-Only Memory, ROM), etc.
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Computing Systems (AREA)
- Theoretical Computer Science (AREA)
- Image Processing (AREA)
- Studio Devices (AREA)
Abstract
The present invention relates to an image processing method and apparatus based on multiple frames of images, and an electronic device. The method comprises the steps of: obtaining multiple frames of original images; performing artificial intelligence noise reduction on some of the original image frames to obtain first noise-reduced images, and performing artificial intelligence noise reduction on other original image frames to obtain second noise-reduced images, the partial original image frames being at least two frames of the multiple frames of original images; converting the first noise-reduced images into first YUV images, and converting the second noise-reduced images into second YUV images; and performing synthesis according to the first YUV images and the second YUV images to obtain a high dynamic range image. By means of the present invention, the picture noise and effective details of the high dynamic range image can be distinguished more accurately, the number of original image acquisition frames is easier to reduce, the total time required for the overall photographing process is shortened, image blurring caused by an overly long photographing time is avoided, and clear photographing of dynamic night scenes is facilitated.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910280172.1 | 2019-04-09 | ||
CN201910280172.1A CN110072052B (zh) | 2019-04-09 | 2019-04-09 | 基于多帧图像的图像处理方法、装置、电子设备 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2020207262A1 true WO2020207262A1 (fr) | 2020-10-15 |
Family
ID=67367208
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2020/081471 WO2020207262A1 (fr) | 2019-04-09 | 2020-03-26 | Procédé et appareil de traitement d'images basés sur de multiples trames d'images, et dispositif électronique |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN110072052B (fr) |
WO (1) | WO2020207262A1 (fr) |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112669241A (zh) * | 2021-01-29 | 2021-04-16 | 成都国科微电子有限公司 | 一种图像处理方法、装置、设备及介质 |
CN112950501A (zh) * | 2021-02-26 | 2021-06-11 | 平安科技(深圳)有限公司 | 基于噪声场的图像降噪方法、装置、设备及存储介质 |
CN113674232A (zh) * | 2021-08-12 | 2021-11-19 | Oppo广东移动通信有限公司 | 图像噪声预估方法、装置、电子设备和存储介质 |
CN115460343A (zh) * | 2022-07-31 | 2022-12-09 | 荣耀终端有限公司 | 图像处理方法、设备及存储介质 |
CN116245962A (zh) * | 2023-03-16 | 2023-06-09 | 祝晓鹏 | 用于无线传输到区块链服务器的数据提取系统及方法 |
WO2024088163A1 (fr) * | 2022-10-24 | 2024-05-02 | 维沃移动通信有限公司 | Procédé et circuit de traitement d'image, dispositif et support |
Families Citing this family (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110072052B (zh) * | 2019-04-09 | 2021-08-27 | Oppo广东移动通信有限公司 | 基于多帧图像的图像处理方法、装置、电子设备 |
CN112529775A (zh) * | 2019-09-18 | 2021-03-19 | 华为技术有限公司 | 一种图像处理的方法和装置 |
CN110611750B (zh) * | 2019-10-31 | 2022-03-22 | 北京迈格威科技有限公司 | 一种夜景高动态范围图像生成方法、装置和电子设备 |
CN111242860B (zh) * | 2020-01-07 | 2024-02-27 | 影石创新科技股份有限公司 | 超级夜景图像的生成方法、装置、电子设备及存储介质 |
CN113744119A (zh) * | 2020-05-29 | 2021-12-03 | Oppo广东移动通信有限公司 | 多媒体处理芯片和电子设备 |
CN113744120A (zh) * | 2020-05-29 | 2021-12-03 | Oppo广东移动通信有限公司 | 多媒体处理芯片、电子设备和图像处理方法 |
CN111885312B (zh) * | 2020-07-27 | 2021-07-09 | 展讯通信(上海)有限公司 | Hdr图像的成像方法、系统、电子设备及存储介质 |
CN112950503A (zh) * | 2021-02-26 | 2021-06-11 | 北京小米松果电子有限公司 | 训练样本的生成方法及装置、真值图像的生成方法及装置 |
CN114511112B (zh) * | 2022-01-24 | 2024-07-19 | 北京通建泰利特智能系统工程技术有限公司 | 一种基于物联网的智慧运维方法、系统和可读存储介质 |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20100310190A1 (en) * | 2009-06-09 | 2010-12-09 | Aptina Imaging Corporation | Systems and methods for noise reduction in high dynamic range imaging |
CN105827971A (zh) * | 2016-03-31 | 2016-08-03 | 维沃移动通信有限公司 | 一种图像处理方法及移动终端 |
CN106416225A (zh) * | 2014-05-30 | 2017-02-15 | 通用电气公司 | 远程视觉检查图像捕获系统和方法 |
CN107635098A (zh) * | 2017-10-30 | 2018-01-26 | 广东欧珀移动通信有限公司 | 高动态范围图像噪声去除方法、装置及设备 |
CN108280811A (zh) * | 2018-01-23 | 2018-07-13 | 哈尔滨工业大学深圳研究生院 | 一种基于神经网络的图像去噪方法和系统 |
CN109005366A (zh) * | 2018-08-22 | 2018-12-14 | Oppo广东移动通信有限公司 | 摄像模组夜景摄像处理方法、装置、电子设备及存储介质 |
CN109218613A (zh) * | 2018-09-18 | 2019-01-15 | Oppo广东移动通信有限公司 | 高动态范围图像合成方法、装置、终端设备和存储介质 |
CN110072052A (zh) * | 2019-04-09 | 2019-07-30 | Oppo广东移动通信有限公司 | 基于多帧图像的图像处理方法、装置、电子设备 |
Family Cites Families (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9083935B2 (en) * | 2012-06-15 | 2015-07-14 | Microsoft Technology Licensing, Llc | Combining multiple images in bracketed photography |
CN103051841B (zh) * | 2013-01-05 | 2016-07-06 | 小米科技有限责任公司 | 曝光时间的控制方法及装置 |
CN107566739B (zh) * | 2017-10-18 | 2019-12-06 | 维沃移动通信有限公司 | 一种拍照方法及移动终端 |
CN108012080B (zh) * | 2017-12-04 | 2020-02-04 | Oppo广东移动通信有限公司 | 图像处理方法、装置、电子设备及计算机可读存储介质 |
CN107864341A (zh) * | 2017-12-29 | 2018-03-30 | Tcl移动通信科技(宁波)有限公司 | 一种降帧率拍照方法、移动终端及存储介质 |
CN108322646B (zh) * | 2018-01-31 | 2020-04-10 | Oppo广东移动通信有限公司 | 图像处理方法、装置、存储介质及电子设备 |
CN108737750A (zh) * | 2018-06-07 | 2018-11-02 | 北京旷视科技有限公司 | 图像处理方法、装置及电子设备 |
CN108924420B (zh) * | 2018-07-10 | 2020-08-04 | Oppo广东移动通信有限公司 | 图像拍摄方法、装置、介质、电子设备及模型训练方法 |
CN108989700B (zh) * | 2018-08-13 | 2020-05-15 | Oppo广东移动通信有限公司 | 成像控制方法、装置、电子设备以及计算机可读存储介质 |
CN109194882B (zh) * | 2018-08-22 | 2020-07-31 | Oppo广东移动通信有限公司 | 图像处理方法、装置、电子设备及存储介质 |
CN108900782B (zh) * | 2018-08-22 | 2020-01-24 | Oppo广东移动通信有限公司 | 曝光控制方法、装置以及电子设备 |
CN109218628B (zh) * | 2018-09-20 | 2020-12-08 | Oppo广东移动通信有限公司 | 图像处理方法、装置、电子设备及存储介质 |
CN109089046B (zh) * | 2018-09-25 | 2021-06-04 | Oppo广东移动通信有限公司 | 图像降噪方法、装置、计算机可读存储介质及电子设备 |
CN109360163A (zh) * | 2018-09-26 | 2019-02-19 | 深圳积木易搭科技技术有限公司 | 一种高动态范围图像的融合方法及融合系统 |
CN109040603A (zh) * | 2018-10-15 | 2018-12-18 | Oppo广东移动通信有限公司 | 高动态范围图像获取方法、装置及移动终端 |
CN109361853B (zh) * | 2018-10-22 | 2021-03-23 | Oppo广东移动通信有限公司 | 图像处理方法、装置、电子设备及存储介质 |
-
2019
- 2019-04-09 CN CN201910280172.1A patent/CN110072052B/zh active Active
-
2020
- 2020-03-26 WO PCT/CN2020/081471 patent/WO2020207262A1/fr active Application Filing
Patent Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20100310190A1 (en) * | 2009-06-09 | 2010-12-09 | Aptina Imaging Corporation | Systems and methods for noise reduction in high dynamic range imaging |
CN106416225A (zh) * | 2014-05-30 | 2017-02-15 | 通用电气公司 | 远程视觉检查图像捕获系统和方法 |
CN105827971A (zh) * | 2016-03-31 | 2016-08-03 | 维沃移动通信有限公司 | 一种图像处理方法及移动终端 |
CN107635098A (zh) * | 2017-10-30 | 2018-01-26 | 广东欧珀移动通信有限公司 | 高动态范围图像噪声去除方法、装置及设备 |
CN108280811A (zh) * | 2018-01-23 | 2018-07-13 | 哈尔滨工业大学深圳研究生院 | 一种基于神经网络的图像去噪方法和系统 |
CN109005366A (zh) * | 2018-08-22 | 2018-12-14 | Oppo广东移动通信有限公司 | 摄像模组夜景摄像处理方法、装置、电子设备及存储介质 |
CN109218613A (zh) * | 2018-09-18 | 2019-01-15 | Oppo广东移动通信有限公司 | 高动态范围图像合成方法、装置、终端设备和存储介质 |
CN110072052A (zh) * | 2019-04-09 | 2019-07-30 | Oppo广东移动通信有限公司 | 基于多帧图像的图像处理方法、装置、电子设备 |
Cited By (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112669241A (zh) * | 2021-01-29 | 2021-04-16 | 成都国科微电子有限公司 | 一种图像处理方法、装置、设备及介质 |
CN112669241B (zh) * | 2021-01-29 | 2023-11-14 | 成都国科微电子有限公司 | 一种图像处理方法、装置、设备及介质 |
CN112950501A (zh) * | 2021-02-26 | 2021-06-11 | 平安科技(深圳)有限公司 | 基于噪声场的图像降噪方法、装置、设备及存储介质 |
CN112950501B (zh) * | 2021-02-26 | 2023-10-13 | 平安科技(深圳)有限公司 | 基于噪声场的图像降噪方法、装置、设备及存储介质 |
CN113674232A (zh) * | 2021-08-12 | 2021-11-19 | Oppo广东移动通信有限公司 | 图像噪声预估方法、装置、电子设备和存储介质 |
CN115460343A (zh) * | 2022-07-31 | 2022-12-09 | 荣耀终端有限公司 | 图像处理方法、设备及存储介质 |
CN115460343B (zh) * | 2022-07-31 | 2023-06-13 | 荣耀终端有限公司 | 图像处理方法、设备及存储介质 |
WO2024088163A1 (fr) * | 2022-10-24 | 2024-05-02 | 维沃移动通信有限公司 | Procédé et circuit de traitement d'image, dispositif et support |
CN116245962A (zh) * | 2023-03-16 | 2023-06-09 | 祝晓鹏 | 用于无线传输到区块链服务器的数据提取系统及方法 |
CN116245962B (zh) * | 2023-03-16 | 2023-12-22 | 新疆量子通信技术有限公司 | 用于无线传输到区块链服务器的数据提取系统及方法 |
Also Published As
Publication number | Publication date |
---|---|
CN110072052A (zh) | 2019-07-30 |
CN110072052B (zh) | 2021-08-27 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2020207262A1 (fr) | Procédé et appareil de traitement d'images basés sur de multiples trames d'images, et dispositif électronique | |
CN110072051B (zh) | 基于多帧图像的图像处理方法和装置 | |
WO2020207261A1 (fr) | Procédé et appareil de traitement d'images basés sur de multiples trames d'images, et dispositif électronique | |
CN110062160B (zh) | 图像处理方法和装置 | |
CN110166708B (zh) | 夜景图像处理方法、装置、电子设备以及存储介质 | |
CN110290289B (zh) | 图像降噪方法、装置、电子设备以及存储介质 | |
CN109005366B (zh) | 摄像模组夜景摄像处理方法、装置、电子设备及存储介质 | |
CN110191291B (zh) | 基于多帧图像的图像处理方法和装置 | |
CN108900782B (zh) | 曝光控制方法、装置以及电子设备 | |
CN109040609B (zh) | 曝光控制方法、装置、电子设备和计算机可读存储介质 | |
CN109068067B (zh) | 曝光控制方法、装置和电子设备 | |
CN110248106B (zh) | 图像降噪方法、装置、电子设备以及存储介质 | |
CN109788207B (zh) | 图像合成方法、装置、电子设备及可读存储介质 | |
CN110166707B (zh) | 图像处理方法、装置、电子设备以及存储介质 | |
WO2020034737A1 (fr) | Procédé de commande d'imagerie, appareil, dispositif électronique et support d'informations lisible par ordinateur | |
CN110264420B (zh) | 基于多帧图像的图像处理方法和装置 | |
CN110166709B (zh) | 夜景图像处理方法、装置、电子设备以及存储介质 | |
CN110166706B (zh) | 图像处理方法、装置、电子设备以及存储介质 | |
CN109151333B (zh) | 曝光控制方法、装置以及电子设备 | |
CN110166711B (zh) | 图像处理方法、装置、电子设备以及存储介质 | |
CN109756680B (zh) | 图像合成方法、装置、电子设备及可读存储介质 | |
CN110276730B (zh) | 图像处理方法、装置、电子设备 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 20788266; Country of ref document: EP; Kind code of ref document: A1 |
| NENP | Non-entry into the national phase | Ref country code: DE |
| 122 | Ep: pct application non-entry in european phase | Ref document number: 20788266; Country of ref document: EP; Kind code of ref document: A1 |