CN110443766B - Image processing method and device, electronic equipment and readable storage medium

Info

Publication number
CN110443766B
Authority
CN
China
Prior art keywords
exposure
image
night scene
night
frame
Prior art date
Legal status
Active
Application number
CN201910720898.2A
Other languages
Chinese (zh)
Other versions
CN110443766A (en)
Inventor
陈铭津
李骈臻
张伟
张长定
陈星
Current Assignee
Xiamen Meitu Technology Co Ltd
Original Assignee
Xiamen Meitu Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Xiamen Meitu Technology Co Ltd filed Critical Xiamen Meitu Technology Co Ltd
Priority to CN201910720898.2A priority Critical patent/CN110443766B/en
Publication of CN110443766A publication Critical patent/CN110443766A/en
Application granted granted Critical
Publication of CN110443766B publication Critical patent/CN110443766B/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/70 Denoising; Smoothing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10016 Video; Image sequence

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Studio Devices (AREA)
  • Image Processing (AREA)

Abstract

The embodiment of the application provides an image processing method, an image processing device, an electronic device, and a readable storage medium. Considering that the over-exposed and over-dark regions of an image differ from scene to scene, a frame of first night scene image is obtained first, and the exposure levels of the images to be captured, together with the number of frames to be captured at each exposure level, are dynamically determined from the over-exposed and over-dark regions of that first image. The determined number of frames is then captured at each exposure level, and multi-frame noise reduction is applied separately to the frames captured at each exposure level. This avoids a poor multi-frame noise-reduction result caused by too few frames as well as wasted resources caused by too many frames, and improves the night scene shooting effect.

Description

Image processing method and device, electronic equipment and readable storage medium
Technical Field
The present application relates to the field of image processing technologies, and in particular, to an image processing method and apparatus based on a night view image, an electronic device, and a readable storage medium.
Background
The photographing function is an important feature of today's mobile electronic devices (such as smartphones, tablet computers, and smart wearable devices). Although the photographic performance of an ordinary mobile electronic device is far below that of a professional single-lens reflex camera, with image processing algorithms and some shooting technique it can still produce impressive photos, and its image quality meets the daily needs of most users.
At present, most electronic products show little difference in imaging quality in daylight or under good lighting conditions, but in a low-light scene (for example, at night) the imaging quality is limited by the device's camera hardware and a good shooting result is hard to achieve. In a dimly lit scene such as nighttime, brightening the picture tends to amplify image noise, which lowers the quality of the night scene image and seriously degrades the photographing experience.
Although many image denoising algorithms exist, most of them apply multi-frame noise reduction to the captured images with a fixed, preset number of frames. When the noise level is high, the fixed number of frames may be too few to achieve a good denoising result; when the noise level is low, it may exceed what is actually needed, which wastes computational resources and can also make the denoised image blurrier.
Disclosure of Invention
In view of this, an object of the present application is to provide an image processing method, an image processing apparatus, an electronic device, and a readable storage medium that can dynamically determine, for different scenes, the exposure levels and the number of frames to be captured at each exposure level, thereby avoiding a poor multi-frame noise-reduction result caused by too few frames or wasted computational resources caused by too many frames, and improving the night scene shooting effect.
According to an aspect of the present application, there is provided an image processing method applied to an electronic device, the method including:
acquiring a frame of first night scene image, and determining the exposure of the image to be acquired and the number of frames to be acquired corresponding to each exposure according to the over-exposed area and the over-dark area of the first night scene image;
acquiring a plurality of frames of second night scene images at each exposure according to the exposure of the images to be acquired and the number of frames to be acquired corresponding to each exposure;
respectively carrying out multi-frame noise reduction processing on the multi-frame second night scene image corresponding to each exposure degree to obtain a third night scene image corresponding to each exposure degree;
and carrying out image processing on the third night scene image corresponding to each exposure degree to obtain a target night scene image.
According to another aspect of the present application, there is also provided an image processing apparatus applied to an electronic device, the apparatus including:
the acquisition determining module is used for acquiring a frame of first night scene image and determining the exposure of the image to be acquired and the number of frames to be acquired corresponding to each exposure according to the over-exposed area and the over-dark area of the first night scene image;
the image acquisition module is used for acquiring a plurality of frames of second night scene images at each exposure degree according to the exposure degree of the images to be acquired and the number of frames to be acquired corresponding to each exposure degree;
the noise reduction processing module is used for respectively carrying out multi-frame noise reduction processing on the multi-frame second night scene images corresponding to each exposure degree to obtain third night scene images corresponding to each exposure degree;
and the image processing module is used for carrying out image processing on the third night scene image corresponding to each exposure degree to obtain a target night scene image.
According to another aspect of the present application, there is also provided an electronic device, including a machine-readable storage medium and a processor, where the machine-readable storage medium stores machine-executable instructions, and the processor, when executing the machine-executable instructions, implements the foregoing image processing method.
According to another aspect of the present application, there is also provided a readable storage medium having stored therein machine executable instructions which, when executed, implement the aforementioned image processing method.
Based on any one of the above aspects, the present application considers that the over-exposed and over-dark regions of an image differ from scene to scene. A frame of first night scene image is therefore obtained first, and the exposure levels of the images to be captured, together with the number of frames to be captured at each exposure level, are dynamically determined from the over-exposed and over-dark regions of that first image. The determined number of frames is then captured at each exposure level, and multi-frame noise reduction is applied separately to the frames captured at each exposure level. This avoids a poor multi-frame noise-reduction result caused by too few frames as well as wasted computational resources caused by too many frames, and improves the night scene shooting effect.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are required to be used in the embodiments will be briefly described below, it should be understood that the following drawings only illustrate some embodiments of the present application and therefore should not be considered as limiting the scope, and it will be apparent to those skilled in the art that other related drawings can be obtained from the drawings without inventive effort.
FIG. 1 is a flow chart of an image processing method provided by an embodiment of the present application;
FIG. 2 is a sub-flowchart of step S110 in one embodiment shown in FIG. 1;
FIG. 3 shows a schematic view of an example of a provided acquisition sequence of a second night view image;
FIG. 4 is a sub-flowchart of step S140 in one embodiment shown in FIG. 1;
FIG. 5 is a schematic diagram showing the comparison between the feature point screening results of the embodiment of the present application and the feature point screening results of the common scheme;
fig. 6 is a second flowchart of an image processing method according to an embodiment of the present application;
fig. 7 is a schematic diagram illustrating functional blocks of an image processing apparatus according to an embodiment of the present application;
fig. 8 shows a schematic block diagram of a structure of an electronic device for implementing the image processing method according to an embodiment of the present application.
Detailed Description
In order to make the purpose, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it should be understood that the drawings in the present application are only for illustration and description purposes and are not used to limit the protection scope of the present application. Additionally, it should be understood that the schematic drawings are not necessarily drawn to scale. The flowcharts used in this application illustrate operations implemented according to some of the embodiments of the present application. It should be understood that the operations of the flow diagrams may be performed out of order, and steps without logical context may be performed in reverse order or simultaneously. One skilled in the art, under the guidance of this application, may add one or more other operations to, or remove one or more operations from, the flowchart.
In addition, the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. The components of the embodiments of the present application, generally described and illustrated in the figures herein, can be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present application, presented in the accompanying drawings, is not intended to limit the scope of the claimed application, but is merely representative of selected embodiments of the application. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the present application without making any creative effort, shall fall within the protection scope of the present application.
Fig. 1 shows a schematic flowchart of an image processing method provided in an embodiment of the present application. It should be understood that in other embodiments, the order of some steps in the image processing method of the present embodiment may be interchanged according to actual needs, or some steps may be omitted or deleted. The detailed steps of the image processing method are described below.
Step S110, a frame of first night scene image is obtained, and according to the over-exposure area and the over-dark area of the first night scene image, the exposure of the image to be collected and the number of frames to be collected corresponding to each exposure are determined.
In this embodiment, the first night scene image may be a previously stored night scene image captured in the past, or may be a night scene image captured of the current night scene, which is not limited in this embodiment.
And step S120, acquiring a plurality of frames of second night scene images at each exposure according to the exposure of the images to be acquired and the number of frames to be acquired corresponding to each exposure.
And step S130, respectively carrying out multi-frame noise reduction processing on the multi-frame second night scene images corresponding to each exposure degree to obtain third night scene images corresponding to each exposure degree.
Step S140, the third night scene image corresponding to each exposure degree is processed to obtain the target night scene image.
Compared with the prior-art approach of performing multi-frame noise reduction on captured images with a fixed, preset frame number, this embodiment accounts for the fact that the over-exposed and over-dark regions of an image differ from scene to scene. A frame of first night scene image is obtained first; the exposure levels of the images to be captured and the number of frames to be captured at each exposure level are dynamically determined from the over-exposed and over-dark regions of the first night scene image; the determined number of frames is then captured at each exposure level; and multi-frame noise reduction is applied separately to the frames captured at each exposure level. This avoids a poor multi-frame noise-reduction result caused by too few frames or wasted computing resources caused by too many frames, and improves the shooting effect of the night scene image.
In a possible implementation manner, referring to fig. 2, step S110 may be implemented through the following sub-steps S111 to S114, which are described in detail below.
In the sub-step S111, an overexposed region and an over-dark region are identified from the first night scene image.
As an example, a gray level histogram of the first night view image may be acquired first. That is, the color first night-scene image is converted into a gray-scale format night-scene image to remove the color information of the first night-scene image, and the brightness information of the first night-scene image is expressed in gray scale. Each pixel in the colored first night view image occupies 3 bytes, and after the first night view image is converted into the gray scale, each pixel occupies one byte, and the gray scale value of each pixel can represent the brightness of the pixel in the first night view image.
Then, according to the gray value of each pixel point in the gray histogram, the pixel points whose gray value is greater than a first gray threshold are determined as first pixel points, and the pixel points whose gray value is smaller than a second gray threshold are determined as second pixel points. For example, pixel points with a gray value greater than 230 may be taken as first pixel points, and pixel points with a gray value smaller than 25 as second pixel points. On this basis, the region comprising all the first pixel points can be determined as the overexposed region, and the region comprising all the second pixel points as the over-dark region.
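As a concrete illustration of this thresholding step, the sketch below (Python with OpenCV and NumPy, neither of which the patent prescribes) converts a frame to grayscale and builds the two pixel masks. The thresholds 230 and 25 are the example values given above; the function name is hypothetical.

```python
import cv2
import numpy as np

def find_exposure_regions(bgr_image, hi_thresh=230, lo_thresh=25):
    # Drop the color information; each pixel becomes one byte of luminance.
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    overexposed_mask = gray > hi_thresh   # "first pixel points"
    over_dark_mask = gray < lo_thresh     # "second pixel points"
    return overexposed_mask, over_dark_mask
```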
In the sub-step S112, a first proportion of the overexposed region in the first night view image and a second proportion of the overexposed region in the first night view image are calculated.
And a substep S113 of determining the exposure of the image to be acquired according to the first proportion and the second proportion.
For example, as an alternative embodiment, the exposure levels of the images to be captured may include a first exposure level, namely that of the first night scene image, and the other exposure levels of the images to be captured are then determined based on the first proportion and the second proportion.
In detail, if the first ratio is smaller than the first set value and the second ratio is smaller than the second set value, the exposure of the image to be collected is determined to be the first exposure of the first night view image.
And if the first ratio is larger than the third set value, determining a second exposure according to the first exposure and the difference between the first ratio and the third set value, and determining that the exposure of the image to be acquired also comprises the second exposure. For example, a coefficient corresponding to a difference between the first ratio and the third setting value may be set in advance, and then the product of the coefficient and the first exposure may be used as the second exposure.
And if the second proportion is larger than a fourth set value, determining a third exposure according to the first exposure and the difference between the second proportion and the fourth set value, and determining that the exposure of the image to be acquired also comprises the third exposure. For example, a coefficient corresponding to a difference between the second ratio and the fourth setting value may be set in advance, and then the product of the coefficient and the first exposure may be used as the third exposure.
In this way, if the over-dark regions are too large, brighter image frames can be captured later for fusion; and if the overexposed regions are too large, slightly darker image frames can be captured later to restore the image information of the overexposed regions.
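A minimal sketch of sub-steps S112 and S113 under these rules follows. The set values and the coefficients are placeholders; the patent only states that a coefficient keyed to each difference is preset, and that the second and third exposures are the product of that coefficient and the first exposure.

```python
def determine_exposures(over_mask, dark_mask, base_exposure,
                        third_set=0.10, fourth_set=0.10):
    total = over_mask.size
    first_ratio = over_mask.sum() / total    # overexposed share (S112)
    second_ratio = dark_mask.sum() / total   # over-dark share (S112)

    # When both ratios stay small, only the first exposure is used.
    exposures = [base_exposure]
    if first_ratio > third_set:
        # Too much overexposure: add a darker second exposure. 0.25 stands
        # in for the preset coefficient keyed to (first_ratio - third_set).
        exposures.append(0.25 * base_exposure)
    if second_ratio > fourth_set:
        # Too much darkness: add a brighter third exposure; 4.0 is likewise
        # a placeholder coefficient.
        exposures.append(4.0 * base_exposure)
    return exposures
```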
In the sub-step S114, the number of frames to be acquired corresponding to each exposure level is determined according to the sensitivity of the first night scene image, each determined exposure level, and a preset frame-number correspondence.
In detail, after the exposure levels of the images to be captured are determined, the number of frames to be captured at each exposure level can be determined further. For example, a frame-number correspondence may be preset that records the frame number for each combination of sensitivity and exposure level, that is, each frame number to be captured corresponds to one sensitivity and one exposure level. The frame numbers for the different exposure levels at a given sensitivity can then be looked up using the sensitivity of the first night scene image, and the number of frames to be captured is obtained for each determined exposure level.
Therefore, the situation that the multi-frame noise reduction effect is poor due to insufficient frame number or the calculation resource is wasted due to excessive frame number can be effectively avoided.
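An illustrative shape for that lookup table is sketched below; every entry pairs one sensitivity bucket with one exposure tier, exactly one frame count per pair. All ISO buckets and counts here are invented for illustration.

```python
# (iso_bucket, exposure_tier) -> number of frames to capture
FRAME_TABLE = {
    (800,  "base"): 3,  (800,  "darker"): 5,  (800,  "brighter"): 4,
    (3200, "base"): 6,  (3200, "darker"): 8,  (3200, "brighter"): 6,
}

def frames_to_capture(iso, tier, table=FRAME_TABLE):
    # Snap the measured ISO of the first night scene image to the nearest
    # bucket present in the table, then read off the frame count.
    bucket = min({k[0] for k in table}, key=lambda b: abs(b - iso))
    return table[(bucket, tier)]
```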
After the exposure levels of the images to be captured and the number of frames to be captured at each exposure level are determined, several frames of second night scene images are captured at each exposure level. During research, the inventors of the present application found that if the frames for the different exposure levels are simply captured in blocks, one exposure level after another, then, because every two adjacent frames deviate slightly in image content, ghosting or blur distortion can appear in the denoised images when multi-frame noise reduction is later applied to the frames of each exposure level.
Based on this, in one possible implementation of step S120, assume the exposure levels of the images to be captured include a first exposure level (that of the first night scene image), a second exposure level smaller than the first, and a third exposure level larger than the first. When acquiring the second night scene images, the frames at the first exposure level are captured first, then one frame at the second exposure level and one frame at the third exposure level. After that, the remaining frames at the second exposure level are captured, followed by the remaining frames at the third exposure level, yielding the multiple frames of second night scene images for each exposure level.
For example, assume the first exposure level of the first night scene image is EV0, the second exposure level is EV-2, and the third exposure level is EV+3, with 3, 5, and 4 frames to be captured respectively. Referring to fig. 3, 3 frames are first captured at EV0, then 1 frame at EV-2 and 1 frame at EV+3. The remaining 4 frames at EV-2 are then captured, followed by the remaining 3 frames at EV+3, giving 3 frames at EV0, 5 frames at EV-2, and 4 frames at EV+3, that is, twelve frames of second night scene images in total.
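The interleaved order of this example can be written down as a small schedule builder (illustrative only; the patent describes the order in prose):

```python
def capture_schedule(counts):
    # counts maps exposure labels to frame totals, base exposure first,
    # e.g. {"EV0": 3, "EV-2": 5, "EV+3": 4} (dict insertion order is kept).
    base, *others = counts
    schedule = [base] * counts[base]             # all base frames first
    schedule += list(others)                     # one reference frame each
    for ev in others:                            # then the remaining frames
        schedule += [ev] * (counts[ev] - 1)
    return schedule

print(capture_schedule({"EV0": 3, "EV-2": 5, "EV+3": 4}))
# ['EV0', 'EV0', 'EV0', 'EV-2', 'EV+3',
#  'EV-2', 'EV-2', 'EV-2', 'EV-2', 'EV+3', 'EV+3', 'EV+3']
```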
With this scheme, since every two adjacent frames deviate slightly in image content, capturing one frame at the second exposure level and one frame at the third exposure level immediately after the first-exposure frames makes the reference frames of the second and third exposure levels closer in content to the reference frame of the first exposure level during the subsequent multi-frame denoising. This eases the subsequent image alignment and effectively mitigates ghosting and blur distortion in the images after multi-frame noise reduction.
Further, the inventors of the present application found that although the second and third exposure levels are determined dynamically from the over-exposed and over-dark regions of the first night scene image, the frames captured at some exposure level may still contribute little to the subsequent dynamic-range improvement. Running multi-frame noise reduction on the frames of every determined exposure level regardless would add redundant computation, lengthen the whole night scene shooting process, and degrade the shooting experience.
Based on this, in a possible implementation manner, before the multi-frame noise reduction is performed, this embodiment may further judge, for each exposure level, whether the over-exposed and over-dark regions of the first-frame second night scene image captured at that exposure level satisfy a preset condition. For example, the preset condition may include: the difference between the pixel counts of the over-exposed and over-dark regions of the first-frame second night scene image and the pixel counts of the over-exposed and over-dark regions of the first night scene image is smaller than a preset pixel-count threshold.
If the first-frame second night scene image does not satisfy the preset condition, the multiple frames of second night scene images at that exposure level are marked as excluded from multi-frame noise reduction, so that no multi-frame noise reduction is performed on the second night scene images of that exposure level.
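A hedged sketch of that test is given below; it assumes the condition compares the over-exposed/over-dark pixel counts of the exposure's first frame against those of the first night scene image, and the threshold value is a placeholder.

```python
def satisfies_preset_condition(first_image_masks, frame_masks,
                               pixel_threshold=5000):
    # Masks as returned by find_exposure_regions() above.
    over0, dark0 = first_image_masks   # from the first night scene image
    over1, dark1 = frame_masks         # from this exposure's first frame
    diff = (abs(int(over1.sum()) - int(over0.sum()))
            + abs(int(dark1.sum()) - int(dark0.sum())))
    # Frames at this exposure stay in the denoising pipeline only when
    # the preset condition holds; otherwise they are dropped.
    return diff < pixel_threshold
```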
Next, in step S130, in a possible implementation manner, for each exposure level, the first-frame second night scene image captured at that exposure level is taken as the reference frame, the remaining second night scene images are aligned to it, and each aligned remaining frame is then fused with the reference frame to obtain the third night scene image for that exposure level. Aligning the images before the multi-frame noise reduction effectively mitigates ghosting and blur distortion in the denoised result.
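A minimal sketch of this align-then-fuse step follows, using OpenCV's ECC alignment and a plain average as the fusion; the patent names neither, so both are stand-ins, and a production pipeline would add outlier rejection.

```python
import cv2
import numpy as np

def multi_frame_denoise(frames):
    # frames: list of same-exposure BGR uint8 images; frames[0] is the
    # reference frame per the scheme above.
    ref = frames[0]
    ref_gray = cv2.cvtColor(ref, cv2.COLOR_BGR2GRAY)
    criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 50, 1e-4)
    stack = [ref.astype(np.float32)]
    for frame in frames[1:]:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        warp = np.eye(2, 3, dtype=np.float32)
        # Estimate an affine warp from this frame onto the reference.
        cv2.findTransformECC(ref_gray, gray, warp, cv2.MOTION_AFFINE,
                             criteria, None, 5)
        aligned = cv2.warpAffine(
            frame, warp, (ref.shape[1], ref.shape[0]),
            flags=cv2.INTER_LINEAR | cv2.WARP_INVERSE_MAP)
        stack.append(aligned.astype(np.float32))
    # Averaging the aligned stack suppresses zero-mean sensor noise.
    return np.mean(stack, axis=0).astype(np.uint8)
```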
In step S140, after the multi-frame noise reduction, third night scene images with better image quality are obtained for the different exposure levels. HDR (High Dynamic Range) fusion then needs to be performed with these third night scene images to widen the dynamic range of the night scene image, that is, to brighten the over-dark regions and to restore image detail in the over-bright regions. In the actual shooting process, however, factors such as hand shake inevitably cause small differences in image content between the frames of third night scene images, so directly performing HDR fusion on them may leave ghosting or image distortion in the final target night scene image.
Based on this, in one possible implementation, please refer to fig. 4 in combination, the step S140 can be implemented by the following sub-steps S141 and S142, which are described in detail below.
And a substep S141 of respectively aligning feature points of the third night scene images corresponding to each two exposure degrees to obtain the aligned fourth night scene images corresponding to each exposure degree.
And a substep S142, synthesizing the fourth night scene image corresponding to each exposure degree through a high dynamic range image algorithm to obtain a target night scene image.
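Sub-step S142 is elaborated below, but as a concrete point of reference, one plausible stand-in for the "high dynamic range image algorithm" it names is OpenCV's Mertens exposure fusion, which blends aligned, differently exposed frames without needing exposure metadata; the patent does not specify any particular algorithm.

```python
import cv2
import numpy as np

def fuse_exposures(aligned_frames):
    # aligned_frames: the fourth night scene images, one per exposure level.
    fused = cv2.createMergeMertens().process(aligned_frames)
    # process() returns float32 in roughly [0, 1]; rescale to 8-bit.
    return np.clip(fused * 255.0, 0, 255).astype(np.uint8)
```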
In this embodiment, before the HDR fusion of the third night scene images corresponding to different exposure levels, feature-point alignment may be performed on the third night scene images of every two exposure levels, which reduces ghosting and image distortion in the final target night scene image. Because the third night scene images to be aligned have different exposure levels, a conventional optical-flow alignment scheme cannot meet the alignment requirement here; the inventors verified through research that matching feature points first and then aligning can effectively align third night scene images of different exposure levels.
In an alternative embodiment, to raise the probability of successful feature-point matching before the alignment, for each pair of third night scene images the brightness of the brighter image may be pulled down to that of the darker one through a histogram specification algorithm. This narrows the brightness gap between the two third night scene images to be aligned, and thereby raises the probability that feature points match successfully.
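One way to realize this brightness pre-match is scikit-image's histogram matching, shown below as an assumed implementation of the histogram specification step:

```python
import numpy as np
from skimage.exposure import match_histograms

def equalize_pair_brightness(img_a, img_b):
    # Remap the brighter image's histogram onto the darker one so the two
    # third night scene images differ less before feature matching.
    if img_a.mean() > img_b.mean():
        return match_histograms(img_a, img_b, channel_axis=-1), img_b
    return img_a, match_histograms(img_b, img_a, channel_axis=-1)
```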
In an alternative embodiment of sub-step S141, the inventors further found that in some shooting scenes the feature points may be dense in one region and sparse elsewhere. Conventional feature-point alignment effectively prioritizes the dense region (for example, an area of dense trees in the scene), so the third night scene images align well there but poorly in the other regions.
For example, referring to fig. 5, suppose the X feature points with the highest confidence are taken from a third night scene image. Without any constraint on their placement, their distribution looks like the left image in fig. 5: most of them cluster in a few regions with salient features (the upper-left corner). Feature-point matching against the other third night scene images then concentrates in that corner, so the upper-left region aligns well with high probability while the other regions align poorly, leaving ghosting or distortion in the subsequently generated target night scene image.
Based on this, in a possible implementation manner, for every two exposure levels this embodiment may divide the two frames of third night scene images into a plurality of image regions, extract the feature points of each image region of both frames, and calculate the confidence of each feature point. According to the calculation results, the feature points whose confidence satisfies a set condition in each image region of the two frames are determined as target feature points, giving the target feature points of both frames. Finally, the two frames of third night scene images are aligned by their target feature points, yielding two aligned frames of fourth night scene images.
Wherein, the setting condition may include: the confidence degrees of the feature points are positioned at the top N in the descending sequence of the confidence degrees of the feature points in the image region, wherein N is a positive integer.
For example, the third night scene image A and the third night scene image B may each be divided into image area 1, image area 2, image area 3, ..., image area Y. The feature points of image areas 1 through Y of both images are extracted and their confidences calculated. Then, in each image area of both images, the feature points whose confidence ranks in the top N of that area are determined as the target feature points.
Still taking fig. 5 as an example, the distribution of the target feature points obtained with the above scheme is shown in the right diagram. It can be seen that screening the feature points per image region when aligning the third night scene images of every two exposure levels spreads the selected target feature points more evenly over the third night scene image, so the alignment takes the whole image into account, further reducing ghosting and distortion in the subsequently generated target night scene image.
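The per-region screening can be sketched as follows: detect keypoints over the whole frame, bucket them into a grid of image areas, and keep the top-N strongest per area. ORB keypoints and their response values stand in for the patent's unspecified feature points and confidences; the grid size and N are illustrative.

```python
import cv2

def grid_top_n_keypoints(gray, grid=(4, 4), n_per_area=20):
    orb = cv2.ORB_create(nfeatures=5000)
    keypoints = orb.detect(gray, None)
    h, w = gray.shape
    areas = {}
    for kp in keypoints:
        row = min(int(kp.pt[1] * grid[0] / h), grid[0] - 1)
        col = min(int(kp.pt[0] * grid[1] / w), grid[1] - 1)
        areas.setdefault((row, col), []).append(kp)
    kept = []
    for kps in areas.values():
        # kp.response plays the role of the confidence score; keeping the
        # top-N per image area spreads selections over the whole frame.
        kps.sort(key=lambda k: k.response, reverse=True)
        kept.extend(kps[:n_per_area])
    return kept
```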
Further, regarding sub-step S142, the inventors found that when the fourth night scene images of the different exposure levels are synthesized by a high dynamic range image algorithm, the pixel luminance values of the result tend, by the nature of such algorithms, to be pulled toward the middle of the range so that the dark and bright regions do not clip. The resulting target night scene image therefore has low contrast and degraded image quality.
Based on this, please further refer to fig. 6, the image processing method provided in this embodiment may further include step S150, which is described in detail as follows.
Step S150, the contrast of the target night scene image is enhanced by an automatic color level (auto levels) algorithm to obtain an enhanced target night scene image.
In this way, considering that the image synthesized by the high dynamic range image algorithm tends to have low contrast, enhancing the contrast of the target night scene image with the automatic color level algorithm markedly improves its visual effect.
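A common realization of such an automatic color level step is a per-channel percentile stretch, sketched below; the 0.5% clip fraction is a placeholder, not a value from the patent.

```python
import numpy as np

def auto_levels(image, clip_percent=0.5):
    # Stretch each channel so that the clip_percent darkest and brightest
    # pixels saturate, raising overall contrast.
    out = np.empty_like(image)
    for c in range(image.shape[2]):
        channel = image[..., c].astype(np.float32)
        lo = np.percentile(channel, clip_percent)
        hi = np.percentile(channel, 100.0 - clip_percent)
        stretched = (channel - lo) / max(hi - lo, 1e-6) * 255.0
        out[..., c] = np.clip(stretched, 0.0, 255.0).astype(image.dtype)
    return out
```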
Fig. 7 is a schematic diagram of the functional modules of the image processing apparatus 200 according to an embodiment of the present application; the image processing apparatus 200 may be divided into functional modules following the foregoing method embodiment. For example, each functional module may correspond to one function, or two or more functions may be integrated into one processing module. The integrated module may be implemented in hardware or as a software functional module. It should be noted that the division of modules in the present application is schematic and is only a logical functional division; other divisions are possible in actual implementation, and the image processing apparatus shown in fig. 7 is only a schematic diagram. The image processing apparatus 200 may include an acquisition determining module 210, an image acquiring module 220, a noise reduction processing module 230, and an image processing module 240; the functions of these modules are described in detail below.
The obtaining and determining module 210 is configured to obtain a frame of first night view image, and determine, according to an overexposed region and an overexposed region of the first night view image, an exposure level of an image to be collected and a number of frames to be collected corresponding to each exposure level. It is to be understood that the acquisition determining module 210 may be configured to perform the step S110, and as to the detailed implementation of the acquisition determining module 210, reference may be made to the content related to the step S110.
The image obtaining module 220 is configured to obtain multiple frames of second night scene images at each exposure level according to the exposure level of the image to be collected and the number of frames to be collected corresponding to each exposure level. It is understood that the image obtaining module 220 can be used to perform the step S120, and for the detailed implementation of the image obtaining module 220, reference can be made to the above-mentioned contents related to the step S120.
The denoising module 230 is configured to perform multi-frame denoising processing on the multiple frames of second night view images corresponding to each exposure degree, respectively, to obtain a third night view image corresponding to each exposure degree. It is understood that the denoising processing module 230 may be configured to perform the step S130, and for a detailed implementation of the denoising processing module 230, reference may be made to the contents related to the step S130.
And the image processing module 240 is configured to perform image processing on the third night view image corresponding to each exposure level to obtain a target night view image. It is understood that the image processing module 240 may be configured to execute the step S140, and for a detailed implementation of the image processing module 240, reference may be made to the content related to the step S140.
In a possible implementation manner, the acquisition determining module 210 may specifically determine the exposure level of the image to be acquired and the number of frames to be acquired corresponding to each exposure level by:
identifying an overexposed area and an overexposed area from the first night scene image;
calculating a first proportion of the over-exposed area in the first night view image and a second proportion of the over-dark area in the first night view image;
determining the exposure of the image to be acquired according to the first proportion and the second proportion;
and determining the number of frames to be acquired corresponding to each exposure level according to the sensitivity of the first night scene image, each determined exposure level, and a preset frame-number correspondence, wherein each number of frames to be acquired corresponds to one sensitivity and one exposure level.
In one possible implementation, the acquisition determining module 210 may specifically identify the overexposed region and the overexposed region from the first night view image by:
acquiring a gray level histogram of a first night scene image;
determining the pixel points with the gray values larger than a first gray threshold as first pixel points and determining the pixel points with the gray values smaller than a second gray threshold as second pixel points according to the gray values of all the pixel points in the gray histogram;
and determining the region including all the first pixel points as an overexposed region, and determining the region including all the second pixel points as an over-dark region.
In a possible implementation manner, the exposure levels of the images to be captured include a first exposure level of the first night view image, and the obtaining determining module 210 may specifically determine the exposure levels of the images to be captured by:
if the first proportion is smaller than the first set value and the second proportion is smaller than the second set value, determining the exposure of the image to be acquired as the first exposure of the first night scene image;
if the first ratio is larger than the third set value, determining a second exposure according to the first exposure and the difference between the first ratio and the third set value, and determining that the exposure of the image to be acquired also comprises the second exposure;
and if the second proportion is larger than a fourth set value, determining a third exposure according to the first exposure and the difference between the second proportion and the fourth set value, and determining that the exposure of the image to be acquired also comprises the third exposure.
In a possible implementation manner, the exposure levels of the images to be captured include a first exposure level of the first night view image, a second exposure level smaller than the first exposure level, and a third exposure level larger than the first exposure level, and the image obtaining module 220 may specifically obtain multiple frames of the second night view image at each exposure level by:
collecting a plurality of frames of second night scene images under the first exposure;
collecting a frame of second night scene image under a second exposure level;
acquiring a frame of second night scene image under a third exposure level;
collecting each frame of the remaining second night scene images under the second exposure;
and collecting the remaining each frame of second night scene image under the third exposure level to obtain a plurality of frames of second night scene images corresponding to each exposure level.
In a possible implementation, the image processing apparatus 200 may further include an image removing module configured to:
judging, for each exposure level, whether the over-exposed area and the over-dark area of the first frame of second night scene image corresponding to that exposure level satisfy preset conditions;
if the first frame of second night scene image does not satisfy the preset conditions, determining the multiple frames of second night scene images at that exposure level as frames excluded from multi-frame noise reduction, so that no multi-frame noise reduction is performed on the multiple frames of second night scene images corresponding to that exposure level;
wherein the preset conditions include: the difference between the pixel counts of the over-exposed and over-dark areas of the first frame of second night scene image and the pixel counts of the over-exposed and over-dark areas of the first night scene image is smaller than a preset pixel-count threshold.
In a possible implementation manner, the noise reduction processing module 230 may specifically perform multi-frame noise reduction processing on the multiple frames of the second night view image corresponding to each exposure degree respectively in the following manner to obtain a third night view image corresponding to each exposure degree:
aiming at each exposure, taking a first frame of second night scene image collected under the exposure as a reference frame, and aligning the rest second night scene images with the reference frame;
and performing image fusion processing on each residual second night scene image after alignment and the reference frame to obtain a third night scene image corresponding to the exposure level.
In a possible implementation manner, the image processing module 240 may specifically perform image processing on the third night view image corresponding to each exposure level to obtain a target night view image by:
respectively aligning feature points of the third night scene images corresponding to every two exposure degrees to obtain a fourth night scene image corresponding to each exposure degree after alignment;
and synthesizing the fourth night scene image corresponding to each exposure degree through a high dynamic range image algorithm to obtain a target night scene image.
In a possible implementation manner, the image processing module 240 may specifically perform feature point alignment on the third night view image corresponding to each two exposure levels respectively in the following manner to obtain a fourth night view image corresponding to each exposure level after alignment:
dividing the two frames of third night scene images into a plurality of image areas aiming at the third night scene images corresponding to every two exposure degrees;
respectively extracting each feature point of each image area of the two frames of third night scene images;
calculating the confidence coefficient of each feature point of each image area of the two frames of third night scene images;
according to the calculation result, determining the feature points with the confidence degrees meeting the set conditions in each image area of the two frames of third night scene images as target feature points so as to determine each target feature point in the two frames of third night scene images;
performing feature point alignment on the two frames of third night scene images according to each target feature point in the two frames of third night scene images to obtain two frames of fourth night scene images after alignment;
wherein, the setting conditions include: the confidence degrees of the feature points are positioned at the top N in the descending sequence of the confidence degrees of the feature points in the image region, wherein N is a positive integer.
In a possible embodiment, for the third night view images corresponding to every two exposure levels, the image processing module 240 may further reduce, through a histogram specification algorithm, the brightness of the brighter of the two third night view images to the brightness of the darker one.
In a possible implementation manner, the image processing module 240 may further enhance the contrast of the target night-scene image by using an automatic color-level algorithm, so as to obtain an enhanced target night-scene image.
Further, referring to fig. 8, a schematic block diagram of a structure of an electronic device 100 provided in an embodiment of the present application is shown, where the electronic device 100 may include a machine-readable storage medium 120 and a processor 130.
The processor 130 may be a general-purpose central processing unit (CPU), a microprocessor, an application-specific integrated circuit (ASIC), or one or more integrated circuits for controlling the execution of the image processing method provided in the foregoing method embodiments.
The machine-readable storage medium 120 may be, but is not limited to, a ROM or other type of static storage device that can store static information and instructions, a RAM or other type of dynamic storage device that can store information and instructions, an electrically erasable programmable read-only memory (EEPROM), a compact disc read-only memory (CD-ROM) or other optical disc storage (including compact discs, laser discs, optical discs, digital versatile discs, Blu-ray discs, etc.), a magnetic disk storage medium or other magnetic storage device, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer. The machine-readable storage medium 120 may be self-contained and coupled to the processor 130 via a communication bus, or may be integrated with the processor. The machine-readable storage medium 120 is used for storing machine-executable instructions for performing aspects of the present application. The processor 130 is configured to execute the machine-executable instructions stored in the machine-readable storage medium 120 to implement the foregoing method embodiments.
Since the electronic device 100 provided in the embodiment of the present application is another implementation form of the image processing method provided in the foregoing method embodiment, and the electronic device 100 can be used to execute another implementation form of the image processing method provided in the foregoing method embodiment, reference may be made to the foregoing method embodiment for obtaining technical effects, and details are not repeated here.
Embodiments of the present application further provide a readable storage medium containing computer-executable instructions, which when executed, may be configured to perform operations related to the image processing method provided in the foregoing method embodiments.
Embodiments of the present application are described with reference to flowchart illustrations and/or block diagrams of methods, apparatus and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While the present application has been described in connection with various embodiments, other variations to the disclosed embodiments can be understood and effected by those skilled in the art in practicing the claimed application, from a review of the drawings, the disclosure, and the appended claims. In the claims, the word "comprising" does not exclude other elements or steps, and the word "a" or "an" does not exclude a plurality. A single processor or other unit may fulfill the functions of several items recited in the claims. The mere fact that certain measures are recited in mutually different dependent claims does not indicate that a combination of these measures cannot be used to advantage.
The above description is only for various embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily conceive of changes or substitutions within the technical scope of the present application, and all such changes or substitutions are included in the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (9)

1. An image processing method applied to an electronic device, the method comprising:
acquiring a frame of first night scene image, and identifying an overexposed area and an over-dark area from the first night scene image;
calculating a first proportion of the over-exposed area in the first night view image and a second proportion of the over-dark area in the first night view image;
determining the exposure of the image to be acquired according to the first proportion and the second proportion, wherein this step comprises:
if the first proportion is smaller than a first set value and the second proportion is smaller than a second set value, determining the exposure of the image to be collected as the first exposure of the first night scene image;
if the first ratio is larger than a third set value, determining a second exposure according to the first exposure and the difference between the first ratio and the third set value, and determining that the exposure of the image to be acquired also comprises the second exposure;
if the second proportion is larger than a fourth set value, determining a third exposure according to the first exposure and the difference between the second proportion and the fourth set value, and determining that the exposure of the image to be acquired also comprises the third exposure;
determining the number of frames to be acquired corresponding to each exposure degree according to the sensitivity of the first night scene image and the determined exposure degree and the corresponding relation of the preset number of frames, wherein each number of frames to be acquired corresponds to one sensitivity and one exposure degree;
the exposure of the image to be acquired comprises a first exposure of the first night scene image, a second exposure smaller than the first exposure and a third exposure larger than the first exposure;
collecting a plurality of frames of second night scene images under the first exposure;
acquiring a frame of second night scene image under the second exposure;
acquiring a frame of second night scene image at the third exposure;
collecting each frame of the remaining second night scene images under the second exposure;
collecting the remaining each frame of second night scene image under the third exposure to obtain a plurality of frames of second night scene images corresponding to each exposure;
respectively carrying out multi-frame noise reduction processing on the multi-frame second night scene images corresponding to each exposure degree to obtain a third night scene image corresponding to each exposure degree, wherein this step comprises:
aiming at each exposure, taking a first frame of second night scene image collected under the exposure as a reference frame, and aligning the rest second night scene images with the reference frame;
performing image fusion processing on each aligned residual second night scene image and the reference frame to obtain a third night scene image corresponding to the exposure;
and carrying out image processing on the third night scene image corresponding to each exposure degree to obtain a target night scene image.
2. The method according to claim 1, wherein the step of identifying the overexposed area and the over-dark area from the first night view image comprises:
acquiring a gray level histogram of the first night scene image;
determining the pixel points with the gray values larger than a first gray threshold value as first pixel points and determining the pixel points with the gray values smaller than a second gray threshold value as second pixel points according to the gray value of each pixel point in the gray histogram;
and determining the region including all the first pixel points as an overexposed region, and determining the region including all the second pixel points as an over-dark region.
3. The image processing method according to any one of claims 1 to 2, wherein the step of performing image processing on the third night view image corresponding to each exposure level to obtain a target night view image comprises:
respectively aligning feature points of the third night scene images corresponding to every two exposure degrees to obtain a fourth night scene image corresponding to each exposure degree after alignment;
and synthesizing the fourth night scene image corresponding to each exposure degree through a high dynamic range image algorithm to obtain a target night scene image.
4. The image processing method according to claim 3, wherein the step of performing feature point alignment on the third night scene images corresponding to every two exposures to obtain the aligned fourth night scene image corresponding to each exposure comprises:
for the third night scene images corresponding to every two exposures, dividing the two frames of third night scene images into a plurality of image areas;
extracting the feature points of each image area of the two frames of third night scene images;
calculating the confidence of each feature point in each image area of the two frames of third night scene images;
according to the calculation result, determining the feature points whose confidence satisfies a set condition in each image area of the two frames of third night scene images as target feature points, so as to determine the target feature points in the two frames of third night scene images;
and performing feature point alignment on the two frames of third night scene images according to the target feature points to obtain two aligned frames of fourth night scene images;
wherein the set condition includes: the confidence of the feature point ranks in the top N when the confidences of the feature points in the image area are sorted in descending order, where N is a positive integer.
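Claim 4's region-wise selection can be sketched as follows, assuming ORB keypoints and using the detector's response value as the confidence score (both are illustrative choices; the patent names neither a feature detector nor a confidence measure):

```python
# Minimal sketch of claim 4: split a third night scene image into grid regions and
# keep the top-N keypoints per region by confidence (ORB response, an assumption).
import cv2

def top_n_keypoints_per_region(image_gray, grid=(4, 4), n=5):
    keypoints = cv2.ORB_create().detect(image_gray, None)
    rows, cols = grid
    cell_h = image_gray.shape[0] / rows
    cell_w = image_gray.shape[1] / cols
    buckets = {}
    for kp in keypoints:
        cell = (int(kp.pt[1] // cell_h), int(kp.pt[0] // cell_w))
        buckets.setdefault(cell, []).append(kp)
    selected = []
    for cell_kps in buckets.values():
        cell_kps.sort(key=lambda k: k.response, reverse=True)  # descending confidence
        selected.extend(cell_kps[:n])  # the "set condition": top N per image area
    return selected
```

Matching the selected target feature points between the two frames and estimating a transform (e.g., with cv2.findHomography) would then realize the alignment itself.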
5. The image processing method according to claim 3, wherein before the step of performing feature point alignment on the third night scene images corresponding to every two exposures to obtain the aligned fourth night scene image corresponding to each exposure, the method further comprises:
for the third night scene images corresponding to every two exposures, reducing the brightness of the brighter of the two third night scene images to the brightness of the darker one through a histogram specification algorithm.
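A minimal sketch of claim 5's brightness reduction, assuming scikit-image's match_histograms as the histogram specification implementation (an illustrative library choice):

```python
# Minimal sketch of claim 5: specify the brighter image's histogram onto the darker
# image's histogram before feature point alignment. The library used is an assumption.
from skimage.exposure import match_histograms

def match_pair_brightness(img_a, img_b):
    """Return the pair with the brighter color image mapped to the darker one's histogram."""
    if img_a.mean() > img_b.mean():
        img_a = match_histograms(img_a, img_b, channel_axis=-1)
    else:
        img_b = match_histograms(img_b, img_a, channel_axis=-1)
    return img_a, img_b
```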
6. The image processing method according to claim 1, characterized in that the method further comprises:
and enhancing the contrast of the target night scene image by an auto levels algorithm to obtain an enhanced target night scene image.
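Claim 6's automatic levels step can be sketched as a per-channel contrast stretch; the 0.5% clip percentile below is an illustrative assumption, not a value from the patent.

```python
# Minimal sketch of claim 6: per-channel auto levels contrast stretch on the target
# night scene image. The clip percentile is an assumption.
import numpy as np

def auto_levels(image, clip_percent=0.5):
    out = np.empty_like(image)
    for c in range(image.shape[2]):
        channel = image[..., c].astype(np.float32)
        low, high = np.percentile(channel, [clip_percent, 100.0 - clip_percent])
        if high <= low:  # flat channel: nothing to stretch
            out[..., c] = image[..., c]
            continue
        stretched = (channel - low) * 255.0 / (high - low)
        out[..., c] = np.clip(stretched, 0, 255).astype(np.uint8)
    return out
```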
7. An image processing apparatus applied to an electronic device, the apparatus comprising:
the acquisition determining module is used for acquiring a frame of first night scene image and identifying an over-exposed area and an over-dark area from the first night scene image;
calculating a first proportion of the over-exposed area in the first night scene image and a second proportion of the over-dark area in the first night scene image;
and determining the exposure of the image to be acquired according to the first proportion and the second proportion, which comprises:
if the first proportion is smaller than a first set value and the second proportion is smaller than a second set value, determining that the exposure of the image to be acquired is the first exposure of the first night scene image;
if the first proportion is larger than a third set value, determining a second exposure according to the first exposure and the difference between the first proportion and the third set value, and determining that the exposure of the image to be acquired further comprises the second exposure;
if the second proportion is larger than a fourth set value, determining a third exposure according to the first exposure and the difference between the second proportion and the fourth set value, and determining that the exposure of the image to be acquired further comprises the third exposure;
and determining the number of frames to be acquired corresponding to each exposure according to the sensitivity of the first night scene image, each determined exposure, and a preset frame number correspondence, wherein each number of frames to be acquired corresponds to one sensitivity and one exposure;
wherein the exposure of the image to be acquired comprises the first exposure of the first night scene image, a second exposure smaller than the first exposure, and a third exposure larger than the first exposure;
an image acquisition module, used for:
acquiring a plurality of frames of second night scene images at the first exposure;
acquiring one frame of second night scene image at the second exposure;
acquiring one frame of second night scene image at the third exposure;
acquiring the remaining frames of second night scene images at the second exposure;
acquiring the remaining frames of second night scene images at the third exposure, so as to obtain a plurality of frames of second night scene images corresponding to each exposure;
a noise reduction processing module, used for performing multi-frame noise reduction processing on the plurality of frames of second night scene images corresponding to each exposure to obtain the third night scene image corresponding to each exposure, by:
for each exposure, taking the first frame of second night scene image acquired at that exposure as a reference frame, and aligning the remaining second night scene images with the reference frame;
and performing image fusion processing on each aligned remaining second night scene image and the reference frame to obtain the third night scene image corresponding to that exposure;
and an image processing module, used for performing image processing on the third night scene image corresponding to each exposure to obtain a target night scene image.
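The decision logic of claim 7's acquisition determining module can be pictured end to end with the sketch below. Every numeric set value, the EV-offset formula, and the frame-count table are invented for illustration; the patent leaves them to preset values and a preset sensitivity/exposure-to-frame-count correspondence.

```python
# Minimal sketch of claim 7's acquisition determining module. All constants below
# (set values, EV step, frame-count table) are illustrative assumptions.
def plan_capture(first_exposure_ev, sensitivity_iso, first_proportion, second_proportion,
                 third_set=0.10, fourth_set=0.10):
    # When both proportions stay below their set values, only the first exposure remains.
    exposures = [first_exposure_ev]
    if first_proportion > third_set:
        # a larger over-exposed proportion pushes the second exposure further down
        exposures.append(first_exposure_ev - (1.0 + 10.0 * (first_proportion - third_set)))
    if second_proportion > fourth_set:
        # a larger over-dark proportion pushes the third exposure further up
        exposures.append(first_exposure_ev + (1.0 + 10.0 * (second_proportion - fourth_set)))

    def frames_for(iso):
        # stand-in for the preset correspondence: higher ISO -> more frames to denoise
        return 4 if iso < 800 else 6 if iso < 3200 else 8

    return [(ev, frames_for(sensitivity_iso)) for ev in exposures]
```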
8. An electronic device, comprising a machine-readable storage medium having machine-executable instructions stored thereon and a processor, wherein the processor, when executing the machine-executable instructions, implements the image processing method of any one of claims 1-6.
9. A readable storage medium having stored therein machine-executable instructions which, when executed, perform the image processing method of any one of claims 1 to 6.
CN201910720898.2A 2019-08-06 2019-08-06 Image processing method and device, electronic equipment and readable storage medium Active CN110443766B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910720898.2A CN110443766B (en) 2019-08-06 2019-08-06 Image processing method and device, electronic equipment and readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910720898.2A CN110443766B (en) 2019-08-06 2019-08-06 Image processing method and device, electronic equipment and readable storage medium

Publications (2)

Publication Number Publication Date
CN110443766A (en) 2019-11-12
CN110443766B (en) 2022-05-31

Family

ID=68433356

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910720898.2A Active CN110443766B (en) 2019-08-06 2019-08-06 Image processing method and device, electronic equipment and readable storage medium

Country Status (1)

Country Link
CN (1) CN110443766B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112907701B (en) * 2019-11-19 2022-08-05 杭州海康威视数字技术股份有限公司 Method and device for acquiring image, computer equipment and storage medium
CN111242860B (en) * 2020-01-07 2024-02-27 影石创新科技股份有限公司 Super night scene image generation method and device, electronic equipment and storage medium
CN111310727B (en) * 2020-03-13 2023-12-08 浙江大华技术股份有限公司 Object detection method and device, storage medium and electronic device
CN111654623B (en) * 2020-05-29 2022-03-22 维沃移动通信有限公司 Photographing method and device and electronic equipment
CN112003996B (en) * 2020-08-12 2023-04-18 Oppo广东移动通信有限公司 Video generation method, terminal and computer storage medium

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105578068B (en) * 2015-12-21 2018-09-04 广东欧珀移动通信有限公司 A kind of generation method of high dynamic range images, device and mobile terminal
CN108322646B (en) * 2018-01-31 2020-04-10 Oppo广东移动通信有限公司 Image processing method, image processing device, storage medium and electronic equipment

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101166240A (en) * 2006-10-19 2008-04-23 索尼株式会社 Image processing device, image forming device and image processing method
CN103973988A (en) * 2013-01-24 2014-08-06 华为终端有限公司 Scene recognition method and device
CN103413286A (en) * 2013-08-02 2013-11-27 北京工业大学 United reestablishing method of high dynamic range and high-definition pictures based on learning
CN107465882A (en) * 2017-09-22 2017-12-12 维沃移动通信有限公司 A kind of image capturing method and mobile terminal
CN107679470A (en) * 2017-09-22 2018-02-09 天津大学 A kind of traffic mark board detection and recognition methods based on HDR technologies
CN108288253A (en) * 2018-01-08 2018-07-17 厦门美图之家科技有限公司 HDR image generation method and device
CN109218613A (en) * 2018-09-18 2019-01-15 Oppo广东移动通信有限公司 High dynamic-range image synthesis method, device, terminal device and storage medium
CN109218627A (en) * 2018-09-18 2019-01-15 Oppo广东移动通信有限公司 Image processing method, device, electronic equipment and storage medium
CN109348089A (en) * 2018-11-22 2019-02-15 Oppo广东移动通信有限公司 Night scene image processing method, device, electronic equipment and storage medium
CN109862282A (en) * 2019-02-18 2019-06-07 Oppo广东移动通信有限公司 Character image treating method and apparatus
CN110072051A (en) * 2019-04-09 2019-07-30 Oppo广东移动通信有限公司 Image processing method and device based on multiple image

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Research on Dynamic Range Extension Technology Based on Multi-Exposure Fusion and Artifact Removal; Jiang Shenyu; China Master's Theses Full-text Database, Information Science and Technology; Feb. 15, 2016 (No. 02); I138-1476 *
Research on High Dynamic Range Image Synthesis and Display Technology; Sun Jing; China Master's Theses Full-text Database, Information Science and Technology; Dec. 15, 2017 (No. 12); I138-394 *

Also Published As

Publication number Publication date
CN110443766A (en) 2019-11-12

Similar Documents

Publication Publication Date Title
CN110443766B (en) Image processing method and device, electronic equipment and readable storage medium
CN111028189B (en) Image processing method, device, storage medium and electronic equipment
CN108898567B (en) Image noise reduction method, device and system
CN110072051B (en) Image processing method and device based on multi-frame images
CN109636754B (en) Extremely-low-illumination image enhancement method based on generation countermeasure network
CN110473185B (en) Image processing method and device, electronic equipment and computer readable storage medium
JP4234195B2 (en) Image segmentation method and image segmentation system
CN113766125B (en) Focusing method and device, electronic equipment and computer readable storage medium
CN110033418B (en) Image processing method, image processing device, storage medium and electronic equipment
WO2018136373A1 (en) Image fusion and hdr imaging
CN108198152B (en) Image processing method and device, electronic equipment and computer readable storage medium
WO2015184208A1 (en) Constant bracketing for high dynamic range operations (chdr)
CN110660090B (en) Subject detection method and apparatus, electronic device, and computer-readable storage medium
JP2015532070A (en) Scene recognition method and apparatus
KR20120016476A (en) Image processing method and image processing apparatus
CN111915505A (en) Image processing method, image processing device, electronic equipment and storage medium
CN111953893B (en) High dynamic range image generation method, terminal device and storage medium
CN107105172B (en) Focusing method and device
CN110796041A (en) Subject recognition method and device, electronic equipment and computer-readable storage medium
CN113012081A (en) Image processing method, device and electronic system
CN113177438A (en) Image processing method, apparatus and storage medium
CN114418879A (en) Image processing method, image processing device, electronic equipment and storage medium
CN113298735A (en) Image processing method, image processing device, electronic equipment and storage medium
CN113259594A (en) Image processing method and device, computer readable storage medium and terminal
US20170278229A1 (en) Image Processing Method, Computer Storage Medium, Apparatus and Terminal

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant