WO2016029380A1 - Image processing method, computer storage medium, device and terminal - Google Patents
Image processing method, computer storage medium, device and terminal
- Publication number
- WO2016029380A1 (PCT application No. PCT/CN2014/085280, also referenced as CN2014085280W)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- pixel
- images
- exposure time
- pixel value
- Prior art date
Links
- 238000003672 processing method Methods 0.000 title claims abstract description 11
- 238000000034 method Methods 0.000 claims description 38
- 239000011159 matrix material Substances 0.000 claims description 37
- 230000009466 transformation Effects 0.000 claims description 37
- 230000008569 process Effects 0.000 claims description 12
- 230000008859 change Effects 0.000 claims description 4
- 238000004364 calculation method Methods 0.000 description 5
- 238000010586 diagram Methods 0.000 description 5
- 230000006870 function Effects 0.000 description 5
- 230000006835 compression Effects 0.000 description 4
- 238000007906 compression Methods 0.000 description 4
- 230000003044 adaptive effect Effects 0.000 description 3
- 230000008878 coupling Effects 0.000 description 3
- 238000010168 coupling process Methods 0.000 description 3
- 238000005859 coupling reaction Methods 0.000 description 3
- 230000001186 cumulative effect Effects 0.000 description 3
- 230000000694 effects Effects 0.000 description 3
- 230000003287 optical effect Effects 0.000 description 3
- 238000004891 communication Methods 0.000 description 2
- 230000009977 dual effect Effects 0.000 description 2
- 238000005282 brightening Methods 0.000 description 1
- 238000005516 engineering process Methods 0.000 description 1
- 238000000605 extraction Methods 0.000 description 1
- 238000009432 framing Methods 0.000 description 1
- 238000005286 illumination Methods 0.000 description 1
- 230000009022 nonlinear effect Effects 0.000 description 1
- 238000010187 selection method Methods 0.000 description 1
- 230000006641 stabilisation Effects 0.000 description 1
- 238000011105 stabilization Methods 0.000 description 1
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/68—Control of cameras or camera modules for stable pick-up of the scene, e.g. compensating for camera body vibrations
- H04N23/682—Vibration or motion blur correction
- H04N23/684—Vibration or motion blur correction performed by controlling the image sensor readout, e.g. by controlling the integration time
- H04N23/6845—Vibration or motion blur correction performed by controlling the image sensor readout, e.g. by controlling the integration time by combination of a plurality of images sequentially taken
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/50—Image enhancement or restoration using two or more images, e.g. averaging or subtraction
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/20—Image enhancement or restoration using local operators
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/70—Denoising; Smoothing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/73—Deblurring; Sharpening
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/174—Segmentation; Edge detection involving the use of two or more images
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/20—Image signal generators
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/68—Control of cameras or camera modules for stable pick-up of the scene, e.g. compensating for camera body vibrations
- H04N23/681—Motion detection
- H04N23/6811—Motion detection based on the image signal
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20172—Image enhancement details
- G06T2207/20182—Noise reduction or smoothing in the temporal domain; Spatio-temporal filtering
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20172—Image enhancement details
- G06T2207/20201—Motion blur correction
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/14—Picture signal circuitry for video frequency region
- H04N5/21—Circuitry for suppressing or minimising disturbance, e.g. moiré or halo
Definitions
- Image processing method, computer storage medium, device and terminal
- The present invention relates to the field of image processing technologies, and in particular to an image processing method, a computer storage medium, a device, and a terminal. Background technique
- The sensor area of such devices is small, and the maximum exposure time without significant motion blur when photographing is relatively short, so this phenomenon (image blur caused by hand-held shake during exposure) is more likely to occur. Especially in night scenes or in very dark indoor scenes, a longer exposure time is generally required in order to achieve sufficient exposure for the photos, which makes the problem more noticeable.
- Embodiments of the present invention provide an image processing method, a computer storage medium, a device, and a terminal, which are used to solve the technical problem that the quality of a captured image is low due to a long exposure time.
- a first aspect of the present invention provides an image processing method, including:
- if the first exposure time is greater than the preset exposure time, N second images are respectively captured according to the preset exposure time, and the scene corresponding to the N second images is the same as the scene corresponding to the first image, where N is a positive integer;
- processing the N second images to obtain a first specific image, including:
- selecting one second image from the N second images as a reference image; respectively registering the remaining N-1 second images with the reference image to obtain N-1 third images; performing local motion compensation on the N-1 third images to obtain N-1 fourth images; and obtaining the first specific image according to the pixel value of each pixel in the reference image and the pixel value of each pixel in the N-1 fourth images.
- the remaining N-1 second images are respectively registered with the reference image to obtain N-1 third images, including:
- performing local motion compensation on the N-1 third images to obtain N-1 fourth images includes:
- for each of the N-1 third images, performing the local motion compensation according to the following steps to obtain the N-1 fourth images:
- determining whether the absolute value of the difference between the pixel value of the j-th pixel point and the pixel value of the j-th pixel point in the reference image is greater than or equal to a preset threshold, and if so, taking the pixel value of the j-th pixel point in the reference image as the pixel value of the j-th pixel point; where j is any integer from 1 to M, and M is the total number of pixel points included in the reference image.
- the method further includes:
- if the absolute value of the difference between the pixel value of the j-th pixel point and the pixel value of the j-th pixel point in the reference image is less than the preset threshold, keeping the pixel value of the j-th pixel point unchanged.
- With reference to the fourth possible implementation manner, in a fifth possible implementation manner of the first aspect, obtaining the first specific image according to the pixel value of each pixel in the reference image and the pixel value of each pixel in the N-1 fourth images includes:
- the first specific image is obtained based on the obtained summed pixel values.
- obtaining the first specific image according to the obtained summed pixel values includes:
- adjusting the brightness and chromaticity of the second specific image to obtain the first specific image includes:
- the method further includes: if the first exposure time is less than or equal to the preset exposure time, capturing the first image according to the first exposure time.
- a second aspect of the present invention provides a computer storage medium storing a program, where the program includes the steps of the method described above.
- a third aspect of the invention provides an image processing apparatus, including:
- a determining module configured to determine, when the first image is captured, a first exposure time required to capture the first image
- a determining module configured to determine whether the first exposure time is greater than a preset exposure time; wherein, the blur degree of the image captured according to the preset exposure time is less than or equal to a preset blur degree;
- a shooting module configured to, when the first exposure time is greater than the preset exposure time, respectively capture N second images according to the preset exposure time, where the scene corresponding to the N second images is the same as the scene corresponding to the first image, and N is a positive integer;
- a processing module configured to process the N second images to obtain a first specific image, where the blur degree of the first specific image is smaller than the blur degree of the first image taken according to the first exposure time.
- the processing module is specifically configured to:
- the processing module is configured to register the remaining N-1 second images with the reference image respectively to obtain N-1 third images, specifically: determining, for each of the remaining N-1 second images, a transformation matrix between the second image and the reference image, and registering the second image with the reference image through the transformation matrix to obtain a third image corresponding to the second image; wherein the transformation matrix is used to indicate the relative motion relationship between the second image and the reference image.
- the processing module is specifically configured to perform local motion compensation on the N-1 third images to obtain N-1 fourth images, specifically:
- for each of the N-1 third images, performing the local motion compensation according to the following steps to obtain the N-1 fourth images:
- determining whether the absolute value of the difference between the pixel value of the j-th pixel point and the pixel value of the j-th pixel point in the reference image is greater than or equal to a preset threshold, and if so, taking the pixel value of the j-th pixel point in the reference image as the pixel value of the j-th pixel point; where j is any integer from 1 to M, and M is the total number of pixel points included in the reference image.
- the processing module is further configured to: after determining whether the absolute value of the difference between the pixel value of the j-th pixel point and the pixel value of the j-th pixel point in the reference image is greater than or equal to the preset threshold, keep the pixel value of the j-th pixel point unchanged if that absolute value is less than the preset threshold.
- the processing module is specifically configured to obtain the first specific image according to the pixel value of each pixel in the reference image and the pixel value of each pixel in the N-1 fourth images, specifically:
- the first specific image is obtained based on the obtained summed pixel value.
- the processing module is specifically configured to obtain the first specific image according to the obtained summed pixel values, specifically:
- the processing module is specifically configured to adjust the brightness and chrominance of the second specific image to obtain the first specific image, specifically:
- the shooting module is further configured to, after the determining module determines whether the first exposure time is greater than the preset exposure time, capture the first image according to the first exposure time if the first exposure time is less than or equal to the preset exposure time.
- a fourth aspect of the present invention provides a terminal, including: a memory, an input device, and a processor; wherein the memory and the input device are respectively connected to the processor, where
- the memory is configured to store an instruction
- the processor is configured to execute the instruction, when the first image is captured, determine a first exposure time required to capture the first image; and determine whether the first exposure time is greater than a preset exposure time
- the degree of blur of the image taken according to the preset exposure time is less than or equal to the preset blur degree; if the first exposure time is greater than the preset exposure time, N second images are respectively captured by the input device according to the preset exposure time, and the scene corresponding to the N second images is the same as the scene corresponding to the first image, where N is a positive integer; and the N second images are processed to obtain a first specific image, where the degree of blur of the first specific image is smaller than the degree of blur of the first image taken according to the first exposure time.
- the processor is configured to process the N second images to obtain a first specific image, specifically:
- selecting one second image from the N second images as a reference image; registering the remaining N-1 second images with the reference image respectively to obtain N-1 third images; performing local motion compensation on the N-1 third images to obtain N-1 fourth images; and obtaining the first specific image according to the pixel value of each pixel in the reference image and the pixel value of each pixel in the N-1 fourth images.
- the processor is specifically configured to register the remaining N-1 second images with the reference image respectively to obtain N-1 third images, specifically:
- the processor is specifically configured to perform local motion compensation on the N-1 third images to obtain N-1 fourth images, specifically:
- for each of the N-1 third images, performing the local motion compensation according to the following steps to obtain the N-1 fourth images:
- determining whether the absolute value of the difference between the pixel value of the j-th pixel point and the pixel value of the j-th pixel point in the reference image is greater than or equal to a preset threshold, and if so, taking the pixel value of the j-th pixel point in the reference image as the pixel value of the j-th pixel point; where j is any integer from 1 to M, and M is the total number of pixel points included in the reference image.
- the processor is further configured to execute the instruction to: after determining whether the absolute value of the difference between the pixel value of the j-th pixel point and the pixel value of the j-th pixel point in the reference image is greater than or equal to the preset threshold, keep the pixel value of the j-th pixel point unchanged if that absolute value is less than the preset threshold.
- the processor is specifically configured to obtain the first specific image according to the pixel value of each pixel in the reference image and the pixel value of each pixel in the N-1 fourth images, specifically:
- summing the pixel value of the i-th pixel point in the reference image and the pixel value of the i-th pixel point in each of the N-1 fourth images, where i is any integer from 1 to M and M is the total number of pixel points included in the reference image; and obtaining the first specific image based on the obtained summed pixel values.
- the processor is specifically configured to obtain the first specific image according to the obtained summed pixel values, specifically: obtaining a second specific image according to the obtained summed pixel values, and adjusting the brightness and chromaticity of the second specific image to obtain the first specific image.
- the processor is specifically configured to adjust the brightness and chrominance of the second specific image to obtain the first specific image, specifically: adjusting the brightness of the second specific image according to a luminance histogram, and adjusting the chromaticity of the second specific image according to the adjusted brightness of the second specific image to obtain the first specific image.
- With reference to the seventh possible implementation manner, the processor is further configured to execute the instruction to, after determining whether the first exposure time is greater than the preset exposure time, capture the first image through the input device according to the first exposure time if the first exposure time is less than or equal to the preset exposure time.
- In the embodiments of the present invention, the exposure time required to capture the first image in the current environment is determined and referred to as the first exposure time, and after the determination it is compared with the preset exposure time. If the first exposure time is greater than the preset exposure time, indicating that the first exposure time is too long, the first exposure time may be discarded and the preset exposure time used to shoot
- the N second images. This avoids an excessively long exposure time when the images are taken; because the exposure time used when the N images are taken is short, the possibility of the user's hand-held shake is significantly reduced, and even if hand-held shake occurs, the degree of blur it causes is also reduced, thereby effectively improving the quality of image capture.
- Moreover, the N second images are further processed to obtain the first specific image, which, relative to the first image that would have been captured according to the first exposure time,
- has a higher sharpness, avoiding blur or even ghosting in the captured photograph.
- FIG. 1 is a flowchart of an image processing method according to an embodiment of the present invention.
- FIG. 2 is a flowchart of a method for performing image registration on a second image according to an embodiment of the present invention
- FIG. 3 is a flowchart of a method for performing local motion compensation on a third image according to an embodiment of the present invention
- FIG. 5 is a structural block diagram of an image processing apparatus according to an embodiment of the present invention.
- FIG. 6 is a schematic structural diagram of a terminal according to an embodiment of the present invention.
- FIG. 7 is another schematic structural diagram of a terminal according to an embodiment of the present invention. detailed description
- An embodiment of the present invention provides an image processing method, including: determining, when capturing a first image, a first exposure time required for capturing the first image; determining whether the first exposure time is greater than a preset exposure time; The degree of blur of the image taken according to the preset exposure time is less than or equal to the preset blur degree; if the first exposure time is greater than the preset exposure time, respectively, N shots are taken according to the preset exposure time a second image, and the scene corresponding to the N second images is the same as the scene corresponding to the first image, and N is a positive integer; processing the N second images to obtain a first specific image; The degree of blur of the first specific image is less than the degree of blur of the first image taken according to the first exposure time.
- In this method, the exposure time required to capture the first image in the current environment is determined and referred to as the first exposure time, and after the determination it is compared with the preset exposure time. If the first exposure time is greater than the preset exposure time, indicating that the first exposure time is too long, the first exposure time may be discarded and the preset exposure time used to capture
- the N second images. This avoids an excessively long exposure time; because the exposure time used for capturing the N images is short, the possibility of the user's hand shaking is significantly reduced, and even if hand-held shake occurs, the degree of blur it causes is also reduced, thereby effectively improving the quality of image capture. Furthermore, the N second images are processed to obtain the first specific image, which has a higher sharpness than the first image that would have been captured according to the first exposure time, avoiding blur or even ghosting in the captured photograph.
- The terms "system" and "network" are used interchangeably herein.
- The term "and/or" herein merely describes an association relationship between associated objects, indicating that three relationships can exist; for example, A and/or B can mean: A alone, both A and B, or B alone.
- Unless otherwise specified, the character "/" herein generally indicates an "or" relationship between the associated objects.
- an embodiment of the present invention provides an image processing method, and a main flow of the method is described as follows.
- Step 101 When taking the first image, determine the first exposure time required to capture the first image.
- When an image is to be captured, the exposure time required for capturing the image may be determined; the image is referred to as the first image, and the exposure time required for capturing it is referred to as the first exposure time.
- The first exposure time required to capture the first image may be determined by an automatic exposure (Auto Exposure, AE) algorithm.
- Automatic exposure means that the image processing device automatically adjusts the exposure amount according to the intensity of the light, to prevent overexposure or underexposure. It can be understood that the first exposure time is determined by the automatic exposure algorithm according to the intensity of the light.
- Step 102 Determine whether the first exposure time is greater than a preset exposure time; wherein, the degree of blur of the image captured according to the preset exposure time is less than or equal to a preset blur degree.
- the preset exposure time can be estimated based on parameters such as the focal length of the lens (Lens) and the size of the sensor, regardless of factors such as the weight of the electronic device and the shooting level of the photographer.
- The preset exposure time may refer to the maximum exposure time without significant motion blur; that is, for a photo taken according to the preset exposure time, the corresponding blur degree is acceptable to the human eye.
- the photos taken according to the preset exposure time do not substantially affect normal viewing.
- the preset blur degree is the maximum degree of blur that the human eye can accept.
- Step 103 If the first exposure time is greater than the preset exposure time, N second images are respectively captured according to the preset exposure time, and the scene corresponding to the N second images is the same as the scene corresponding to the first image, where N is a positive integer.
- Specifically, N satisfies N > t/T, where t represents the first exposure time and T represents the preset exposure time.
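The patent only requires that N be a positive integer satisfying N > t/T. As a minimal illustration (the helper name and the choice of the smallest admissible N are assumptions, not part of the disclosure), the smallest such N can be computed as follows:

```python
import math

def frame_count(t, T):
    """Smallest positive integer N satisfying N > t / T,
    where t is the first exposure time and T is the preset exposure time."""
    return math.floor(t / T) + 1

# Example: t = 0.55 s and T = 0.1 s give N = 6
print(frame_count(0.55, 0.1))
```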
- Otherwise, hand shake may occur because the exposure time is long, which may result in a blurred image.
- Therefore, the method may include: when the first exposure time is greater than the preset exposure time, stopping the capture of the first image, and respectively capturing the N second images according to the preset exposure time.
- Generally, at this point the first image has not yet started to be photographed; the exposure time required to capture it is calculated first. Therefore, if the first exposure time is greater than the preset exposure time, capturing the first image can be abandoned, the first exposure time discarded, and the preset exposure time used directly.
- N images can be taken, N being a positive integer, where each image is referred to as a second image, that is, N second images can be taken.
- N is preferably greater than or equal to 2, that is, a plurality of second images can be taken, so that if a plurality of images are processed, the final image obtained is better.
- In this way, the exposure time required for shooting is not excessively long; because the preset exposure time is the maximum exposure time without significant motion blur, the degree of blurring of the captured images is low.
- If N is an integer not less than 2 and the electronic device used for shooting has only one image collection unit, the N second images can be obtained by shooting N times; if the electronic device used for shooting has two image collection units, the N second images can be obtained by shooting N/2 times. That is, the number of shots required is related to the number of image collection units that the electronic device has.
- the image collection unit may be, for example, a camera.
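As a small arithmetic illustration of the relationship between the number of second images and the number of image collection units (the helper is hypothetical and assumes that each shot exposes all image collection units of the device simultaneously):

```python
import math

def shooting_rounds(n_images, n_collection_units):
    """Number of shots needed to obtain n_images second images when each shot
    uses all n_collection_units image collection units at once (an assumption)."""
    return math.ceil(n_images / n_collection_units)

print(shooting_rounds(6, 1))  # one camera: 6 shots
print(shooting_rounds(6, 2))  # two cameras: 3 shots, i.e. N/2 as in the text
```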
- The scene corresponding to the N second images and the scene corresponding to the first image are the same scene; that is, if the first exposure time is greater than the preset exposure time, the first image cannot be captured normally, and the N
- second images may be captured of the same scene as the first image. If factors such as the different degrees of blur of different images are not considered, the N second images may be considered to be the same image.
- That the scenes corresponding to two images are the same may mean that the framing range is the same when the two images are captured, or that the focus is the same when the two images are taken; the present invention does not limit this.
- That the scenes corresponding to two images are the same can also be approximated as the subjects included in the two images being the same.
- The method may further include: if the first exposure time is less than or equal to the preset exposure time, capturing the first image according to the first exposure time.
- If the first exposure time is not greater than the preset exposure time, indicating that the exposure time required to capture the first image is within an acceptable range, the first image may be captured directly according to the first exposure time. In this case, the possibility of image blur caused by the shaking of the user's hand is not very large.
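The capture-side decision of Steps 101-103 and the branch above can be sketched as follows; this is only an illustration, and the `camera` object with `auto_exposure_time()` and `shoot()` methods is a hypothetical stand-in for the device's shooting module, not an API defined by the patent:

```python
import math

def capture_images(camera, preset_exposure):
    """Illustrative decision logic for Steps 101-103 (hypothetical camera API)."""
    t = camera.auto_exposure_time()                  # Step 101: AE estimate of the first exposure time
    if t <= preset_exposure:                         # Step 102: compare with the preset exposure time
        return [camera.shoot(t)]                     # blur acceptable: capture the first image directly
    n = math.floor(t / preset_exposure) + 1          # Step 103: smallest N with N > t / T
    return [camera.shoot(preset_exposure) for _ in range(n)]   # N second images
```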
- Step 104 Process the N second images to obtain a first specific image; wherein, the first specific image has a blur degree smaller than a blur degree of the first image captured according to the first exposure time.
- After the N second images are captured, they are further processed to obtain a final image, which may be referred to as the first specific image.
- Processing the N second images to obtain the first specific image may include: selecting one of the N second images
- as a reference image; respectively registering the remaining N-1 second images with the reference image to obtain N-1 third images; performing local motion compensation on the N-1 third images to obtain N-1 fourth images; and obtaining the first specific image according to the pixel values of each pixel in the reference image and the pixel values of each pixel in the N-1 fourth images.
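A minimal end-to-end sketch of this processing flow is shown below; `register_to`, `compensate_local_motion`, and `merge_by_summation` are hypothetical helper names whose possible implementations are sketched in the later code snippets, and taking the first captured frame as the reference is only one possible selection:

```python
def process_second_images(second_images):
    """Skeleton of Step 104 (illustrative only)."""
    reference = second_images[0]                                # select one second image as the reference
    third = [register_to(img, reference) for img in second_images[1:]]
    fourth = [compensate_local_motion(img, reference) for img in third]
    return merge_by_summation(reference, fourth)                # summed image, before brightness/chroma adjustment
```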
- Registering the remaining N-1 second images with the reference image respectively to obtain N-1 third images includes: determining, for each of the remaining N-1 second images, a transformation matrix between the second image and the reference image, and registering the second image with the reference image through the transformation matrix to obtain a third image corresponding to the second image; wherein the transformation matrix is used to indicate the relative motion relationship between the second image and the reference image.
- a second image is selected from the plurality of second images as the reference image.
- the present invention does not limit the specific selection method.
- The remaining N-1 second images (other than the reference image) in the N second images are registered with the reference image; after the registration is completed, the N-1 third images are obtained.
- Step 201 Determine a transformation matrix between the second image and the reference image.
- the relative motion relationship between the second image and the reference image is estimated.
- a feature point extraction and matching algorithm may be used to determine the transformation matrix between the two.
- For example, if the reference image includes 500 pixel points, each second image also includes the same 500 pixel points.
- Some feature points may be determined from all the pixel points; that is, all of the 500 pixel points, or a part of them, may be taken as feature points.
- For example, if 300 of the 500 pixel points are determined as feature points, then for each feature point, its position information in the second image is determined, and its corresponding position information in the reference image is determined; the position change information of each feature point can then be obtained, and according to the position change information of each feature point, the transformation matrix between the second image and the reference image can be determined.
- The transformation matrix can be a 3*3 square matrix.
- Step 202 Register the second image with the reference image by using the transformation matrix to obtain a corresponding third image.
- The second image is transformed according to the transformation matrix so as to be registered to the reference image, and a third image corresponding to the second image is obtained.
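One possible way to implement Steps 201-202 is sketched below with OpenCV, assuming 8-bit grayscale numpy arrays; ORB feature matching and a RANSAC-estimated homography are common choices, but the patent does not mandate any particular feature-extraction or matching algorithm:

```python
import cv2
import numpy as np

def register_to(second, reference):
    """Estimate a 3x3 transformation matrix from matched feature points and
    warp `second` onto `reference` (illustrative; inputs are 8-bit grayscale)."""
    orb = cv2.ORB_create(500)
    kp1, des1 = orb.detectAndCompute(second, None)
    kp2, des2 = orb.detectAndCompute(reference, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)
    src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)   # the 3x3 transformation matrix
    h, w = reference.shape[:2]
    return cv2.warpPerspective(second, H, (w, h))           # the third image
```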
- Local motion compensation is performed on the N-1 third images to obtain the N-1 fourth images, including:
- each of the N-1 third images is subjected to local motion compensation according to the following steps, and N-1 fourth images are obtained.
- Step 301 Determine, for the j-th pixel point included in the third image, whether the absolute value of the difference between the pixel value of the j-th pixel point and the pixel value of the j-th pixel point in the reference image is greater than or equal to a preset threshold.
- The reference image is represented by A, and A(x, y) represents the pixel value of the reference image at any pixel point (x, y); I(x, y) represents the pixel value, at the same pixel point (x, y), of any one of the N-1 third images. The following local motion compensation may then be applied:
- I'(x, y) = A(x, y), if |I(x, y) - A(x, y)| ≥ Th; otherwise I'(x, y) = I(x, y). (1)
- Th represents the preset threshold; Th may be tuned based on experience, or an adaptive threshold may be adopted, which is not limited by the present invention.
- I'(x, y) represents the pixel value of the corresponding fourth image among the N-1 fourth images at the same pixel point (x, y).
- The value of a pixel may be three-dimensional; for example, the value of pixel 1 may be represented as (a, b, c), where a represents the value of the Y channel of pixel 1, b represents the value of the U channel of pixel 1, and c represents the value of the V channel of pixel 1.
- In formula (1), the value that actually takes part in the comparison may be the value of the Y channel of the pixel; alternatively, the value of each pixel may be regarded as a vector,
- and the value that takes part in the comparison and calculation may be the norm of the vector corresponding to the pixel, and so on.
- Step 302 If the absolute value of the difference between the pixel value of the j-th pixel point and the pixel value of the j-th pixel point in the reference image is greater than or equal to the preset threshold, the pixel value of the j-th pixel point takes the pixel value of the j-th pixel point in the reference image; where j is any integer from 1 to M, and M is the total number of pixel points included in the reference image.
- If so, this indicates that the pixel point (x, y) is a pixel corresponding to a moving object; the pixel value of that point therefore
- directly takes the pixel value of the point in the reference image, so as to avoid possible blurring or ghosting. Otherwise, the pixel point (x, y) is a pixel corresponding to a non-moving object, and the pixel value of that point may retain the pixel value of the point in the third image, so as to preserve as much of the detail of the third image as possible.
- The method may further include: if the absolute value of the difference between the pixel value of the j-th pixel point and the pixel value of the j-th pixel point in the reference image is less than the preset threshold, keeping the pixel value of the j-th pixel point unchanged.
- the above local motion compensation operation may be performed on each pixel in each third image, where j may take an integer from 1 to M, respectively, for each pixel point.
- If the local motion compensation operation is performed for every pixel point, the motion blur phenomenon can be well avoided, and the obtained image is better.
- Alternatively, the local motion compensation operation may be performed on only a part of the pixels in each third image, with j taking only some of the integers from 1 to M; as to which integers j takes, they may, for example, be selected randomly or according to a specific rule (for example, specific values may be selected), and the present invention is not limited in this respect. If the local motion compensation operation is performed only for a part of the pixels, fewer steps are required, reducing the burden on the device.
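A sketch of formula (1) applied to every pixel is given below, assuming single-channel (for example Y-channel) numpy arrays of identical shape; the threshold value is an assumption and could instead be adaptive, as noted above:

```python
import numpy as np

def compensate_local_motion(third, reference, th=20.0):
    """Formula (1): where |I(x, y) - A(x, y)| >= Th the pixel is treated as
    belonging to a moving object and the reference pixel is taken; otherwise
    the third image's pixel is kept. `th` is an assumed, empirically tuned value."""
    i = third.astype(np.float32)
    a = reference.astype(np.float32)
    moving = np.abs(i - a) >= th
    return np.where(moving, a, i)   # the fourth image
```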
- the N-1 fourth images are obtained.
- The manner of obtaining the first specific image is described as follows:
- Step 401 Sum the pixel value of the i-th pixel point in the reference image with the pixel values of the i-th pixel point in the N-1 fourth images, where i is any integer from 1 to M, and M is the total number of pixel points included in the reference image.
- At this point, the image motion caused by hand shake and the motion of objects in the scene have already been handled by the registration and local motion compensation,
- so the pixel values of each pixel point in the N images can be directly summed pixel by pixel.
- Specifically, the summation may be written as I(x, y) = A(x, y) + Σ_{i=1}^{N-1} I'_i(x, y), (2)
- where I(x, y) represents the pixel value of the first specific image at any pixel point (x, y), A(x, y) represents the pixel value of the reference image at the same pixel point (x, y), and I'_i(x, y) represents the pixel value of the i-th fourth image among the N-1 fourth images at the same pixel point (x, y).
- The pixel points included in each image are the same.
- The above summation operation may be performed for each of the pixel points, with i taking every integer from 1 to M; that is, at most M summed pixel values are obtained. Because the first specific image thus obtained contains all the original pixel points, it restores the original image more accurately, and the effect is better.
- Alternatively, the above summation operation may be performed on only some of the pixel points, with i taking only some of the integers from 1 to M; as to which integers i takes, they may, for example, be selected randomly, or according to a specific rule,
- for example specific values may be selected, and the present invention is not limited in this respect. If only a part of the pixel points are selected for the summation operation, fewer steps are required and the workload of the device is reduced; and if the selected pixel points are feature points, the obtained image can still restore the subject of the original image.
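A sketch of the full per-pixel summation of formula (2) follows, assuming numpy arrays; the accumulation is kept in a wide integer type because the summed values can exceed the displayable range and are only brought back into range by the later brightness and chromaticity adjustment:

```python
import numpy as np

def merge_by_summation(reference, fourth_images):
    """Formula (2): per-pixel sum of the reference image and the N-1 fourth images."""
    total = reference.astype(np.int32)
    for img in fourth_images:
        total = total + img.astype(np.int32)
    return total   # summed pixel values (second specific image, before adjustment)
```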
- Step 402 Obtain the first specific image according to the obtained summation pixel value.
- For example, suppose the reference image includes a total of three pixel points, namely pixel point 1, pixel point 2, and pixel point 3, and each of the N-1 fourth images also includes the same three pixel points, that is, pixel point 1, pixel point 2, and pixel point 3.
- the value of the pixel point 2 in the reference image is 2, the value of the pixel point 2 in the first fourth image is 2, and the value of the pixel 2 in the second fourth image is 4, Then, the values of these pixel points 2 are added together, and the obtained pixel point 2 has a value of 8.
- the value of the pixel point 3 in the reference image is 1, the value of the pixel point 3 in the first fourth image is 2, and the value of the pixel 3 in the second fourth image is 1. Then, the values of these pixel points 3 are added together, and the obtained pixel point 3 has a value of 4.
- the first specific image can be obtained.
- the specific values herein are merely examples and do not represent true values.
- the examples herein are merely illustrative of the manner in which the first particular image is obtained.
- As before, the value that takes part in the specific calculation may be the value of the Y channel of the pixel, or the value of each pixel may be regarded as a vector and the calculation may use the norm of the vector corresponding to the pixel, and so on.
- After the summed image is obtained, it may be determined whether the pixel value of each pixel in it exceeds the maximum pixel value that the display device can display; if there is a pixel point whose pixel value exceeds the maximum pixel value that the display device can display, the brightness and chromaticity of the image can be adjusted to obtain the adjusted first specific image.
- In that case, the image before the adjustment may be referred to as the second specific image, and the first specific image is the image obtained after the second specific image is adjusted.
- If the pixel values of all pixels in the image obtained from the summed pixel values are within the range that the display device can display, then brightness and chromaticity adjustment need not be performed on the image.
- In that case, the image obtained from the summed pixel values may be directly referred to as the first specific image.
- If the image obtained according to the summed pixel values has a pixel whose pixel value is not within the range that the display device can display, then brightness and chromaticity adjustment of the image is required.
- An image obtained from the summed pixel values may be referred to as a second specific image, and an image obtained by performing luminance and chromaticity adjustment on the second specific image may be referred to as a first specific image.
- obtaining the first specific image according to the obtained summation pixel value may include: obtaining a second specific image according to the obtained summation pixel value; and brightness of the second specific image And the chromaticity is adjusted to obtain the first specific image.
- Adjusting the brightness and chromaticity of the second specific image to obtain the first specific image may include: adjusting the brightness of the second specific image according to a luminance histogram; and then adjusting the chromaticity of the second specific image according to the adjusted brightness, to obtain the first specific image.
- The luminance histogram graphically represents the number of pixels at each brightness level of the image, from which the distribution of pixel brightness in the image can be obtained.
- Because the summation raises the pixel values, the second specific image needs dynamic range compression to obtain the desired brightness distribution. For example, adaptive dynamic range compression can be performed based on the luminance histogram distribution of the image.
- If the second specific image is dark overall, the left region of the luminance histogram occupies a large proportion, and a convex curve may be used for compression to increase the brightness of the second specific image;
- if the second specific image is bright overall, a concave curve may be used for compression to reduce the brightness of the second specific image. If most of the pixels of the second specific image are concentrated in the middle region of the luminance histogram, the clarity of the image is generally poor, and an S-curve may be used for compression to improve the contrast of the second specific image.
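A minimal sketch of such curve-based compression on the summed Y channel is given below; how the curve (convex, concave, or S-shaped) is chosen from the luminance histogram is left open here, so the curve is simply a gamma parameter of this sketch:

```python
import numpy as np

def compress_luma(y_sum, n_frames, gamma):
    """Dynamic range compression of the summed Y channel.
    gamma < 1 gives a convex (brightening) curve, gamma > 1 a concave (darkening) one."""
    y_in = np.clip(y_sum.astype(np.float32) / (255.0 * n_frames), 0.0, 1.0)  # back to [0, 1]
    y_out = 255.0 * np.power(y_in, gamma)
    return y_out.astype(np.uint8)
```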
- the brightness of the second specific image may be adjusted first, and then the chromaticity of the second specific image may be adjusted according to the brightness adjustment result.
- the chrominance components (U, V) are processed accordingly to obtain a desired color saturation.
- u_out = (u_in - 128 × N) × (y_out / y_in) + 128 (3)
- v_out = (v_in - 128 × N) × (y_out / y_in) + 128 (4)
- where u_out represents the value of U in the first specific image, v_out represents the value of V in the first specific image, u_in represents the value of U in the second specific image, v_in represents the value of V in the second specific image, y_out represents the value of Y in the first specific image, y_in represents the value of Y in the second specific image, and the ratio y_out / y_in indicates the degree of compression of the luminance component.
- N can generally take a value of 5 or 6, so that the final image obtained has a better effect. If the value of N is small, the brightness of the first specific image obtained may not be very good in some cases; if the value of N is large, the amount of calculation is relatively large, the calculation time is relatively long, and more computing resources are occupied. Therefore, in general, setting N to 5 or 6 is a better choice. Of course, in practical applications, different values may be chosen for N according to requirements, and embodiments corresponding to various values of N are all within the protection scope of the present invention.
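A sketch of formulas (3) and (4) applied element-wise to summed U/V planes follows, assuming numpy arrays, with y_sum being the summed (pre-compression) Y plane and y_adjusted the compressed Y plane; the clipping to the 8-bit range is an implementation assumption:

```python
import numpy as np

def adjust_chroma(u_sum, v_sum, y_sum, y_adjusted, n_frames):
    """Formulas (3) and (4): remove the accumulated 128 offset of the summed
    U/V components, rescale by the luminance compression ratio y_out / y_in,
    and restore the 128 offset."""
    y_in = np.maximum(y_sum.astype(np.float32), 1.0)        # avoid division by zero
    ratio = y_adjusted.astype(np.float32) / y_in
    u_out = (u_sum.astype(np.float32) - 128.0 * n_frames) * ratio + 128.0
    v_out = (v_sum.astype(np.float32) - 128.0 * n_frames) * ratio + 128.0
    return (np.clip(u_out, 0, 255).astype(np.uint8),
            np.clip(v_out, 0, 255).astype(np.uint8))
```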
- An embodiment of the present invention provides a computer storage medium, where the computer storage medium stores a program, and the program includes the steps described in the foregoing method flowcharts.
- the computer storage medium includes, for example, a USB flash drive, a mobile hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk or an optical disk, and the like, which can store program codes. medium.
- an embodiment of the present invention provides an image processing apparatus, where the apparatus may include a determining module 501, a determining module 502, a photographing module 503, and a processing module 504.
- the determining module 501 is configured to determine, when the first image is captured, a first exposure time required to capture the first image
- the determining module 502 is configured to determine whether the first exposure time is greater than a preset exposure time, where the blur degree of an image captured according to the preset exposure time is less than or equal to a preset blur degree; the shooting module 503 is configured to, when the first exposure time is greater than the preset exposure time, respectively capture the N second images according to the preset exposure time, where the scene corresponding to the N second images is the same as the scene corresponding to the first image, and N is a positive integer;
- the processing module 504 is configured to process the N second images to obtain a first specific image, where the blur degree of the first specific image is smaller than the blur degree of the first image taken according to the first exposure time.
- Optionally, the processing module 504 is specifically configured to: select one second image from the N second images as a reference image; respectively register the remaining N-1 second images with the reference image to obtain N-1 third images; perform local motion compensation on the N-1 third images to obtain N-1 fourth images; and obtain the first specific image according to the pixel value of each pixel in the reference image and the pixel value of each pixel in the N-1 fourth images.
- the processing module 504 is specifically configured to register the remaining N-1 second images with the reference image to obtain the N-1 third images, specifically:
- for each of the remaining N-1 second images, determining a transformation matrix between the second image and the reference image, and registering the second image with the reference image through the transformation matrix to obtain a third image corresponding to the second image; wherein the transformation matrix is used to indicate the relative motion relationship between the second image and the reference image.
- the processing module 504 is specifically configured to perform local motion compensation on the N-1 third images to obtain the N-1 fourth images, specifically:
- for each of the N-1 third images, performing the local motion compensation according to the following steps to obtain the N-1 fourth images:
- determining whether the absolute value of the difference between the pixel value of the j-th pixel point and the pixel value of the j-th pixel point in the reference image is greater than or equal to a preset threshold, and if so, taking the pixel value of the j-th pixel point in the reference image as the pixel value of the j-th pixel point; where j is any integer from 1 to M, and M is the total number of pixel points included in the reference image.
- the processing module 504 is further configured to: after determining whether the absolute value of the difference between the pixel value of the j-th pixel point and the pixel value of the j-th pixel point in the reference image is greater than or equal to the preset threshold, keep the pixel value of the j-th pixel point unchanged if that absolute value is less than the preset threshold.
- the processing module 504 is specifically configured to obtain the first specific image according to the pixel value of each pixel in the reference image and the pixel value of each pixel in the N-1 fourth images, specifically:
- summing the pixel value of the i-th pixel point in the reference image and the pixel value of the i-th pixel point in each of the N-1 fourth images, where i is any integer from 1 to M, and M is the total number of pixel points included in the reference image;
- and obtaining the first specific image based on the obtained summed pixel values.
- the processing module 504 is specifically configured to obtain the first specific image according to the obtained summed pixel values, specifically: obtaining a second specific image according to the obtained summed pixel values; and adjusting the brightness and chromaticity of the second specific image to obtain the first specific image.
- the processing module 504 is specifically configured to adjust the brightness and chromaticity of the second specific image to obtain the first specific image, specifically: adjusting the brightness of the second specific image according to the luminance histogram; and adjusting the chromaticity of the second specific image according to the adjusted brightness of the second specific image to obtain the first specific image.
- the photographing module 503 is further configured to: after the determining module 502 determines whether the first exposure time is greater than a preset exposure time, if the first exposure time is less than or equal to the preset exposure Time, the first image is taken according to the first exposure time.
- the embodiment of the present invention provides a terminal, and the terminal and the device in FIG. 5 may be the same device.
- the terminal may include a processor 601, a memory 602, and an input device 603 connected to the same bus 600. Since all of them are connected to the bus 600, the memory 602 and the input device 603 are each connected to the processor 601.
- the memory 602 is used to store instructions required by the processor 601 to execute the program; the processor 601 is configured to read the instructions stored in the memory 602 to perform the following method: when capturing the first image, determining that the first image is required for capturing a first exposure time; determining whether the first exposure time is greater than a preset exposure time; wherein, the blur degree of the image captured according to the preset exposure time is less than or equal to a preset blur degree; if the first exposure time If the preset exposure time is greater than the preset exposure time, the N second images are respectively captured by the input device 603 according to the preset exposure time, and the scene corresponding to the N second images and the scene corresponding to the first image Similarly, N is a positive integer; processing the N second images to obtain a first specific image; wherein, the first specific image has a blur degree smaller than the first image captured according to the first exposure time The degree of blurring.
- the processor 601 is configured to process the N second images to obtain the first specific image, specifically: selecting one second image from the N second images as a reference image; respectively registering the remaining N-1 second images with the reference image to obtain N-1 third images; performing local motion compensation on the N-1 third images to obtain N-1 fourth images; and obtaining the first specific image according to the pixel values of each pixel in the reference image and the pixel values of each pixel in the N-1 fourth images.
- the processor 601 is specifically configured to register the remaining N-1 second images with the reference image to obtain the N-1 third images, specifically:
- for each of the remaining N-1 second images, determining a transformation matrix between the second image and the reference image, and registering the second image with the reference image through the transformation matrix to obtain a third image corresponding to the second image; wherein the transformation matrix is used to indicate the relative motion relationship between the second image and the reference image.
- the processor 601 is specifically configured to perform local motion compensation on the N-1 third images to obtain the N-1 fourth images, specifically:
- for each of the N-1 third images, performing the local motion compensation according to the following steps to obtain the N-1 fourth images:
- determining whether the absolute value of the difference between the pixel value of the j-th pixel point and the pixel value of the j-th pixel point in the reference image is greater than or equal to a preset threshold, and if so, taking the pixel value of the j-th pixel point in the reference image as the pixel value of the j-th pixel point; where j is any integer from 1 to M, and M is the total number of pixel points included in the reference image.
- the processor 601 is further configured to execute the instruction to: after determining whether the absolute value of the difference between the pixel value of the j-th pixel point and the pixel value of the j-th pixel point in the reference image is greater than or equal to the preset threshold, keep the pixel value of the j-th pixel point unchanged if that absolute value is less than the preset threshold.
- the processor 601 is specifically configured to obtain the first specific image according to the pixel value of each pixel in the reference image and the pixel value of each pixel in the N-1 fourth images, specifically: summing the pixel value of the i-th pixel point in the reference image and the pixel value of the i-th pixel point in the N-1 fourth images, where i is any integer from 1 to M, and M is the total number of pixel points included in the reference image; and obtaining the first specific image according to the obtained summed pixel values.
- the processor 601 is specifically configured to obtain the first specific image according to the obtained summed pixel values, specifically: obtaining a second specific image according to the obtained summed pixel values; and adjusting the brightness and chromaticity of the second specific image to obtain the first specific image.
- the processor 601 is specifically configured to adjust the brightness and chromaticity of the second specific image to obtain the first specific image, specifically: adjusting the brightness of the second specific image according to a luminance histogram; and adjusting the chromaticity of the second specific image according to the adjusted brightness of the second specific image to obtain the first specific image.
- the processor 601 is further configured to execute the instruction to, after determining whether the first exposure time is greater than the preset exposure time, capture the first image through the input device 603 according to the first exposure time if the first exposure time is less than or equal to the preset exposure time.
- the input device 603 may include a device having an image capturing function, such as a camera, a dual camera, and the like.
- FIG. 7 is another schematic structural diagram of a terminal provided in the embodiment of the present invention.
- The terminal in FIG. 7, the terminal in FIG. 6, and the image processing apparatus in FIG. 5 may be the same device; FIG. 7 is only a more detailed structural diagram of the terminal.
- the terminal includes: an input device 703, a processor 701, an output device 701, a random access memory 702, a read only memory 702, and a bus 700.
- the processor 701 is coupled to the input device 703, the output device 701, the random access memory 702, and the read only memory 702 via the bus 700, respectively.
- A basic input/output system stored in the read only memory directs the system to boot, guiding the image processing device into a normal operating state.
- The processor 601 in FIG. 6 and the processor 701 in FIG. 7 are the same component, the bus 600 in FIG. 6 and the bus 700 in FIG. 7 are the same component, the input device 603 in FIG. 6 and the input device 703 in FIG. 7
- are the same component, and the memory 602 in FIG. 6 and the random access memory 702 in FIG. 7 are the same component.
- the application and operating system are run in the random access memory 702.
- the input device 703 is used for image acquisition, wherein the input device 703 may include a camera, a dual camera, and the like having an image capture function.
- the output device 701 is for displaying a result image, wherein the output device 701 can include a touch screen, a display, a printer, and the like.
- In summary, the long-exposure photographing process is split into a plurality of short-exposure photographing processes, followed by registration and cumulative summation in a post-processing algorithm, and the result of the cumulative summation is dynamically mapped to the standard luminance output. This can reduce the blur caused by hand-held shake, mitigate the blur caused by the motion of objects in the scene, and achieve adaptive adjustment of image brightness, which is of great significance for improving the user's photographing experience.
- An embodiment of the present invention provides an image processing method, including: determining, when capturing a first image, a first exposure time required for capturing the first image; determining whether the first exposure time is greater than a preset exposure time; The degree of blur of the image taken according to the preset exposure time is less than or equal to the preset blur degree; if the first exposure time is greater than the preset exposure time, respectively, N shots are taken according to the preset exposure time a second image, and the scene corresponding to the N second images is the same as the scene corresponding to the first image, and N is a positive integer; processing the N second images to obtain a first specific image; The degree of blur of the first specific image is less than the degree of blur of the first image taken according to the first exposure time.
- In these embodiments, the exposure time required to capture the first image in the current environment is determined and referred to as the first exposure time, and after the determination it is compared with the preset exposure time. If the first exposure time is greater than the preset exposure time, indicating that the first exposure time is too long, the first exposure time may be discarded and the preset exposure time used to capture the N second images, thereby avoiding an excessively long exposure time when images are taken. Because the exposure time used when the N images are taken is short, the possibility of the user's hand shaking is significantly reduced; even with hand-held shake, the degree of blur it causes is reduced, which effectively improves the quality of image capture. Moreover, the N second images are further processed to obtain the first specific image, which has a higher sharpness than the first image that would have been captured according to the first exposure time, avoiding blur or even ghosting in the captured photograph.
- the disclosed systems, devices, and methods may be implemented in other ways.
- the device embodiments described above are merely illustrative.
- the division of the modules or units is only a logical function division.
- In actual implementation, there may be another division manner; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed.
- the coupling or direct coupling or communication connection between the various components shown or discussed may be an indirect coupling or communication connection through some interface, device or unit, and may be electrical, mechanical or otherwise.
- The components displayed as units may or may not be physical units, that is, they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solution of the embodiment.
- each functional unit in each embodiment of the present application may be integrated into one processing unit, or each unit may exist physically separately, or two or more units may be integrated into one unit.
- the above integrated unit can be implemented in the form of hardware or in the form of a software functional unit.
- the integrated unit, if implemented in the form of a software functional unit and sold or used as a standalone product, may be stored in a computer readable storage medium. Based on such an understanding, the technical solution of the present application in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a storage medium.
- the software product includes a plurality of instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to perform all or part of the steps of the methods described in the various embodiments of the present application.
- the foregoing storage medium includes any medium that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Signal Processing (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Studio Devices (AREA)
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP14900893.0A EP3179716B1 (en) | 2014-08-27 | 2014-08-27 | Image processing method, computer storage medium, device, and terminal |
US15/507,080 US10235745B2 (en) | 2014-08-27 | 2014-08-27 | Image processing method, computer storage medium, apparatus and terminal |
PCT/CN2014/085280 WO2016029380A1 (zh) | 2014-08-27 | 2014-08-27 | Image processing method, computer storage medium, apparatus and terminal |
CN201480046202.8A CN105556957B (zh) | 2014-08-27 | 2014-08-27 | Image processing method, computer storage medium, apparatus and terminal |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/CN2014/085280 WO2016029380A1 (zh) | 2014-08-27 | 2014-08-27 | Image processing method, computer storage medium, apparatus and terminal |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2016029380A1 true WO2016029380A1 (zh) | 2016-03-03 |
Family
ID=55398595
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2014/085280 WO2016029380A1 (zh) | 2014-08-27 | 2014-08-27 | 一种图像处理方法、计算机存储介质、装置及终端 |
Country Status (4)
Country | Link |
---|---|
US (1) | US10235745B2 (zh) |
EP (1) | EP3179716B1 (zh) |
CN (1) | CN105556957B (zh) |
WO (1) | WO2016029380A1 (zh) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108540728A (zh) * | 2017-03-06 | 2018-09-14 | 中兴通讯股份有限公司 | Long-exposure photographing method and apparatus |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
AU2018415738B2 (en) | 2018-03-27 | 2021-10-28 | Huawei Technologies Co., Ltd. | Photographing Mobile Terminal |
US11915446B2 (en) * | 2018-10-24 | 2024-02-27 | Siemens Healthineers Ag | Generating a medical result image |
JP2020184669A (ja) * | 2019-05-07 | 2020-11-12 | シャープ株式会社 | Image processing device, imaging device, image processing method, and program |
CN112637500B (zh) * | 2020-12-22 | 2023-04-18 | 维沃移动通信有限公司 | Image processing method and device |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101331754A (zh) * | 2005-10-14 | 2008-12-24 | 京瓷株式会社 | Imaging device and imaging method |
CN101345824A (zh) * | 2007-07-09 | 2009-01-14 | 三星电子株式会社 | Method and apparatus for compensating for hand shake of a camera |
CN101426091A (zh) * | 2007-11-02 | 2009-05-06 | 韩国科亚电子股份有限公司 | Apparatus and method for digital image stabilization using object tracking |
CN102340626A (zh) * | 2010-07-14 | 2012-02-01 | 株式会社尼康 | Imaging device and image composition method |
Family Cites Families (18)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7088865B2 (en) * | 1998-11-20 | 2006-08-08 | Nikon Corporation | Image processing apparatus having image selection function, and recording medium having image selection function program |
US6778210B1 (en) | 1999-07-15 | 2004-08-17 | Olympus Optical Co., Ltd. | Image pickup apparatus with blur compensation |
DE10120446C2 (de) * | 2001-04-26 | 2003-04-17 | Zeiss Carl | Projection exposure apparatus and method for compensating imaging errors in a projection exposure apparatus, in particular for microlithography |
US7158502B2 (en) * | 2001-06-14 | 2007-01-02 | Motorola, Inc. | Slot cycle assignment within a communication system |
US6847907B1 (en) * | 2002-12-31 | 2005-01-25 | Active Optical Networks, Inc. | Defect detection and repair of micro-electro-mechanical systems (MEMS) devices |
CN100481887C (zh) | 2003-06-17 | 2009-04-22 | 松下电器产业株式会社 | Information generation device, image pickup device, and image pickup method |
JP4378272B2 (ja) * | 2004-12-15 | 2009-12-02 | キヤノン株式会社 | Imaging apparatus |
US7773115B2 (en) * | 2004-12-15 | 2010-08-10 | Texas Instruments Incorporated | Method and system for deblurring digital camera images using reference image and motion estimation |
JP4357471B2 (ja) | 2005-09-30 | 2009-11-04 | 三洋電機株式会社 | Image acquisition device and program |
US8018999B2 (en) | 2005-12-05 | 2011-09-13 | Arcsoft, Inc. | Algorithm description on non-motion blur image generation project |
JP2007243774A (ja) * | 2006-03-10 | 2007-09-20 | Olympus Imaging Corp | Electronic blur correction device |
US7616826B2 (en) * | 2006-07-28 | 2009-11-10 | Massachusetts Institute Of Technology | Removing camera shake from a single photograph using statistics of a natural image |
US7602418B2 (en) * | 2006-10-11 | 2009-10-13 | Eastman Kodak Company | Digital image with reduced object motion blur |
US7924316B2 (en) * | 2007-03-14 | 2011-04-12 | Aptina Imaging Corporation | Image feature identification and motion compensation apparatus, systems, and methods |
JP4706936B2 (ja) | 2008-09-26 | 2011-06-22 | ソニー株式会社 | Imaging device, control method thereof, and program |
US8339475B2 (en) * | 2008-12-19 | 2012-12-25 | Qualcomm Incorporated | High dynamic range image combining |
JP5276529B2 (ja) * | 2009-06-18 | 2013-08-28 | キヤノン株式会社 | Image processing apparatus and method |
JP5567235B2 (ja) * | 2012-03-30 | 2014-08-06 | 富士フイルム株式会社 | Image processing device, imaging device, program, and image processing method |
2014
- 2014-08-27 CN CN201480046202.8A patent/CN105556957B/zh active Active
- 2014-08-27 EP EP14900893.0A patent/EP3179716B1/en active Active
- 2014-08-27 US US15/507,080 patent/US10235745B2/en active Active
- 2014-08-27 WO PCT/CN2014/085280 patent/WO2016029380A1/zh active Application Filing
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101331754A (zh) * | 2005-10-14 | 2008-12-24 | 京瓷株式会社 | Imaging device and imaging method |
CN101345824A (zh) * | 2007-07-09 | 2009-01-14 | 三星电子株式会社 | Method and apparatus for compensating for hand shake of a camera |
CN101426091A (zh) * | 2007-11-02 | 2009-05-06 | 韩国科亚电子股份有限公司 | Apparatus and method for digital image stabilization using object tracking |
CN102340626A (zh) * | 2010-07-14 | 2012-02-01 | 株式会社尼康 | Imaging device and image composition method |
Non-Patent Citations (1)
Title |
---|
See also references of EP3179716A4 * |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108540728A (zh) * | 2017-03-06 | 2018-09-14 | 中兴通讯股份有限公司 | Long-exposure photographing method and apparatus |
Also Published As
Publication number | Publication date |
---|---|
CN105556957B (zh) | 2018-06-26 |
CN105556957A (zh) | 2016-05-04 |
EP3179716A4 (en) | 2017-06-14 |
EP3179716B1 (en) | 2019-10-09 |
US10235745B2 (en) | 2019-03-19 |
EP3179716A1 (en) | 2017-06-14 |
US20170278229A1 (en) | 2017-09-28 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11558558B1 (en) | Frame-selective camera | |
CN111028189B (zh) | 2020-05-08 | Image processing method and apparatus, storage medium, and electronic device | |
CN111418201B (zh) | 2021-09-10 | Photographing method and device | |
CN108335279B (zh) | 2022-03-18 | Image fusion and HDR imaging | |
US9077913B2 (en) | Simulating high dynamic range imaging with virtual long-exposure images | |
CN104349066B (zh) | 2019-03-29 | Method and device for generating a high dynamic range image | |
TWI602152B (zh) | 2017-10-11 | Image capture device and image processing method thereof | |
US9131201B1 (en) | Color correcting virtual long exposures with true long exposures | |
US9117134B1 (en) | Image merging with blending | |
TWI530911B (zh) | 2016-04-21 | Dynamic exposure adjustment method and electronic device thereof | |
WO2015047877A1 (en) | Using a second camera to adjust settings of first camera | |
WO2014093042A1 (en) | Determining an image capture payload burst structure based on metering image capture sweep | |
CN106412458A (zh) | 2017-02-15 | Image processing method and device | |
CN107864340B (zh) | 2019-03-19 | Method for adjusting photographing parameters and photographing device | |
WO2016029380A1 (zh) | 2016-03-03 | Image processing method, computer storage medium, apparatus and terminal | |
CN105120247A (zh) | 2015-12-02 | White balance adjustment method and electronic device | |
CN107613190B (zh) | 2020-02-14 | Photographing method and terminal | |
KR20180019708A (ko) | 2018-02-26 | Photographing method and apparatus | |
CN105812670B (zh) | 2019-07-12 | Photographing method and terminal | |
CN105391940B (zh) | 2019-05-28 | Image recommendation method and device | |
CN107105172B (zh) | 2021-01-29 | Method and device for focusing | |
WO2014093048A1 (en) | Determining an image capture payload burst structure | |
CN106791451B (zh) | 2020-03-10 | Photographing method for an intelligent terminal | |
CN101472064A (zh) | 2009-07-01 | Photographing system and depth-of-field processing method thereof | |
JP6290038B2 (ja) | 2018-03-07 | Electronic device, method, and program |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
WWE | Wipo information: entry into national phase |
Ref document number: 201480046202.8 Country of ref document: CN |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 14900893 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
WWE | Wipo information: entry into national phase |
Ref document number: 15507080 Country of ref document: US |
|
REEP | Request for entry into the european phase |
Ref document number: 2014900893 Country of ref document: EP |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2014900893 Country of ref document: EP |