WO2015180659A1 - Image processing method and image processing apparatus - Google Patents

Image processing method and image processing apparatus

Info

Publication number
WO2015180659A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
depth
refocused
images
refocused images
Prior art date
Application number
PCT/CN2015/080021
Other languages
English (en)
French (fr)
Inventor
徐晶
Original Assignee
华为技术有限公司
Priority date
Filing date
Publication date
Application filed by 华为技术有限公司
Priority to JP2016559561A priority Critical patent/JP6736471B2/ja
Priority to EP15800270.9A priority patent/EP3101624B1/en
Priority to KR1020167025332A priority patent/KR101893047B1/ko
Publication of WO2015180659A1 publication Critical patent/WO2015180659A1/zh
Priority to US15/361,640 priority patent/US20170076430A1/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/50Depth or shape recovery
    • G06T7/55Depth or shape recovery from multiple images
    • G06T7/593Depth or shape recovery from multiple images from stereo images
    • G06T5/73
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/04842Selection of displayed objects or displayed text elements
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/04845Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range for image manipulation, e.g. dragging, rotation, expansion or change of colour
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G06F3/04883Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures for inputting data by handwriting, e.g. gesture or text
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/002D [Two Dimensional] image generation
    • G06T11/60Editing figures and text; Combining figures or text
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/50Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/181Segmentation; Edge detection involving edge growing; involving edge linking
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/63Control of cameras or camera modules by using electronic viewfinders
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/70Circuitry for compensating brightness variation in the scene
    • H04N23/72Combination of two or more compensation controls
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/76Television signal recording
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2200/00Indexing scheme for image data processing or generation, in general
    • G06T2200/21Indexing scheme for image data processing or generation, in general involving computational photography
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10052Images from lightfield camera
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20212Image combination
    • G06T2207/20221Image fusion; Image merging

Definitions

  • the present invention relates to the field of image processing technologies, and in particular, to an image processing method and an image processing apparatus.
  • In conventional photography, in order to highlight a certain subject, the camera is often focused on the depth plane where the subject is located, so that the subject is imaged clearly on the sensor while objects in other depth planes are imaged blurrily.
  • A refocusing technique is one in which the focal plane or the depth of field, that is, the range within which the imaging device can image clearly, can be reselected according to the needs of the user.
  • the light field camera uses refocusing technology.
  • the light field camera comprises a microlens array.
  • each microlens in the microlens array forms an image on the sensor, yielding an image array, and the image array can be processed with a refocusing algorithm to obtain a refocused image of a certain depth plane.
  • After the refocused image is formed, the user can obtain, as needed, a focused image of the scene in one depth plane at a time, so the user sees a clear image of the scene in only that depth plane, while the scenes in the other depth planes appear blurred.
  • Embodiments of the present invention provide an image processing method and an image processing apparatus capable of simultaneously obtaining clear images of scenes of a plurality of depth planes.
  • A first aspect provides an image processing method, including: determining depth information of multiple depth planes, where the depth information of the multiple depth planes is used to indicate the multiple depth planes, and the multiple depth planes respectively correspond to a plurality of refocused images, wherein the plurality of refocused images are generated from the raw data of the plurality of refocused images; and generating a multi-depth plane refocused image according to the depth information, wherein the multi-depth plane refocused image comprises the focused portions of the plurality of refocused images.
  • the multiple depth planes respectively correspond to the original data of the plurality of refocused images
  • the generating of the multi-depth plane refocused image according to the depth information comprises: determining the raw data of the plurality of refocused images according to the depth information of the multiple depth planes; and generating the multi-depth plane refocused image from the raw data of the plurality of refocused images, wherein the multi-depth plane refocused image includes the focused portions of the plurality of refocused images.
  • the generating of the multi-depth plane refocused image according to the raw data of the multiple refocused images includes: performing refocusing processing on the raw data of the multiple refocused images by using a refocusing algorithm to generate the plurality of refocused images; and combining the plurality of refocused images to generate the multi-depth plane refocused image.
  • combining the plurality of refocused images includes: combining the focused portions of the plurality of refocused images using an image fusion method.
  • the image fusion includes: determining a point spread function of the pixels of the plurality of refocused images; generating a fusion weight template according to the point spread function of the pixels, where the fusion weight template includes the fusion weights of the pixels, and the fusion weights of pixels with a high degree of focus are higher than the fusion weights of pixels with a low degree of focus; and performing image fusion on the plurality of refocused images according to the fusion weight template.
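  • As a rough illustration of this weighting scheme (not the patent's own implementation), the sketch below uses a local Laplacian-energy focus measure as a stand-in for the point-spread-function-based degree of focus, builds a per-pixel fusion weight template, and blends the refocused images; function and variable names are illustrative.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def focus_measure(img, win=9):
    """Per-pixel degree of focus: local energy of the Laplacian
    (a stand-in for the point-spread-function-based measure in the text)."""
    g = img.astype(np.float64)
    lap = (np.roll(g, 1, 0) + np.roll(g, -1, 0) +
           np.roll(g, 1, 1) + np.roll(g, -1, 1) - 4.0 * g)
    return uniform_filter(lap ** 2, size=win)

def fuse_refocused(images):
    """Build a fusion weight template from the focus measure and blend the
    refocused images so that better-focused pixels receive higher weight."""
    weights = np.stack([focus_measure(im) for im in images])
    weights /= weights.sum(axis=0, keepdims=True) + 1e-12
    stack = np.stack([im.astype(np.float64) for im in images])
    return (weights * stack).sum(axis=0)

# usage (grayscale arrays of identical shape):
# fused = fuse_refocused([refocus_plane_1, refocus_plane_2])
```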
  • each of the plurality of refocused images includes a focused portion and a non-focused portion, and combining the plurality of refocused images includes: selecting a focused portion and a non-focused portion from each of the plurality of refocused images; and stitching the focused portions and the non-focused portions together.
  • generating the multi-depth plane refocused image according to the depth information of the multiple depth planes includes: determining the raw data of the plurality of refocused images according to the depth information of the multiple depth planes; generating a focused region and a non-focused region of each of the plurality of refocused images based on the raw data of the plurality of refocused images; and stitching the focused portions and the non-focused portions.
  • the method of the first aspect further includes: generating refocused images corresponding to all depth planes according to the raw data of the refocused images of all depth planes, where all depth planes respectively correspond to the refocused images of all depth planes. Generating the multi-depth plane refocused image according to the depth information then comprises: selecting the refocused images of the plurality of depth planes from the refocused images of all depth planes according to the depth information of the plurality of depth planes; and generating the multi-depth plane refocused image according to the refocused images of the plurality of depth planes, wherein the multi-depth plane refocused image comprises the focused portions of the plurality of refocused images.
  • generating the multi-depth plane refocused image according to the refocused images of the multiple depth planes includes: determining a point spread function of the pixels of all the refocused images; generating a fusion weight template according to the point spread function of the pixels of all the refocused images, wherein the fusion weight template includes the fusion weights of the pixels of all the refocused images, the fusion weights of the pixels of the plurality of refocused images are higher than the fusion weights of the pixels of the other refocused images, and, within the plurality of refocused images, the fusion weights of pixels with a high degree of focus are higher than the fusion weights of pixels with a lower degree of focus; and performing image fusion on all the refocused images according to the fusion weight template.
  • generating the multi-depth plane refocused image according to the refocused images of the plurality of depth planes includes: selecting a focused portion and a non-focused portion from each of the plurality of refocused images; and stitching the focused portions and the non-focused portions together.
  • the method of the first aspect further includes: querying, from a lookup table, the focused portions and the non-focused portions of the refocused images of the plurality of depth planes by using the depth of the depth plane or pixel coordinates as an index, wherein the focused portions and the non-focused portions are stored in the lookup table indexed by the depth of the depth plane or by pixel coordinates. Generating the multi-depth plane refocused image according to the refocused images of the plurality of depth planes then includes: stitching the focused portions and the non-focused portions of the refocused images of the plurality of depth planes.
  • stitching the focused portion and the non-focused portion includes: performing preprocessing, image registration and image fusion on the image of the focused portion and the image of the non-focused portion.
  • the method of the first aspect further includes: displaying one of the plurality of refocused images; acquiring multiple user inputs on multiple regions of the displayed refocused image, wherein the multiple user inputs correspond to the multiple depth planes; and outputting the multi-depth plane refocused image on a display device. The determining of the depth information of the multiple depth planes then includes: determining the depth information of the multiple depth planes according to the multiple user inputs.
  • the user input is one of the following: a user's single-click input, multi-click input, single-point sliding input or multi-point sliding input on a touchscreen; a user gesture detected by an attitude sensor in an input device; or a user motion detected by a motion tracking module in an input device.
  • determining the depth information of the multiple depth planes includes: determining, according to a predefined input, the multiple depth planes corresponding to that input, and the method further comprises: outputting, on the display device, the multi-depth plane refocused image corresponding to the predefined input.
  • a second aspect provides an image processing apparatus, including: a determining module, configured to determine depth information of a plurality of depth planes, wherein depth information of the plurality of depth planes is used to indicate a plurality of depth planes, and the plurality of depth planes respectively correspond to a plurality of refocused images of the plurality of depth planes, wherein the plurality of refocused images are generated from raw data of the plurality of refocused images; and a generating module configured to generate a multi-depth planar refocused image according to the depth information, Wherein the multi-depth plane refocusing image comprises a focused portion of the plurality of refocused images.
  • the multiple depth planes respectively correspond to the original data of the plurality of refocused images
  • the generating module determines the raw data of the plurality of refocused images according to the depth information of the plurality of depth planes, and generates the multi-depth plane refocused image from the raw data of the plurality of refocused images, wherein the multi-depth plane refocused image includes the focused portions of the plurality of refocused images.
  • the generating module refocuses the raw data of the multiple refocused images by using a refocusing algorithm to generate the multiple refocused images, and combines the multiple refocused images to generate the multi-depth plane refocused image.
  • the generating module combines the focused portions of the plurality of refocused images by using an image fusion method.
  • the generating module determines a point spread function of the pixels of the plurality of refocused images, generates a fusion weight template according to the point spread function of the pixels, and performs image fusion on the plurality of refocused images according to the fusion weight template, wherein the fusion weight template includes the fusion weights of the pixels, and the fusion weights of pixels with a high degree of focus are higher than the fusion weights of pixels with a low degree of focus.
  • the generating module selects a focused portion and a non-focused portion from each of the plurality of refocused images, and stitches the focused portions and the non-focused portions.
  • the generating module determines, according to the depth information of the multiple depth planes, the raw data of the multiple refocused images, generates a focused region and a non-focused region of each of the plurality of refocused images according to that raw data, and stitches the focused portions and the non-focused portions.
  • the generating module generates refocused images corresponding to all depth planes according to the raw data of the refocused images of all depth planes, where all depth planes respectively correspond to the refocused images of all depth planes, selects, according to the depth information of the plurality of depth planes, the refocused images of the plurality of depth planes from the refocused images of all depth planes, and generates the multi-depth plane refocused image according to the refocused images of the plurality of depth planes, wherein the multi-depth plane refocused image includes the focused portions of the plurality of refocused images.
  • the generating module determines a point spread function of the pixels of all the refocused images, generates a fusion weight template according to the point spread function of the pixels of all the refocused images, and performs image fusion on all the refocused images according to the fusion weight template, wherein the fusion weight template includes the fusion weights of the pixels of all the refocused images, the fusion weights of the pixels of the plurality of refocused images are higher than the fusion weights of the pixels of the other refocused images, and, within the plurality of refocused images, the fusion weights of pixels with a high degree of focus are higher than the fusion weights of pixels with a lower degree of focus.
  • the generating module selects a focused portion and a non-focused portion from each of the plurality of refocused images, and stitches the focused portions and the non-focused portions.
  • the image processing apparatus of the second aspect further includes: a query module, configured to query, from a lookup table and using the depth of the depth plane or pixel coordinates as an index, the focused portions and the non-focused portions of the refocused images of the plurality of depth planes, wherein the focused portions and the non-focused portions are stored in the lookup table indexed by the depth of the depth plane or by pixel coordinates, and wherein the generating module stitches the focused portions and the non-focused portions of the refocused images of the plurality of depth planes.
  • when stitching, the generating module performs preprocessing, image registration and image fusion on the image of the focused portion and the image of the non-focused portion.
  • the image processing apparatus of the second aspect further includes: a display module, configured to display one of the plurality of refocused images; and an acquisition module, configured to acquire a plurality of user inputs on a plurality of regions of the displayed refocused image, wherein the plurality of user inputs correspond to the plurality of depth planes, the display module outputs the generated multi-depth plane refocused image on the display device, and the determining module determines the depth information of the plurality of depth planes according to the plurality of user inputs.
  • the user input is one of the following: a user's single-click input, multi-click input, single-point sliding input or multi-point sliding input on a touchscreen; a user gesture detected by an attitude sensor in an input device; or a user motion detected by a motion tracking module in an input device.
  • the determining module determines, according to the predefined input, a plurality of depth planes corresponding to the predefined input
  • the image processing apparatus further includes: A display module for outputting a multi-depth plane refocus image corresponding to the predefined input on the display device.
  • a multi-depth plane refocused image can be generated according to the depth information; since the multi-depth plane refocused image includes the focused portions of the refocused images of the plurality of depth planes, those focused portions can be displayed simultaneously, and thus clear images of the scenes of the plurality of depth planes can be obtained at the same time.
  • FIG. 1 is a schematic flow chart of an image processing method in accordance with one embodiment of the present invention.
  • FIG. 2 is a schematic flow chart of an image processing procedure according to another embodiment of the present invention.
  • Figure 3 shows a schematic diagram of the correspondence between the depth plane and the user input.
  • FIG. 4 is a schematic illustration of two planes of a two-plane representation in accordance with an embodiment of the present invention.
  • Figure 5 is a schematic illustration of the principle of synthetic image photography in accordance with an embodiment of the present invention.
  • FIG. 6 is a schematic diagram of a geometric relationship of a model of a synthetic image photography principle, in accordance with an embodiment of the present invention.
  • FIG. 7 is a schematic flow chart of an image processing procedure according to still another embodiment of the present invention.
  • FIG. 8 is a schematic flow chart of an image processing procedure according to another embodiment of the present invention.
  • FIG. 9 is a schematic flow chart of an image processing procedure according to still another embodiment of the present invention.
  • Figure 10 is a block diagram showing the structure of an image processing apparatus according to an embodiment of the present invention.
  • Figure 11 is a block diagram showing the structure of an image processing apparatus according to another embodiment of the present invention.
  • the depth may refer to the distance of the scene to the camera.
  • the plurality of depth planes may be continuous in depth or may be discontinuous in depth, which is not limited by the embodiments of the present invention.
  • It should be understood that each depth plane may correspond to one focal plane or one depth of field. It should be noted that "multiple" as used herein means two or more.
  • Embodiments of the present invention may be applied to cameras, as well as to other user terminals (e.g., cell phones or computers) for processing raw data of a refocused image to generate a multi-depth planar refocused image.
  • FIG. 1 is a schematic flow chart of an image processing method in accordance with one embodiment of the present invention.
  • the method of Figure 1 can be performed by an image processing device.
  • the method of Figure 1 includes the following.
  • the depth information may include at least one of depth plane information, pixel coordinate information, pixel color information, pixel point spread function information, light field information of the rays corresponding to pixels, and trace information of the rays corresponding to pixels, or any combination thereof.
  • Embodiments of the present invention do not limit how to determine depth information of a plurality of depth planes, and the depth information may be input from a user or predefined.
  • the image processing apparatus can obtain the depth plane corresponding to a refocusing area according to the user's input or a predefined input. For example, the depth plane corresponding to the area that needs to be refocused is determined based on real-time input or selection by the user on the user interface. When the user selects multiple regions on the user interface, the multiple regions correspond to depth planes, and the image processing apparatus can thereby know the depth planes corresponding to the regions that need to be refocused.
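  • A minimal sketch of this region-to-depth-plane mapping, assuming a per-pixel depth map has already been extracted from the raw data (as discussed later in this description); the quantization into a fixed number of depth planes and all names are illustrative.

```python
import numpy as np

def depth_planes_from_taps(depth_map, taps, num_planes=16):
    """Map user-tapped pixel coordinates (x, y) to the indices of the depth
    planes that need to be refocused, by quantizing a per-pixel depth map."""
    d_min, d_max = float(depth_map.min()), float(depth_map.max())
    plane_index = np.clip(
        ((depth_map - d_min) / (d_max - d_min + 1e-12) * num_planes).astype(int),
        0, num_planes - 1)
    return sorted({int(plane_index[y, x]) for (x, y) in taps})

# usage: planes = depth_planes_from_taps(depth_map, taps=[(120, 80), (300, 210)])
```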
  • the embodiment according to the present invention is not limited thereto, and the image processing apparatus may also determine depth information of a plurality of depth planes based on pre-defined information.
  • the multi-depth plane refocus image comprises a focus portion of the plurality of refocused images.
  • a portion of the image is focused, ie, the portion of the image is sharp, while other portions of the image are unfocused, ie, other portions of the image are blurred.
  • Determining depth information for multiple depth planes refers to determining which depth plane images need to be refocused.
  • the process of generating the multi-depth plane refocus image may be to combine the focus portions corresponding to the plurality of depth planes.
  • a plurality of depth plane corresponding focusing portions and remaining non-focusing portions may be combined, so that a complete image can be presented.
  • a multi-depth plane refocused image may be generated from the depth information; since the multi-depth plane refocused image includes the focused portions of the refocused images of a plurality of depth planes, those focused portions may be displayed simultaneously, so it is possible to obtain clear images of the scenes of multiple depth planes at the same time.
  • the image processing apparatus may determine the raw data of the plurality of refocused images according to the depth information of the plurality of depth planes, and generate a multi-depth plane refocused image according to the raw data of the plurality of refocused images, wherein the multi-depth plane refocused image includes the focused portions of the plurality of refocused images.
  • the raw data of the refocused image corresponding to a depth plane may be determined according to the depth information input by the user, and the depth plane that needs to be refocused and the raw data of the refocused image corresponding to that depth plane may also be determined according to a predefined setting.
  • the method of obtaining the refocus depth plane information according to the user input or the predefined input may vary depending on the type of the original data.
  • the image processing apparatus may generate a focused portion from the original data of the refocused image, or may generate an unfocused portion from the original data of the refocused image, or may simultaneously generate the focused portion and the unfocused portion according to the original data of the refocused image.
  • the raw data may be captured by a camera module (eg, a light field camera) with a microlens array.
  • Each depth plane has raw data of a respective refocused image for generating a refocused image corresponding to the depth plane as needed.
  • a multi-depth plane refocused image may be generated according to the raw data of the plurality of refocused images corresponding to the plurality of depth planes; since the multi-depth plane refocused image includes the focused portions of the refocused images of multiple depth planes, those focused portions can be displayed simultaneously, thereby enabling sharp images of the scenes of the plurality of depth planes to be obtained at the same time.
  • Because the multi-depth plane refocused image corresponding to the plurality of depth planes described above is generated from the raw data according to the depth information, it is not necessary to generate the refocused images of all depth planes, and thus a large amount of storage space is saved.
  • the raw data of the above-described refocused image may be any data used to generate a refocused image, including but not limited to the following various image data or a combination of various image data below.
  • the above raw data may be an image taken by one or more ordinary camera modules.
  • the above multiple images refer to images taken on the same scene under different shooting parameter settings, such as different focal lengths, different apertures, different exposure amounts, different sensing wavelengths, etc.; or images taken at the same scene at different positions.
  • the raw data may also be one or more aperture-encoded images taken by a camera module with an aperture mask, a phase mask, or other type of mask.
  • the above original data may also be an image processed by the aperture coded image through various image processing algorithms.
  • the above raw data may also be one or more image arrays taken by a camera module with a microlens array or an aperture array.
  • the original data may also be an image processed by the image array through various image processing algorithms, such as a single depth plane refocus image, an all-focus image, a virtual pinhole image, and the like.
  • the original data may also be one or more image arrays captured by a plurality of camera modules composed of the same or different camera modules. Further, the original data may also be an image processed by the image array through various image processing algorithms.
  • the foregoing original data may also be a combination of depth information and images respectively obtained by the depth sensing device and the camera module for the same scene.
  • the depth sensing device includes various devices realized by using the time-of-flight principle of light, the phase difference of light, or structured light illumination.
  • the image processing apparatus may refocus the original data of the plurality of refocused images using a refocusing algorithm, generate a plurality of refocused images, and combine the plurality of refocused images to generate a multi-depth plane refocused image.
  • the refocusing algorithm may be a convolution algorithm, a deconvolution algorithm, a fusion algorithm, a stitching algorithm, a ray tracing algorithm, a filtering algorithm or other single depth plane refocusing algorithm or a combination of these algorithms.
  • an image fusion method is used to merge the focused portions of the multiple refocused images.
  • the image fusion method is an image analysis method, and the image processing device can combine two or more images into one image by using an image fusion method. Since the information between multiple images of the same scene has redundancy and complementarity, the composite image obtained by the image fusion method can depict the image more comprehensively and accurately.
  • Image fusion methods include grayscale-based algorithms (e.g., direct averaging, weighted averaging, median filtering, multi-resolution spline techniques), fusion algorithms based on regions of interest, fusion algorithms based on color space transformation, and fusion-domain-based fusion algorithms.
  • when combining the focused portions of the plurality of refocused images by using the image fusion method, the image processing apparatus may determine a point spread function of the pixels of the plurality of refocused images, generate a fusion weight template according to the point spread function of the pixels, and perform image fusion on the plurality of refocused images according to the fusion weight template, wherein the fusion weight template includes the fusion weights of the pixels, and the fusion weights of pixels with a high degree of focus are higher than the fusion weights of pixels with a low degree of focus.
  • the degree of focus of the pixel is proportional to the blending weight.
  • the degree of focus is also called the degree of sharpness, and the degree of focus can be measured according to the point spread function.
  • a focus evaluation function can also be used to evaluate the degree of focus.
  • each of the plurality of refocused images includes a focused portion and a non-focused portion.
  • A focused portion and a non-focused portion may be selected from each of the plurality of refocused images, and the focused portions and the non-focused portions are stitched.
  • optionally, one image may be selected, and the focused portions of the plurality of refocused images corresponding to the plurality of depth planes are fused with that image, so that the transition between the focused portions and the non-focused portions is more natural.
  • a focused portion and a non-focused portion of each of the plurality of refocused images are generated from the original data of the plurality of refocused images, and the focused portion and the unfocused portion are stitched.
  • the focused portion and the unfocused portion are directly generated from the original data.
  • the steps of intercepting the focused portion and the unfocused portion can be omitted, simplifying the image processing.
  • the image processing apparatus may further generate refocused images corresponding to all depth planes according to the raw data of the refocused images of all depth planes, select the refocused images of the plurality of depth planes from the refocused images of all depth planes according to the depth information of the multiple depth planes, and generate the multi-depth plane refocused image according to the refocused images of the plurality of depth planes, wherein all depth planes respectively correspond to the refocused images of all depth planes, and the multi-depth plane refocused image comprises the focused portions of the plurality of refocused images.
  • In this way, the refocused images of all depth planes are generated in advance, so that the corresponding refocused images can be selected from the pre-generated refocused images once the depth information has been determined, thereby shortening the time needed to generate the multi-depth plane refocused image and enhancing the user experience.
  • the image processing apparatus may determine a point spread function of the pixels of all the refocused images; generate a fusion weight template according to the point spread function of the pixels of all the refocused images, wherein the fusion weight template includes the fusion weights of the pixels of all the refocused images, the fusion weights of the pixels of the plurality of refocused images are higher than the fusion weights of the pixels of the other refocused images, and, within the plurality of refocused images, the fusion weights of pixels with a high degree of focus are higher than the fusion weights of pixels with a lower degree of focus; and perform image fusion on all the refocused images according to the fusion weight template.
  • the blending weights of the plurality of refocused images determined by the depth information are higher than the weights of the other refocused images, and for a plurality of refocused images, the degree of focusing of the pixels is proportional to the blending weight.
  • the image processing apparatus may select a focused portion and a non-focused portion from each of the plurality of refocused images, and stitch the focused portion and the unfocused portion.
  • the method of FIG. 1 further includes: selecting a focused portion and a non-focused portion from each of the refocused images; storing the focused portions and the non-focused portions in a lookup table indexed by the depth of the depth plane or by pixel coordinates; and querying the focused portions and the non-focused portions of the refocused images of the plurality of depth planes from the lookup table with the depth of the depth plane or pixel coordinates as the index. Generating the multi-depth plane refocused image according to the refocused images of the plurality of depth planes then comprises: stitching the focused portions and the non-focused portions of the refocused images of the plurality of depth planes.
  • the depth of the depth plane is the distance from each depth plane to a reference plane (eg, a camera), and each depth plane may correspond to one depth distance.
  • the portion of the image corresponding to each depth plane may include a plurality of pixels, and the coordinates of each pixel may correspond to one depth plane, and each depth plane may correspond to a plurality of pixel coordinates.
  • Before the focused and non-focused portions are stitched, the correspondence between the depth of the depth plane or the pixel coordinates and the focused portion, and/or the correspondence between the depth of the depth plane or the pixel coordinates and the non-focused portion, can be established, and these correspondences are stored in the lookup table.
  • the embodiment of the present invention does not limit the timing of establishing the correspondence, and may be, for example, during the completion of the photographing process or after the completion of the photographing process, or at any time before the stitching of the focus portion and the non-focus portion.
  • the correspondences between the focused and non-focused portions of all depth planes and the depth of the depth plane or the pixel coordinates may be established in advance, so that when a user input is received on the user interface, the depth of the depth plane or the pixel coordinates may first be determined according to the user input, the focused portions and non-focused portions are then obtained from the lookup table according to the depth of the depth plane or the pixel coordinates, and they are stitched into the multi-depth plane refocused image, thereby improving the user experience.
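  • A loose illustration of such a lookup table (the keys and the patch representation are assumptions, not the patent's data format): the in-focus and out-of-focus parts of each pre-generated refocused image are stored keyed by the depth of their depth plane, then retrieved and spliced when the user selects regions.

```python
# Arrays are assumed to be numpy images; boolean masks mark the in-focus pixels.
focused_lut = {}    # depth of depth plane -> (refocused image, in-focus mask)
unfocused_lut = {}  # depth of depth plane -> (refocused image, out-of-focus mask)

def register_plane(depth, refocused, in_focus_mask):
    """Fill both lookup tables for one depth plane."""
    focused_lut[depth] = (refocused, in_focus_mask)
    unfocused_lut[depth] = (refocused, ~in_focus_mask)

def splice(selected_depths, background):
    """Paste the stored in-focus patches of the selected depth planes over a
    background image (for example, any single refocused image)."""
    out = background.copy()
    for d in selected_depths:
        img, mask = focused_lut[d]
        out[mask] = img[mask]
    return out
```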
  • the image of the focused portion and the image of the unfocused portion may be subjected to pre-processing, image registration, and image fusion.
  • the method of FIG. 1 further includes: displaying one of the plurality of refocused images, acquiring a plurality of user inputs on the plurality of regions of the displayed refocused image, and A multi-depth plane refocus image is generated on the display device, wherein the plurality of user inputs correspond to the plurality of depth planes, and when the depth information is determined, the depth information of the plurality of depth planes may be determined according to the plurality of user inputs.
  • a refocus image of a single depth plane can be displayed on the user interface prior to displaying the multi-depth plane refocusing image.
  • the image processing apparatus may refocus objects on a plurality of different depth planes, or objects in a plurality of discontinuous depth planes, according to the user's needs, and output the multi-depth plane refocused image on the display device, or output the refocused images of the different depth planes.
  • Embodiments of the present invention may display only the generated multi-depth plane refocused image, or may additionally display the original image and/or the refocused images of the different depth planes generated along the way.
  • the multi-depth plane refocused image may be displayed on the user interface immediately after it is generated.
  • the user input is one of: a single-click input, a multi-click input, a single-point sliding input or a multi-point sliding input by the user on the touchscreen; a user gesture detected by an attitude sensor in the input device; or a user action detected by a motion tracking module in the input device.
  • a plurality of depth planes corresponding to a predefined input may be determined according to that predefined input.
  • the method of FIG. 1 further includes outputting a multi-depth plane refocus image corresponding to the predefined input on the display device.
  • FIG. 2 is a schematic flow chart of an image processing procedure according to another embodiment of the present invention.
  • the embodiment of Fig. 2 is an example of the image processing method of Fig. 1.
  • FIG. 3 shows a schematic diagram of the correspondence between the depth plane and the area input by the user.
  • an image may be displayed on a user interface of a user device (e.g., a camera); this may be a normal picture that shows scenes at different depth planes but displays only the scene of a certain depth plane clearly.
  • the user device can store raw data of the refocused image of different depth planes.
  • the area or position of each scene on the picture corresponds to the depth plane in which the scene is located.
  • the user's input can be received in multiple regions or locations.
  • the above user input may be a discontinuous instruction input by the user through the input device, for example, clicking a mouse, double clicking the mouse, pressing a button, and tapping the stylus on the touch screen.
  • the above user input may also be a continuous instruction input by the user through the input device, for example, simply moving the mouse and recording its position, thereby implementing a continuous click action.
  • the input device of the embodiments of the present invention may be a mouse, a touchpad, a multi-finger touchscreen, a stylus used on a tablet or a screen, an eye tracking device, a joystick, a four-way button navigation control, a pressure-sensitive direction navigation control, a slider, a dial, a circular touchpad or an infrared sensing device.
  • a plurality of depth planes may constitute one depth interval, and therefore, a plurality of depth planes may also be selected by selecting a plurality of depth intervals.
  • the area 1 and the area 2 on the image displayed by the user interface correspond to the depth section 1 and the depth section 2.
  • the user can select the depth plane corresponding to the two depth intervals by selecting the regions corresponding to the two depth intervals.
  • the user may select or click on more regions or locations on the image in order to obtain more refocus images of the depth plane.
  • the depth information is used to indicate information of a plurality of depth planes that need to be refocused.
  • the depth information may include one of depth plane information, pixel coordinate information, pixel color information, pixel point spread function information, light field information of the light corresponding to a pixel, and trace information of the light corresponding to a pixel, or any combination thereof.
  • the raw data of the refocused image may be one or more image arrays taken by a camera module with a microlens array or an aperture array; or one or more image arrays taken by a camera array composed of the same or different camera modules; or multiple images taken of the same scene by different camera modules at different positions. Since the disparity information in the image array contains the depth information of the scene, the depth information of a pixel can be obtained by a block matching method, a graph cuts method, a multi-baseline method, and the like, thereby obtaining the desired depth plane information. The depth information of the object point corresponding to each pixel (for example, the correspondence between depth and pixel coordinates) can be obtained by such a depth extraction method.
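  • As a toy illustration of the block matching option mentioned above (graph cuts and multi-baseline methods are alternatives listed in the text), the sketch below estimates per-pixel disparity between two rectified views by minimizing the sum of absolute differences over a search range; it is a slow reference version with illustrative names.

```python
import numpy as np

def block_match_disparity(left, right, max_disp=32, block=7):
    """Per-pixel horizontal disparity between two rectified views via SAD
    block matching; disparity is inversely related to scene depth."""
    h, w = left.shape
    half = block // 2
    L, R = left.astype(np.float64), right.astype(np.float64)
    disp = np.zeros((h, w), dtype=np.int32)
    for y in range(half, h - half):
        for x in range(half, w - half):
            patch = L[y - half:y + half + 1, x - half:x + half + 1]
            best, best_cost = 0, np.inf
            for d in range(0, min(max_disp, x - half) + 1):
                cand = R[y - half:y + half + 1, x - d - half:x - d + half + 1]
                cost = np.abs(patch - cand).sum()
                if cost < best_cost:
                    best, best_cost = d, cost
            disp[y, x] = best
    return disp
```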
  • the user input can indicate the coordinates of the selected pixel, and the depth plane information selected by the user can be obtained by combining the pixel coordinates selected by the user.
  • the raw data of the refocused image may be a plurality of images taken by a single normal camera module.
  • multiple images refer to images taken on the same scene under different parameter settings, such as different focal lengths of the lens, different apertures, different exposure amounts, different lens and sensor distances, different lens distances, different lens curvatures, different sensing wavelengths, etc. .
  • the required depth plane information can be obtained by using the information.
  • the user inputs the indication pixel coordinate information. Different degrees of focus of the pixels correspond to different depth information, so that the depth plane information selected by the user can be obtained according to the user input.
  • the raw data of the refocused image may be one or more aperture encoded images taken by a camera module with an aperture mask, phase mask or other type of mask. Since the encoded images are different when the objects are in different depth planes, the required depth plane information can be obtained by using this information. For example, the user inputs the pixel coordinate information, and the encoded images generated by the pixels of different depths are different, and the depth information of the pixels can be deduced from the features of the encoded image.
  • the raw data of the refocused image may be a combination of depth information and images respectively obtained by the depth sensing device and the camera module for the same scene.
  • the depth sensing device may be any of various devices realized by using the time-of-flight principle of light, the phase difference of light, structured light illumination, and the like. Therefore, the depth map information provided by the depth sensing device can be used to obtain the desired depth plane information. For example, the user inputs coordinate information indicating the selected pixel, and as long as the correspondence between the pixel coordinate information and the pixel depth information (i.e., the depth map) is available, the depth plane information selected by the user can be determined.
  • Different types of raw data thus provide the mapping relationship or correspondence between pixel coordinates and pixel depth information in different ways.
  • The depth planes that need to be refocused can thus be determined, and from them the raw data corresponding to the depth planes that need to be refocused.
  • Methods of generating refocusing images of different depth planes include a convolution algorithm, a deconvolution algorithm, a blending algorithm, a stitching algorithm, a ray tracing algorithm, a filtering algorithm, a single depth plane refocusing algorithm, or any combination of the above algorithms.
  • The degree of focus of a pixel, the radius of the circle of confusion, the point spread function, the gradient, the intensity difference, the structure tensor, or any combination thereof can also be obtained before generating the refocused images of the different depth planes.
  • These quantities can be calculated by convolution, deconvolution, Fourier transform, inverse Fourier transform, interpolation or differentiation, or any combination thereof, and can also be obtained by machine learning, statistics, theoretical simulation and other methods.
  • Embodiments of the present invention may obtain refocused images of different depth planes by techniques such as light field redrawing, three-dimensional reconstruction, synthetic aperture, and the like. A specific example of the refocusing operation is shown in the embodiment of Fig. 6.
  • refocus image corresponding to the plurality of depth planes selected by the user input may be generated upon receiving the user input.
  • Embodiments of the present invention are not limited thereto. For example, the refocused images of all depth planes may be generated in advance, and then, when user input is received, the refocused images of the multiple selected depth planes may be selected directly according to the user input.
  • Embodiments of the present invention may incorporate multiple refocused images using an image fusion method.
  • a weighted fusion can be employed.
  • Embodiments of the present invention may calculate fusion weights prior to merging refocused images of different depth planes. In order to reduce the calculation, only the refocused images corresponding to the selected plurality of depth planes may be merged.
  • the information of the fusion weights may be stored in the same file as the refocused image, or separately formed into a fusion weight template or lookup table, and stored separately in another file.
  • embodiments of the present invention may incorporate multiple refocusing images using image stitching methods.
  • the camera can display the multi-depth plane refocused image on the user interface immediately after generating the refocused images of the different depth intervals.
  • In addition to the generated multi-depth plane refocused image, it is also possible to display the original image and/or the refocused images of the different depth intervals generated along the way.
  • multiple images can be displayed on a split screen in the same user interface.
  • generating the refocused images of the different depth planes may be performed at any step before the multi-depth plane refocused image is synthesized; for example, it may be performed during the imaging process, immediately after the imaging process ends, or at the time the multi-depth plane refocused image is synthesized, and the embodiments of the present invention do not limit this.
  • FIG. 4 is a schematic illustration of two planes of a two-plane representation in accordance with an embodiment of the present invention.
  • Figure 5 is a schematic illustration of the principle of synthetic image photography in accordance with an embodiment of the present invention.
  • FIG. 6 is a schematic diagram of a geometric relationship of a model of a synthetic image photography principle, in accordance with an embodiment of the present invention.
  • the light field camera includes a main lens and an image sensor (not shown), and a microlens array is disposed between the main lens and the image sensor.
  • the image sensor records a small image formed on each microlens array, and a plurality of small images constitute an image array.
  • the corresponding image processing software can also be set in the light field camera, and the recorded image array can be reconstructed into a conventional image form acceptable to the user, and the effect of focusing on different depth planes and the effect of viewing the scene from different perspectives are presented.
  • the light field camera can be refocused on the user-selected depth plane by software according to the user's needs. The depth of focus of the image may not be fixed and may vary according to the needs of the user.
  • the light field camera records the light angle information while recording the light intensity information.
  • the angle information of the light contains the depth information of the scene in the scene.
  • the light field camera captures three-dimensional information of the scene (eg, three-dimensional light field data).
  • the light field camera can focus on different depth planes by using a refocusing algorithm according to the needs of the user.
  • the camera can process the image using the refocusing algorithm, and finally presents the effect of focusing in the selected depth interval.
  • before generating the multi-depth plane refocused image, the camera may first generate the refocused images of the plurality of depth planes corresponding to the plurality of regions according to the raw data, and then fuse or stitch them into the multi-depth plane refocused image.
  • a two-plane representation can be used to represent the light field, that is, the coordinates of one ray L are the coordinates of the intersection of the ray on the two parallel planes u-v and s-t.
  • the light field information collected by the light field camera is represented by L(u, v, s, t).
  • the light field information of the combined light field of the light field camera is represented by L'(u', v', s', t'), and the relationship between the two is as shown in Fig. 5.
  • The value of the irradiance image on the composite image plane is

    E(s', t') = \frac{1}{D^2} \iint L'(u', v', s', t') \, A(u', v') \, \cos^4\theta \, du' \, dv'    (1)

  • where D is the distance between the composite image plane and the synthetic aperture,
  • A is the aperture function, for example taking the value 1 inside the aperture and 0 outside the aperture,
  • and θ is the angle of incidence of the ray (u', v', s', t') relative to the composite image plane. According to the principle of the paraxial approximation, cos^4 θ in the above equation can be ignored, and ignoring 1/D^2 as well, the following formula is obtained:

    E(s', t') = \iint L'(u', v', s', t') \, A(u', v') \, du' \, dv'    (2)
  • L and L' are as shown in Fig. 6, and equation (2) can be expressed in terms of the light field information L(u, v, s, t). According to the principle of synthetic photography, the relationship between L and L' is determined by the geometry of Fig. 6 through two scaling factors: one characterizes the distance between the main lens plane and the composite image plane, and the other characterizes the distance from the synthetic aperture plane to the microlens plane.
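  • In discrete form, equation (2) is commonly implemented as "shift-and-add" refocusing: each sub-aperture view (u, v) is translated by an amount proportional to its aperture coordinate and to the chosen refocus factor, and the shifted views are averaged. The sketch below follows the usual (1 − 1/α) shift rule from the synthetic-photography literature; the exact convention used by the patent is not stated, so this is an assumption for illustration only.

```python
import numpy as np

def refocus_shift_and_add(subviews, alpha):
    """subviews[iu][iv] is the sub-aperture image seen from aperture position
    (u, v), with (0, 0) at the aperture center. Shifting each view by
    (1 - 1/alpha) * (u, v) and averaging focuses on the depth plane that the
    refocus factor alpha selects."""
    nu, nv = len(subviews), len(subviews[0])
    acc = np.zeros_like(subviews[0][0], dtype=np.float64)
    for iu in range(nu):
        for iv in range(nv):
            u = iu - (nu - 1) / 2.0
            v = iv - (nv - 1) / 2.0
            dy = int(round((1.0 - 1.0 / alpha) * u))
            dx = int(round((1.0 - 1.0 / alpha) * v))
            acc += np.roll(subviews[iu][iv], shift=(dy, dx), axis=(0, 1))
    return acc / (nu * nv)

# usage: one refocused image per chosen depth plane, e.g.
# img_near = refocus_shift_and_add(subviews, alpha=0.9)
# img_far  = refocus_shift_and_add(subviews, alpha=1.2)
```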
  • FIG. 7 is a schematic flow chart of an image processing procedure according to still another embodiment of the present invention.
  • the embodiment of Fig. 7 is an example of the image processing method of Fig. 1, and a detailed description is omitted as appropriate.
  • Steps 710 to 740 of FIG. 7 are similar to steps 210 to 240 of FIG. 2, respectively, and are not described herein again.
  • The step of determining the point spread function of the pixels in step 745 may be replaced by a step of determining the degree of focus of the pixels, the radius of the circle of confusion, the gradient, the intensity difference, the structure tensor, the light field information of the rays corresponding to the pixels, the trace information of the rays corresponding to the pixels, or any combination thereof; that is, the point spread function can be replaced by these parameters.
  • The step of determining the point spread function may be performed at any point prior to generating the fusion weight template.
  • FIG. 8 is a schematic flow chart of an image processing procedure according to another embodiment of the present invention.
  • the embodiment of Fig. 8 is an example of the image processing method of Fig. 1, and a detailed description is omitted as appropriate.
  • 810 to 840 of FIG. 8 are similar to steps 210 to 240 of FIG. 2, respectively, and are not described herein again.
  • FIG. 9 is a schematic flow chart of an image processing procedure according to still another embodiment of the present invention.
  • the embodiment of Fig. 9 is an example of the image processing method of Fig. 1, and a detailed description is omitted as appropriate.
  • step 920: obtain user input on the user interface. This is similar to step 220 of FIG. 2, and details are not described herein again.
  • the depth information may include the depth or pixel coordinates of the depth plane corresponding to each region of user input.
  • the pixel coordinates may be the coordinates of any of the pixels in the region, for example, may be the coordinates of the center point of the region.
  • the focused portion and the unfocused portion are intercepted from the refocused image of the different depth planes, and the focused portion and the unfocused portion are stored in the focused portion lookup table and the unfocused portion lookup table, respectively.
  • the focused-portion lookup table stores the correspondence between the focused portion and the depth or pixel coordinates of the depth plane
  • the unfocused-portion lookup table stores the correspondence between the unfocused portion and the depth or pixel coordinates of the depth plane.
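A minimal sketch of these two lookup tables, assuming the focused and unfocused portions of each refocused image have already been extracted; plain dictionaries keyed by the depth (or a representative pixel coordinate) of the depth plane stand in for whatever storage the device actually uses.

```python
focused_lut = {}    # depth of depth plane -> focused portion of that plane's refocused image
unfocused_lut = {}  # depth of depth plane -> unfocused portion of that plane's refocused image

def store_portions(depth, focused_part, unfocused_part):
    focused_lut[depth] = focused_part
    unfocused_lut[depth] = unfocused_part

def query_portions(selected_depths):
    """Look up the focused/unfocused portions for the user-selected depth planes."""
    focused = [focused_lut[d] for d in selected_depths]
    unfocused = [unfocused_lut[d] for d in selected_depths]
    return focused, unfocused
```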
  • Embodiments of the present invention may employ an image stitching method to combine a focused portion and a non-focused portion of a plurality of depth planes to synthesize a multi-depth plane refocus image.
  • a focus area and a non-focus area may be intercepted from the image; the above areas are stitched to form a multi-depth plane refocus image.
  • the focus area and the non-focus area contain at least one pixel.
  • the image stitching process mainly includes three steps of preprocessing, registration and fusion.
  • Preprocessing includes image denoising, image correction, and image projection.
  • the image projection may be a planar projection method, a spherical projection method, a cube projection method, or a cylindrical projection method.
  • Image denoising can be neighborhood averaging, spatial domain low-pass filtering, and spatial domain nonlinear filtering.
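Neighborhood averaging, the first denoising option listed above, is simply a box filter; a minimal numpy version is sketched below (the 3 × 3 kernel size is an illustrative choice).

```python
import numpy as np

def neighborhood_average(img, k=3):
    """Denoise a grayscale image by replacing each pixel with the mean of its k x k neighborhood."""
    img = img.astype(np.float64)
    pad = k // 2
    padded = np.pad(img, pad, mode='edge')
    out = np.zeros_like(img)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)
```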
  • the image correction can be a correction for the gray value deviation or a correction for the geometric deformation.
  • the correction method for the gray-value deviation is as follows: the normalized gray value of the image to be stitched is
  g' = μ_f + (σ_f / σ_g) (g − μ_g)
  where f denotes the gray value of the reference image, μ_f denotes the average gray value of the reference image, σ_f denotes the standard deviation of the reference image, g denotes the gray value of the image to be stitched, μ_g denotes the average gray value of the image to be stitched, and σ_g denotes the standard deviation of the image to be stitched.
  • Image registration methods include the block matching algorithm, image registration based on the fast Fourier transform, phase-correlation image registration based on the Fourier transform, contour-feature-based algorithms, corner detection algorithms, the scale-invariant feature transform (SIFT) algorithm, the Speeded Up Robust Features (SURF) algorithm, optical-flow-based methods, and SIFT-flow-based methods.
  • the registration algorithm can determine the corresponding position of the image to be stitched, and then stitch the images together by finding the transformation relationship between the images and resampling.
  • the model of the image transformation can be translation, rotation, scaling, reflection, shearing, or any combination thereof.
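As one example of the registration methods listed above, the sketch below estimates a pure translation between two images with Fourier-based phase correlation; it assumes equally sized grayscale inputs and a translation-only transformation model.

```python
import numpy as np

def phase_correlation_shift(ref, img):
    """Estimate the (dy, dx) translation that maps ref onto img."""
    F_ref = np.fft.fft2(ref.astype(np.float64))
    F_img = np.fft.fft2(img.astype(np.float64))
    cross_power = F_img * np.conj(F_ref)
    cross_power /= np.abs(cross_power) + 1e-12
    corr = np.fft.ifft2(cross_power).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap peaks in the upper half of the correlation surface around to negative shifts.
    if dy > ref.shape[0] // 2:
        dy -= ref.shape[0]
    if dx > ref.shape[1] // 2:
        dx -= ref.shape[1]
    return dy, dx
```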
  • step 970: display the generated multi-depth plane refocused image. This is similar to step 270 of FIG. 2, and details are not described herein again.
  • FIG. 10 is a block diagram showing the structure of an image processing apparatus 1000 according to an embodiment of the present invention.
  • the image processing apparatus 1000 includes a determination module 1010 and a generation module 1020.
  • the determining module 1010 is configured to determine depth information of the plurality of depth planes, where the depth information of the plurality of depth planes is used to indicate a plurality of depth planes, where the plurality of depth planes respectively correspond to the plurality of refocusing images of the plurality of depth planes, where The refocused images are generated from raw data of a plurality of refocused images.
  • the generating module 1020 is configured to generate a multi-depth plane refocusing image according to the depth information, wherein the multi-depth plane refocusing image comprises a focusing portion of the plurality of refocused images.
  • the plurality of depth planes respectively correspond to the original data of the plurality of refocused images
  • the generating module 1020 determines the original data of the plurality of refocused images according to the depth information of the plurality of depth planes, and generates a multi-depth plane refocused image according to the original data of the plurality of refocused images, where the multi-depth plane refocused image includes the focused portions of the plurality of refocused images.
  • the generating module 1020 performs refocusing processing on the original data of the plurality of refocused images by using a refocusing algorithm to generate a plurality of refocused images; and combines the plurality of refocused images to generate a multi-depth planar refocused image.
  • the generation module 1020 combines the focused portions of the plurality of refocused images using an image fusion method.
  • the generating module 1020 determines a point spread function of pixels of the plurality of refocused images, generates a fusion weight template according to the point spread function of the pixels, and performs image fusion on the plurality of refocused images according to the fusion weight template, where the fusion weight template includes the fusion weights of the pixels, and the fusion weight of a pixel with a high degree of focus is higher than the fusion weight of a pixel with a low degree of focus.
  • the generating module 1020 intercepts the focused portion and the unfocused portion from each of the plurality of refocused images, and stitches the focused portion and the unfocused portion.
  • the generating module 1020 determines the original data of the plurality of refocused images according to the depth information of the plurality of depth planes, generates the focused portion and the unfocused portion of each of the plurality of refocused images according to the original data of the plurality of refocused images, and stitches the focused portion and the unfocused portion.
  • the generating module 1020 may further generate refocused images corresponding to all depth planes according to the original data of the refocused images of all depth planes, where all depth planes respectively correspond to the refocused images of all depth planes; the generating module 1020 selects the refocused images of the plurality of depth planes from the refocused images of all depth planes according to the depth information of the plurality of depth planes, and generates the multi-depth plane refocused image according to the refocused images of the plurality of depth planes, where the multi-depth plane refocused image includes the focused portions of the plurality of refocused images.
  • the generating module 1020 determines a point spread function of the pixels of all the refocused images, generates a fusion weight template according to the point spread function of the pixels of all the refocused images, and performs image fusion on all the refocused images according to the fusion weight template, where the fusion weight template includes the fusion weights of the pixels of all the refocused images, the fusion weights of the pixels of the plurality of refocused images are higher than the fusion weights of the pixels of the refocused images other than the plurality of refocused images, and the fusion weights of pixels with a high degree of focus in the plurality of refocused images are higher than the fusion weights of pixels with a lower degree of focus in the plurality of refocused images.
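A hedged sketch of the weighting rule just described: every refocused image contributes a weight map, but the user-selected depth planes are boosted so that their in-focus pixels dominate the fusion. The boost factor and the reuse of a generic `focus_measure` callable are assumptions of this example.

```python
import numpy as np

def weight_template_all_planes(all_images, focus_measure, selected_indices, boost=10.0):
    """Fusion weights over all refocused images, with user-selected planes weighted higher."""
    w = np.stack([focus_measure(img) for img in all_images])   # (n_planes, H, W)
    for i in selected_indices:
        w[i] *= boost        # selected depth planes outweigh the other refocused images
    return w / (w.sum(axis=0) + 1e-12)
```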
  • the generating module 1020 selects a focused portion and a non-focused portion from each of the plurality of refocused images, and stitches the focused portion and the unfocused portion.
  • the image processing apparatus 1000 further includes: a storage module 1030, a selection module 1040, and a query module 1050.
  • the selecting module 1040 is configured to select a focused portion and an unfocused portion from each refocused image; the storage module 1030 stores the focused portions and the unfocused portions in lookup tables, respectively, using the depth or pixel coordinates of the depth plane as an index
  • the query module 1050 queries, from the lookup tables, the focused portions and unfocused portions of the refocused images of the plurality of depth planes, using the depth or pixel coordinates of the depth plane as an index, and the generating module 1020 stitches the focused portions and unfocused portions of the refocused images of the plurality of depth planes.
  • when performing stitching, the generating module 1020 performs preprocessing, image registration, and image fusion on the image of the focused portion and the image of the unfocused portion.
  • the image processing apparatus 1000 further includes: a display module 1070, configured to display one of the plurality of refocused images; and an obtaining module 1060, configured to acquire a plurality of user inputs on a plurality of regions of the displayed refocused image, where the plurality of user inputs correspond to the plurality of depth planes; the display module 1070 outputs the multi-depth plane refocused image on the display device, and the determining module 1010 determines the depth information of the plurality of depth planes according to the plurality of user inputs.
  • the user input is one of: a single-point tap input, a multi-point tap input, a single-point sliding input, or a multi-point sliding input by the user on the touchscreen; a user posture detected by a posture sensor in the input device; or a user action detected by a motion tracking module in the input device.
  • the determining module 1010 determines, according to a predefined input, the plurality of depth planes corresponding to the predefined input
  • the image processing apparatus 1000 further includes: a display module 1070, configured to output, on the display device, a multi-depth plane refocused image corresponding to the predefined input.
  • FIG. 11 is a block diagram showing the structure of an image processing apparatus 1100 according to another embodiment of the present invention.
  • the image processing apparatus 1100 includes a processor 1110, a memory 1120, and a communication bus 1130.
  • the processor 1110 calls the code stored in the memory 1120 through the communication bus 1130 to determine depth information of multiple depth planes, wherein depth information of the plurality of depth planes is used to indicate multiple depth planes, and the plurality of depth planes respectively correspond to multiple a plurality of refocused images of depth planes, wherein the plurality of refocused images are generated from raw data of the plurality of refocused images, and generating a multi-depth planar refocused image according to the depth information, wherein the multi-depth planar refocused image comprises a plurality of Refocus the focused portion of the image.
  • the plurality of depth planes respectively correspond to the original data of the plurality of refocused images; the processor 1110 determines the original data of the plurality of refocused images according to the depth information of the plurality of depth planes, and generates a multi-depth plane refocused image according to the original data of the plurality of refocused images, where the multi-depth plane refocused image includes the focused portions of the plurality of refocused images.
  • the processor 1110 performs a refocusing process on the original data of the plurality of refocused images using a refocusing algorithm, generates a plurality of refocused images, and combines the plurality of refocused images to generate a multi-depth plane refocusing image.
  • the processor 1110 combines the focused portions of the plurality of refocused images using an image fusion method.
  • the processor 1110 determines a point spread function of pixels of the plurality of refocused images, generates a fusion weight template according to the point spread function of the pixels, and performs image fusion on the plurality of refocused images according to the fusion weight template, where the fusion weight template includes the fusion weights of the pixels, and the fusion weight of a pixel with a high degree of focus is higher than the fusion weight of a pixel with a low degree of focus.
  • the processor 1110 selects a focused portion and a non-focused portion from each of the plurality of refocused images, and stitches the focused portion and the unfocused portion.
  • the processor 1110 determines the original data of the plurality of refocused images according to the depth information of the plurality of depth planes, generates the focused portion and the unfocused portion of each of the plurality of refocused images according to the original data of the plurality of refocused images, and stitches the focused portion and the unfocused portion.
  • the processor 1110 may further generate refocused images corresponding to all depth planes according to the original data of the refocused images of all depth planes, where all depth planes respectively correspond to the refocused images of all depth planes; the processor 1110 selects the refocused images of the plurality of depth planes from the refocused images of all depth planes according to the depth information of the plurality of depth planes, and generates the multi-depth plane refocused image according to the refocused images of the plurality of depth planes, where the multi-depth plane refocused image includes the focused portions of the plurality of refocused images.
  • the processor 1110 determines a point spread function of the pixels of all the refocused images, generates a fusion weight template according to the point spread function of the pixels of all the refocused images, and performs image fusion on all the refocused images according to the fusion weight template, where the fusion weight template includes the fusion weights of the pixels of all the refocused images, the fusion weights of the pixels of the plurality of refocused images are higher than the fusion weights of the pixels of the refocused images other than the plurality of refocused images, and the fusion weights of pixels with a high degree of focus in the plurality of refocused images are higher than the fusion weights of pixels with a lower degree of focus in the plurality of refocused images.
  • the processor 1110 selects a focused portion and a non-focused portion from each of the plurality of refocused images, and stitches the focused portion and the unfocused portion.
  • the processor 1110 performs pre-processing, image registration, and image fusion on the image of the focused portion and the image of the unfocused portion when stitching.
  • the image processing apparatus 1100 further includes: a display 1140, configured to display one of the plurality of refocused images; and an input interface 1150, configured to acquire a plurality of user inputs on a plurality of regions of the displayed refocused image, where the plurality of user inputs correspond to the plurality of depth planes; the display 1140 outputs the multi-depth plane refocused image, and the processor 1110 determines the depth information of the plurality of depth planes according to the plurality of user inputs.
  • the user input is one of: a single-point tap input, a multi-point tap input, a single-point sliding input, or a multi-point sliding input by the user on the touchscreen of the display 1140; a user posture detected by a posture sensor in the input device; or a user action detected by a motion tracking module in the input device.
  • the processor 1110 determines, according to a predefined input, the plurality of depth planes corresponding to the predefined input, and the display 1140 displays a multi-depth plane refocused image corresponding to the predefined input.
  • the disclosed systems, devices, and methods may be implemented in other manners.
  • the device embodiments described above are merely illustrative.
  • the division of units is merely a logical function division; in actual implementation there may be another division manner, for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not performed.
  • the mutual coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection through some interface, device or unit, and may be in an electrical, mechanical or other form.
  • the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, may be located in one place, or may be distributed to multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
  • each functional unit in each embodiment of the present invention may be integrated into one processing unit, or each unit may exist physically separately, or two or more units may be integrated into one unit.
  • the functions may be stored in a computer readable storage medium if implemented in the form of a software functional unit and sold or used as a standalone product.
  • the technical solution of the present invention essentially, or the part contributing to the prior art, or a part of the technical solution, may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to perform all or part of the steps of the methods described in the embodiments of the present invention.
  • the foregoing storage medium includes any medium that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.

Abstract

本发明的实施例提供了一种图像处理方法和图像处理装置。该方法包括:确定多个深度平面的深度信息,多个深度平面的深度信息用于指示多个深度平面,多个深度平面分别对应于多个深度平面的多个重聚焦图像,其中多个重聚焦图像由多个重聚焦图像的原始数据生成;根据深度信息,生成多深度平面重聚焦图像,其中多深度平面重聚焦图像包括多个重聚焦图像的聚焦部分。本发明的技术方案能够同时获得多个深度平面的物体的清晰图像。

Description

图像处理方法和图像处理装置
本申请要求于2014年5月28日提交中国专利局、申请号为201410230506.1、发明名称为“图像处理方法和图像处理装置”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本发明涉及图像处理技术领域,尤其是涉及一种图像处理方法和图像处理装置。
背景技术
在常规摄影中,为了突出某个主题景物,常常会将相机对焦到该主题景物所在的深度平面,使得该主题景物清晰成像在传感器上,而其它深度平面的物体的成像则是模糊的。
随着数字成像技术、图像处理、机器视觉的发展,产生了重聚焦技术。根据重聚焦技术,在图像形成之后,根据用户的需要,可以重新选择聚焦平面或景深的技术,即重聚焦技术,其中景深是指成像设备能够清晰成像的范围。
例如,光场相机就采用了重聚焦技术。光场相机包含微透镜阵列,拍摄时,微透镜阵列中的每个微透镜在传感器上形成一个图像,从而获得图像阵列,并且可以采用重聚焦算法对图像阵列进行处理,以获得某个深度平面的重聚焦图像。在形成重聚焦图像之后,用户根据需要每次能够获得位于一个深度平面内景物的聚焦图像,这样,用户只能看到一个深度平面的景物的清晰图像,而看到的位于其它深度平面的景物的图像则是模糊的。
发明内容
本发明的实施例提供了一种图像处理方法和图像处理装置,能够同时获得多个深度平面的景物的清晰图像。
第一方面,提供了一种图像处理方法,包括:确定多个深度平面的深度信息,多个深度平面的深度信息用于指示多个深度平面,多个深度平面分别 对应于多个深度平面的多个重聚焦图像,其中多个重聚焦图像由多个重聚焦图像的原始数据生成;根据深度信息,生成多深度平面重聚焦图像,其中多深度平面重聚焦图像包括多个重聚焦图像的聚焦部分。
结合第一方面,在第一种可能的实现方式中,多个深度平面分别对应于多个重聚焦图像的原始数据,其中,根据深度信息,生成多深度平面重聚焦图像,包括:根据多个深度平面的深度信息确定多个重聚焦图像的原始数据;根据多个重聚焦图像的原始数据生成多深度平面重聚焦图像,其中多深度平面重聚焦图像包括多个重聚焦图像的聚焦部分。
结合在第一种可能的实现方式,在第二种可能的实现方式中,上述根据多个重聚焦图像的原始数据生成多深度平面重聚焦图像,包括:采用重聚焦算法对多个重聚焦图像的原始数据进行重聚焦处理,生成多个重聚焦图像;合并多个重聚焦图像,以生成多深度平面重聚焦图像。
结合第二种可能的实现方式,在第三种可能的实现方式中,合并多个重聚焦图像,包括:采用图像融合方法合并多个重聚焦图像的聚焦部分。
结合第三种可能的实现方式,在第四种可能的实现方式中,确定多个重聚焦图像的像素的点扩散函数;根据像素的点扩散函数生成融合权重模板,其中融合权重模板包括像素的融合权重,并且聚焦程度高的像素的融合权重高于聚焦程度低的像素的融合权重;根据融合权重模板,对多个重聚焦图像进行图像融合。
结合第二种可能的实现方式,在第五种可能的实现方式中,多个重聚焦图像中的每个重聚焦图像包括聚焦部分和非聚焦部分,合并多个重聚焦图像,包括:从多个重聚焦图像中的每个重聚焦图像中选取聚焦部分和非聚焦部分;拼接聚焦部分和非聚焦部分。
结合第一种可能的实现方式,在第六种可能的实现方式中,根据多个深度平面的深度信息和多个重聚焦图像生成多深度平面重聚焦图像,包括:根据多个深度平面的深度信息确定多个重聚焦图像的原始数据;根据多个重聚焦图像的原始数据生成多个重聚焦图像中的每个重聚焦图像的聚焦区域和非聚焦区域;拼接聚焦部分和非聚焦部分。
结合第一方面,在第七种可能的实现方式中,第一方面的方法还包括:根据全部深度平面的重聚焦图像的原始数据生成全部深度平面对应的重聚焦图像,全部深度平面分别对应于全部深度平面的重聚焦图像,其中根据深 度信息,生成多深度平面重聚焦图像,包括:根据多个深度平面的深度信息从全部深度平面的重聚焦图像中选择多个深度平面的重聚焦图像;根据多个深度平面的重聚焦图像生成多深度平面重聚焦图像,其中多深度平面重聚焦图像包括多个重聚焦图像的聚焦部分。
结合第七种可能的实现方式,在第八种可能的实现方式下,根据多个深度平面的重聚焦图像生成多深度平面重聚焦图像,包括:确定全部重聚焦图像的像素的点扩散函数;根据全部重聚焦图像的像素的点扩散函数生成融合权重模板,其中融合权重模板包括全部重聚焦图像的像素的融合权重,多个重聚焦图像的像素的融合权重高于全部重聚焦图像中除多个重聚焦图像之外的其它重聚焦图像的像素的融合权重,多个重聚焦图像中聚焦程度高的像素的融合权重高于多个重聚焦图像中聚焦程度较低的像素的融合权重;根据融合权重模板,对全部重聚焦图像进行图像融合。
结合第七种可能的实现方式,在第九种可能的实现方式下,根据多个深度平面的重聚焦图像生成多深度平面重聚焦图像,包括:从多个重聚焦图像中的每个重聚焦图像中选取聚焦部分和非聚焦部分;拼接聚焦部分和非聚焦部分。
结合第七可能的实现方式,在第十种可能的实现方式中,第一方面的方法还包括:以深度平面的深度或像素坐标为索引从查询表中查询多个深度平面的重聚焦图像的聚焦部分和非聚焦部分,其中,聚焦部分和非聚焦部分以深度平面的深度或像素坐标为索引存储在查询表中,其中根据多个深度平面的重聚焦图像生成多深度平面重聚焦图像,包括:拼接多个深度平面的重聚焦图像的聚焦部分和非聚焦部分。
结合第五种、第六种、第九种或第十种可能的实现方式,在第十一种可能的实现方式中,拼接聚焦部分和非聚焦部分,包括:对聚焦部分的图像和非聚焦部分的图像进行预处理、图像配准和图像融合。
结合第一方面或上述任一种可能的实现方式,在第十二种可能的实现方式中,第一方面的方法还包括:显示多个重聚焦图像中的一个重聚焦图像;获取在所显示的重聚焦图像的多个区域上的多个用户输入,其中多个用户输入对应于多个深度平面;在显示设备上输出生成多深度平面重聚焦图像;其中确定多个深度平面的深度信息,包括:根据多个用户输入确定多个深度平面的深度信息。
结合第十二种可能的实现方式,在第十三种可能的实现方式中,用户输入为以下之一:用户在触摸屏上的单点点击输入、多点点击输入、单点滑动输入或多点滑动输入;输入设备中的姿态传感器探测到的用户姿态;输入设备中的动作跟踪模块探测到的用户动作。
结合第一方面或上述任一种可能的实现方式,在第十四种可能的实现方式中,确定多个深度平面的深度信息,包括:根据预定义的输入确定预定义的输入对应的多个深度平面,方法还包括:在显示设备上输出与预定义的输入对应的多深度平面重聚焦图像。
第二方面,提供了一种图像处理装置,包括:确定模块,用于确定多个深度平面的深度信息,其中多个深度平面的深度信息用于指示多个深度平面,多个深度平面分别对应于多个深度平面的多个重聚焦图像,其中所述多个重聚焦图像由所述多个重聚焦图像的原始数据生成;生成模块,用于根据深度信息,生成多深度平面重聚焦图像,其中多深度平面重聚焦图像包括多个重聚焦图像的聚焦部分。
结合第二方面,在第一种可能的实现方式中,多个深度平面分别对应于多个重聚焦图像的原始数据,生成模块根据多个深度平面的深度信息确定多个重聚焦图像的原始数据,并且根据多个重聚焦图像的原始数据生成多深度平面重聚焦图像,其中多深度平面重聚焦图像包括多个重聚焦图像的聚焦部分。
结合第一种可能的实现方式,在第二种可能的实现方式中,生成模块采用重聚焦算法对多个重聚焦图像的原始数据进行重聚焦处理,生成多个重聚焦图像,并且合并多个重聚焦图像,以生成多深度平面重聚焦图像。
结合第二方面的第二种可能的实现方式,在第三种可能的实现方式中,生成模块采用图像融合方法合并多个重聚焦图像的聚焦部分。
结合第二方面的第三种可能的实现方式,在第四种可能的实现方式中,生成模块确定多个重聚焦图像的像素的点扩散函数,根据像素的点扩散函数生成融合权重模板,并且根据融合权重模板,对多个重聚焦图像进行图像融合,其中融合权重模板包括像素的融合权重,并且聚焦程度高的像素的融合权重高于聚焦程度低的像素的融合权重。
结合第二方面的第二种可能的实现方式,在第五种可能的实现方式中,生成模块从多个重聚焦图像中的每个重聚焦图像中选取聚焦部分和非聚焦 部分,并拼接聚焦部分和非聚焦部分。
结合第二方面的第一种可能的实现方式,在第六种可能的实现方式中,生成模块根据多个深度平面的深度信息确定多个重聚焦图像的原始数据,根据多个重聚焦图像的原始数据生成多个重聚焦图像中的每个重聚焦图像的聚焦区域和非聚焦区域,并且拼接聚焦部分和非聚焦部分。
结合第二方面,在第七种可能的实现方式中,生成模块根据全部深度平面的重聚焦图像的原始数据生成全部深度平面对应的重聚焦图像,全部深度平面分别对应于全部深度平面的重聚焦图像,根据多个深度平面的深度信息从全部深度平面的重聚焦图像中选择多个深度平面的重聚焦图像,并且根据多个深度平面的重聚焦图像生成多深度平面重聚焦图像,其中多深度平面重聚焦图像包括多个重聚焦图像的聚焦部分。
结合第二方面的第七种可能的实现方式,在第八种可能的实现方式中,生成模块确定全部重聚焦图像的像素的点扩散函数,根据全部重聚焦图像的像素的点扩散函数生成融合权重模板,并且根据融合权重模板,对全部重聚焦图像进行图像融合,其中融合权重模板包括全部重聚焦图像的像素的融合权重,多个重聚焦图像的像素的融合权重高于全部重聚焦图像中除多个重聚焦图像之外的其它重聚焦图像的像素的融合权重,多个重聚焦图像中聚焦程度高的像素的融合权重高于多个重聚焦图像中聚焦程度较低的像素的融合权重。
结合第二方面的第七种可能的实现方式,在第九种可能的实现方式中,生成模块从多个重聚焦图像中的每个重聚焦图像中选取聚焦部分和非聚焦部分;拼接聚焦部分和非聚焦部分。
结合第二方面的第七种可能的实现方式,在第十种可能的实现方式中,第二方面的图像处理装置还包括:查询模块,用于以深度平面的深度或像素坐标为索引从查询表中查询多个深度平面的重聚焦图像的聚焦部分和非聚焦部分,其中,聚焦部分和非聚焦部分以深度平面的深度或像素坐标为索引存储在查询表中,其中生成模块拼接多个深度平面的重聚焦图像的聚焦部分和非聚焦部分。
结合第五种、第六种、第九种或第十种可能的实现方式,在第十一种可能的实现方式中,生成模块在进行拼接时对聚焦部分的图像和非聚焦部分的图像进行预处理、图像配准和图像融合。
结合第二方面或上述任一种可能的实现方式,在第十二种可能的实现方式中,第二方面的图像处理装置还包括:显示模块,用于显示多个重聚焦图像中的一个重聚焦图像;获取模块,用于获取在所显示的重聚焦图像的多个区域上的多个用户输入,其中多个用户输入对应于多个深度平面,其中显示模块在显示设备上输出生成多深度平面重聚焦图像,确定模块根据多个用户输入确定多个深度平面的深度信息。
结合第十二种可能的实现方式,在第十三种可能的实现方式中,用户输入为以下之一:用户在触摸屏上的单点点击输入、多点点击输入、单点滑动输入或多点滑动输入;输入设备中的姿态传感器探测到的用户姿态;输入设备中的动作跟踪模块探测到的用户动作。
结合第二方面或上述任一种可能的实现方式,在第十四种可能的实现方式中,确定模块根据预定义的输入确定预定义的输入对应的多个深度平面,图像处理装置还包括:显示模块,用于在显示设备上输出与预定义的输入对应的多深度平面重聚焦图像。
本发明的技术方案,可以根据深度信息生成多深度平面重聚焦图像,由于该多深度平面重聚焦图像包含多个深度重聚焦图像的聚焦部分,使得可以同时显示多个深度重聚焦图像的聚焦部分,从而能够同时获得多个深度平面的景物的清晰图像。
附图说明
为了更清楚地说明本发明实施例的技术方案,下面将对本发明实施例中所需要使用的附图作简单地介绍,显而易见地,下面所描述的附图仅仅是本发明的一些实施例,对于本领域普通技术人员来讲,在不付出创造性劳动的前提下,还可以根据这些附图获得其他的附图。
图1是根据本发明的一个实施例的图像处理方法的示意性流程图。
图2是根据本发明的另一实施例的图像处理过程的示意性流程图。
图3示出了深度平面与用户输入的对应关系的示意图。
图4是根据本发明的实施例的双平面表示法的两个平面的示意图。
图5是根据本发明的实施例的合成像摄影原理的示意图。
图6是根据本发明的实施例的合成像摄影原理的模型的几何关系的示意图。
图7是根据本发明的又一实施例的图像处理过程的示意性流程图。
图8是根据本发明的另一实施例的图像处理过程的示意性流程图。
图9是根据本发明的再一实施例的图像处理过程的示意性流程图。
图10是根据本发明的一个实施例的图像处理装置的结构示意图。
图11是根据本发明的另一实施例的图像处理装置的结构示意图。
具体实施方式
下面将结合本发明实施例中的附图,对本发明实施例中的技术方案进行清楚、完整地描述,显然,所描述的实施例是本发明一部分实施例,而不是全部的实施例。基于本发明中的实施例,本领域普通技术人员在没有作出创造性劳动前提下所获得的所有其他实施例,都属于本发明保护的范围。
在本发明的实施例中,深度可以指景物到相机的距离。多个深度平面可以是深度连续的,本发明的实施例对此不作限定,例如,多个深度平面也可以是深度间断的。应理解的是,每个深度平面可以对应一个聚焦平面,也可以对应一个景深。需要说明的是,本文中所说的“多个”,包括两个及两个以上。
本发明的实施例可以应用于相机中,也可以应用于其它用户终端(例如,手机或计算机)中,用于处理重聚焦图像的原始数据以生成多深度平面重聚焦图像。
图1是根据本发明的一个实施例的图像处理方法的示意性流程图。图1的方法可以由图像处理装置执行。图1的方法包括以下内容。
110,确定多个深度平面的深度信息,多个深度平面的深度信息用于指示多个深度平面,多个深度平面分别对应于多个深度平面的多个重聚焦图像,其中多个重聚焦图像由多个重聚焦图像的原始数据生成。
深度信息可以包含深度平面信息、像素坐标信息、像素颜色信息、像素点扩散函数信息、像素对应光线的光场信息、像素对应光线的追迹信息中的至少一种或者它们的任意组合。本发明的实施例对如何确定多个深度平面的深度信息的方式不作限定,深度信息可以来自用户的输入或者预先定义。
图像处理装置可以根据用户的输入或者预定义输入获得需要重聚焦区域所对应的深度平面。例如,根据用户在用户界面上的实时输入或选择来确定需要重聚焦的区域所对应的深度平面。当用户在用户界面上选中多个区 域,多个区域与深度平面相对应,图像处理装置据此可以获知需要重聚焦的区域所对应的深度平面。根据本发明的实施例并不限于此,图像处理装置也可以根据预先定义的信息确定多个深度平面的深度信息。
120,根据深度信息,生成多深度平面重聚焦图像,其中多深度平面重聚焦图像包括多个重聚焦图像的聚焦部分。
例如,在每个深度平面对应的重聚焦图像中,图像的一部分是聚焦的,即图像的这部分是清晰的,而图像的其它部分是非聚焦的,即图像的其它部分模糊的。确定多个深度平面的深度信息是指确定需要对哪些深度平面的图像进行重聚焦。在这种情况下,生成多深度平面重聚焦图像的过程可以是将多个深度平面对应的聚焦部分进行合并。另外,在生成的多深度平面重聚焦图像中,可以合并多个深度平面对应聚焦部分和剩余的非聚焦部分,从而能够呈现完整的图像。
根据本发明的实施例,可以根据深度信息生成多深度平面重聚焦图像,由于该多深度平面重聚焦图像包含多个深度重聚焦图像的聚焦部分,使得可以同时显示多个深度重聚焦图像的聚焦部分,从而能够同时获得多个深度平面的景物的清晰图像。
在120中,图像处理装置可以根据多个深度平面的深度信息确定多个重聚焦图像的原始数据,并且根据多个重聚焦图像的原始数据生成多深度平面重聚焦图像,其中多深度平面重聚焦图像包括多个重聚焦图像的聚焦部分。
例如,可以根据用户输入的深度信息确定与该深度平面相对应的重聚焦图像的原始数据,还可以根据预先定义来确定需要重聚焦的深度平面以及与该深度平面相对应的重聚焦图像的原始数据。根据用户输入或预定义输入获得重聚焦深度平面信息的方法可以根据原始数据的种类不同而有所不同。
图像处理装置可以根据重聚焦图像的原始数据生成聚焦部分,或者可以根据重聚焦图像的原始数据生成非聚焦部分,或者可以根据重聚焦图像的原始数据同时生成聚焦部分和非聚焦部分。例如,原始数据可以是由带有微透镜阵列的摄像模块(例如,光场相机)拍摄得到的。每个深度平面具有各自的重聚焦图像的原始数据,用于根据需要生成与该深度平面相对应的重聚焦图像。
根据本发明的实施例,可以根据多个深度平面对应的多个重聚焦图像的原始数据生成多深度平面重聚焦图像,由于该多深度平面重聚焦图像包含多 个深度重聚焦图像的聚焦部分,使得可以同时显示多个深度重聚焦图像的聚焦部分,从而能够同时获得多个深度平面的景物的清晰图像。另外,由于根据深度信息使用原始数据生成与上述多个深度空间对应的多深度平面重聚焦图像,无需生成全部的重聚焦图像,因此,节省了大量的存储空间。
应理解,上述重聚焦图像的原始数据可以是任何用来生成重聚焦图像的数据,包括但不限于以下各种图像数据或者以下各种图像数据的组合。
例如,上述原始数据可以是一幅或多幅普通摄像模块拍摄的图像。上述多幅图像是指在不同拍摄参数设置下对同一场景拍摄的图像,如焦距不同、光圈不同、曝光量不同、感应波长不同等;或者是指在不同位置对同一场景拍摄的图像。
可替代地,作为另一实施例,上述原始数据还可以是带有孔径掩模板、相位掩模板或其他类型掩模板的摄像模块拍摄的一幅或多幅孔径编码图像。进一步地,上述原始数据还可以是由孔径编码图像经各种图像处理算法处理得到的图像。
可替代地,作为另一实施例,上述原始数据还可以是带有微透镜阵列或孔径阵列的摄像模块拍摄的一幅或多幅图像阵列。进一步地,上述原始数据也可以是由图像阵列经各种图像处理算法处理得到的图像,如单一深度平面重聚焦图像、全聚焦图像、虚拟针孔图像等。
可替代地,作为另一实施例,上述原始数据还可以是多个配置相同或不同的摄像模块所组成的摄像阵列拍摄的一幅或多幅图像阵列。进一步地,上述原数据也可以是由图像阵列经各种图像处理算法处理得到的图像。
可替代地,作为另一实施例,上述原始数据还可以是深度感应设备与摄像模块针对同一场景分别获得的深度信息和图像的组合。其中深度感应设备包括利用光的飞行时间、光的相位差、结构光光照等原理实现的各种设备。
在120中,图像处理装置可以采用重聚焦算法对多个重聚焦图像的原始数据进行重聚焦处理,生成多个重聚焦图像,并且合并多个重聚焦图像,以生成多深度平面重聚焦图像。
根据本发明的实施例,重聚焦算法可以是卷积算法、去卷积算法、融合算法、拼接算法、光线追迹算法、滤波算法或其它单一深度平面的重聚焦算法或者这些算法的组合。
在120中,采用图像融合(Image Fusion)方法合并多个重聚焦图像的 聚焦部分。
例如,图像融合方法是一种图像分析方法,图像处理装置可以采用图像融合方法将两幅或多幅图像合成一幅图像。由于同一场景的多幅图像之间的信息存在冗余性和互补性,因此,经图像融合方法得到的合成图像则可以更全面、更精确地描绘图像。图像融合方法包括基于灰度的算法,(例如,直接平均法、加权平均法、中值滤波法、多分辨率样条技术等)、基于感兴趣区域的图像融合算法、基于颜色空间变换的融合算法以及基于变换域的融合算法等。
在120中,图像处理装置可以在采用图像融合方法合并多个重聚焦图像的聚焦部分时,确定多个重聚焦图像的像素的点扩散函数,根据像素的点扩散函数生成融合权重模板,并且根据融合权重模板,对多个重聚焦图像进行图像融合,其中融合权重模板包括像素的融合权重,并且聚焦程度高的像素的融合权重高于聚焦程度低的像素的融合权重。换句话说,像素的聚焦程度与融合权重成正比。这样,使得清楚呈现的是与多个深度空间对应的聚焦部分。
例如,聚焦程度也称为清晰的程度,可以根据点扩散函数来衡量聚焦程度的高低。另外,也可以采用聚焦评价函数来评估聚焦程度。
根据本发明的实施例,多个重聚焦图像中的每个重聚焦图像包括聚焦部分和非聚焦部分,在120中,在合并多个重聚焦图像时,可以从多个重聚焦图像中的每个重聚焦图像中选取聚焦部分和非聚焦部分,并且拼接聚焦部分和非聚焦部分。例如,可以从多个重聚焦图像中的每个重聚焦图像中选取聚焦部分和非聚焦部分。
可替代地,也可以任选一幅图像,并且将多个深度平面对应的多个重聚焦图像的聚焦部分与该图像融合,使得聚焦部分与非聚焦部分的衔接更加自然。
在120中,根据多个重聚焦图像的原始数据生成多个重聚焦图像中的每个重聚焦图像的聚焦部分和非聚焦部分,并且拼接聚焦部分和非聚焦部分。
换句话说,直接由原始数据生成聚焦部分和非聚焦部分。这种情况下,可以省去截取聚焦部分和非聚焦部分的步骤,简化了图像处理过程。
根据本发明的实施例,在120中,图像处理装置还可以根据全部深度平面的重聚焦图像的原始数据生成全部深度平面对应的重聚焦图像,根据多个 深度平面的深度信息从全部深度平面的重聚焦图像中选择多个深度平面的重聚焦图像,并且根据多个深度平面的重聚焦图像生成多深度平面重聚焦图像,其中全部深度平面分别对应于全部深度平面的重聚焦图像,多深度平面重聚焦图像包括多个重聚焦图像的聚焦部分。
换句话说,在确定深度信息并生成多深度平面重聚焦图像之前,预先生成全部分重聚焦图像,这样可以在确定深度信息之后从预先生成的重聚焦图像中选择相应的重聚焦图像,从而缩短了生成多深度重聚焦图像的时间,提升了用户体验。
根据本发明的实施例,在120中,图像处理装置可以确定全部重聚焦图像的像素的点扩散函数;根据全部重聚焦图像的像素的点扩散函数生成融合权重模板,其中融合权重模板包括全部重聚焦图像的像素的融合权重,多个重聚焦图像的像素的融合权重高于全部重聚焦图像中除多个重聚焦图像之外的其它重聚焦图像的像素的融合权重,多个重聚焦图像中聚焦程度高的像素的融合权重高于多个重聚焦图像中聚焦程度较低的像素的融合权重;根据融合权重模板,对全部重聚焦图像进行图像融合。
换句话说,由深度信息确定的多个重聚焦图像的融合权重高于其它重聚焦图像的权重,并且对于多个重聚焦图像,像素的聚焦程度与融合权重成正比。这样,使得清楚呈现的是与多个深度空间对应的聚焦部分。
根据本发明的实施例,在120中,图像处理装置可以从多个重聚焦图像中的每个重聚焦图像中选取聚焦部分和非聚焦部分,并且拼接聚焦部分和非聚焦部分。
可选地,作为另一实施例,图1的方法还包括:从全部重聚焦图像中的每个重聚焦图像中选取聚焦部分和非聚焦部分;以深度平面的深度或像素坐标为索引,分别将聚焦部分和非聚焦部分存储在查询表中;以深度平面的深度或像素坐标为索引从查询表中查询多个深度平面的重聚焦图像的聚焦部分和非聚焦部分,其中根据多个深度平面的重聚焦图像生成多深度平面重聚焦图像,包括:拼接多个深度平面的重聚焦图像的聚焦部分和非聚焦部分。
例如,深度平面的深度为各个深度平面到基准平面(例如,相机)的距离,每个深度平面可以对应一个深度距离。图像上与每个深度平面对应的部分可以包括多个像素,每个像素的坐标可以对应一个深度平面,每个深度平面可以对应多个像素坐标。可以在拼接聚焦部分和非聚焦部分之前建立深度 平面的深度或像素坐标与聚焦部分的对应关系,和/或深度平面的深度或像素坐标与非聚焦部分的对应关系,并且将上述对应关系存储在查询表中。本发明的实施例对建立该对应关系的时机不作限定,例如,可以是在完成摄影过程中或者在完成摄影过程之后,也可以是在拼接聚焦部分和非聚焦部分之前的任何时间。例如,可以预先建立针对所有深度平面的聚焦部分和非聚焦部分与深度平面的深度或像素坐标的对应关系,这样,当在用户界面上接收到用户输入时,可以首先根据用户输入确定深度平面的深度或像素坐标,然后根据深度平面的深度或像素坐标从查询表中获取聚焦部分和非聚焦部分,并将聚焦部分和非聚焦部分拼接成多深度平面重聚焦图像,从而提高了用户体验。另外,为了节省存储空间,还可以只存储用户所选择的深度平面对应的原始数据的聚焦部分和非聚焦部分与深度平面的深度或像素坐标的对应关系。
根据本发明的实施例,在拼接聚焦部分和非聚焦部分时,可以对聚焦部分的图像和非聚焦部分的图像进行预处理、图像配准和图像融合。
可选地,作为另一实施例,图1的方法还包括:显示多个重聚焦图像中的一个重聚焦图像,获取在所显示的重聚焦图像的多个区域上的多个用户输入,并且在显示设备上输出生成多深度平面重聚焦图像,其中多个用户输入对应于多个深度平面,并且在确定深度信息时,可以根据多个用户输入确定多个深度平面的深度信息。
例如,在显示多深度平面重聚焦图像之前,可以在用户界面上显示一幅单一深度平面的重聚焦图像。图像处理装置可以根据用户需要,重聚焦多个不同深度平面上的物体,或者重聚焦多段不连续的深度平面内的物体,并且在显示设备上输出多深度平面重聚焦图像,或者输出不同深度平面的重聚焦景物的图像。本发明的实施例可以只显示生成的多深度平面重聚焦图像,还可以显示原有图像和/或中间生成的不同深度平面的重聚焦图像。
可选地,在另一实施例中,在生成多深度平面重聚焦重图像之后,立即在用户界面上显示多深度平面重聚焦图像。
根据本发明的实施例,用户输入为以下之一:用户在触摸屏上的单点点击输入、多点点击输入、单点滑动输入或多点滑动输入;输入设备中的姿态传感器探测到的用户姿态;输入设备中的动作跟踪模块探测到的用户动作。
在110中,可以根据预定义的输入确定预定义的输入对应的多个深度平 面,图1的方法还包括:在显示设备上输出与预定义的输入对应的多深度平面重聚焦图像。
下面结合具体例子,更加详细地描述本发明的实施例。
图2是根据本发明的另一实施例的图像处理过程的示意性流程图。图2的实施例是图1的图像处理方法的例子。图3示出了深度平面与用户输入的区域之间的对应关系的示意图。
210,在用户界面上显示图像。
例如,可以在用户设备(例如,相机)的用户界面上显示一幅图像,该图像可以是普通图片,该图像可以显示位于不同深度平面的景物,但只能够清晰地显示某个深度平面的景物。用户设备可以存储不同深度平面的重聚焦图像的原始数据。每个景物在图片上的区域或位置对应于该景物所在的深度平面。
220,在用户界面上获取用户输入。
例如,当用户点击或触摸用户界面上显示的图像的多个区域或位置时,可以在多个区域或位置上接收到用户的输入。上述用户输入可以是用户通过输入设备输入的非连续性指令,例如,单击鼠标、双击鼠标、按下按钮、在触摸屏上轻触手写笔。上述用户的输入还可以是用户通过输入设备输入的连续性指令,例如,简单移动鼠标,并且记录它的位置,从而实现一个连续的点击动作。本发明实施例的输入设备可以是鼠标、触摸板、多指感应的触摸屏、写字板或屏幕上使用的手写笔,眼部跟踪设备、操纵杆、四通按钮导航控制、压力敏感的方向导航控制、滑条、拨轮、圆形触摸板或红外体感设备等。
应理解,多个深度平面可以构成一个深度区间,因此,也可以通过选择多个深度区间的方式选择多个深度平面。参见图3,用户界面显示的图像上的区域1和区域2对应于深度区间1和深度区间2。这样,用户可以通过选择这两个深度区间对应的区域来选择这两个深度区间对应的深度平面。应理解,用户可以在图像上选择中或点击更多个区域或位置,以便获得更多的深度平面的重聚焦图像。
230,根据用户的输入确定多个深度平面的深度信息。
深度信息用于指示多个需要重聚焦的深度平面的信息。深度信息可以包含深度平面信息、像素坐标信息、像素颜色信息、像素点扩散函数信息、与 像素对应的光线的光场信息、与像素对应的光线的追迹信息中的一种或它们的任意组合。
重聚焦图像的原始数据可以是带有微透镜阵列或孔径阵列的摄像模块拍摄的一幅或多幅图像阵列;或是由多个配置相同或不同的摄像模块所组成的摄像阵列拍摄的一幅或多幅图像阵列;或是单个摄像模组针对同一场景,在不同位置拍摄的多幅图像。由于图像阵列中的视差信息包含场景的深度信息,因此,可以通过块匹配法、图割(Graph Cuts)法、多基线(Multi-Baseline)法等方法获得像素的深度信息,从而获得所需的深度平面信息。通过深度提取方法可以获得每个像素对应物点的深度信息(例如,深度与像素坐标的对应关系)。用户输入可以指示选中像素的坐标,结合用户选中的像素坐标,就可以获得用户选中的深度平面信息。
重聚焦图像的原始数据可以是单个普通摄像模块拍摄的多幅图像。其中多幅图像是指在不同参数设置下对同一场景拍摄的图像,如镜头焦距不同、光圈不同、曝光量不同、镜头与传感器距离不同、镜片之间距离不同、镜片曲率不同、感应波长不同等。由于景物在不同深度平面时,图像的聚焦程度和/或光强信息是不同的,因此,利用这些信息可以获得所需的深度平面信息。例如,用户输入指示像素坐标信息。像素的不同聚焦程度对应不同的深度信息,从而能够根据用户输入获得用户选中的深度平面信息。
重聚焦图像的原始数据可以是带有孔径掩模板、相位掩模板或其他类型掩模板的摄像模块拍摄的一幅或多幅孔径编码图像。由于物体在不同深度平面时,编码图像是不同,因此,利用这些信息可以获得所需的深度平面信息。例如,用户输入指示像素坐标信息,不同深度的像素所产生的编码图像是不同的,从编码图像的特征可以反推出像素的深度信息。
重聚焦图像的原始数据可以是深度感应设备与摄像模块针对同一场景分别获得的深度信息和图像的组合。深度感应设备可以是利用光的飞行时间、光的相位差、结构光光照等原理实现的各种设备。因此,利用深度感应设备提供的深度地图可以获得所需的深度平面信息。例如,用户输入指示所选中的像素的坐标信息,只要求出像素坐标信息与像素深度信息的对应关系(即深度地图),就能确定用户选中的深度平面信息。
以上几种方法均为通过不同的方式获得像素坐标与像素深度信息的映射关系或对应关系。
240,根据确定的深度信息确定相应的原始数据。
根据确定的深度信息可以确定需要重聚焦的深度平面,从而确定与这些需要重聚焦的深度平面相对应的原始数据。
250,对所确定的原始数据进行重聚焦操作,生成不同深度平面(或平面)的重聚焦图像。
生成不同深度平面的重聚焦图像的方法包括卷积算法、去卷积算法、融合算法、拼接算法、光线追迹算法、滤波算法、单一深度平面的重聚焦算法或上述算法的任意组合。根据重聚焦算法的不同,在生成不同深度平面的重聚焦图像之前,还可以获取像素的聚焦程度、弥散圆半径、点扩散函数、梯度、强度差值、结构张量或它们的任意组合。像素的聚焦程度、弥散圆半径、点扩散函数、梯度、强度差值、结构张量可以通过卷积、反卷积、傅里叶变换、逆傅里叶变换、插值、求导或它们的任意组合计算得到,也可以通过机器学习、统计、理论仿真等方法获得。
本发明的实施例可以通过光场重绘、三维重建、合成孔径等技术获得不同深度平面的重聚焦图像。重聚焦操作的具体的例子详见图6的实施例。
为了减少运算,可以在接收到用户输入时,仅生成用户输入所选择的多个深度平面对应的重聚焦图像。本发明的实施例并不限于此,例如,也可以预先生成所有深度平面的重聚焦图像,然后,在接收到用户输入时,可以直接从这些重聚焦图像中选择与用户输入所选择的多个深度平面的重聚焦图像。
260,合并多个重聚焦图像,从而得到多深度平面重聚焦图像。
本发明实施例可以采用图像融合方法合并多个重聚焦图像。在图像融合方法中,可以采用带权重的融合。本发明的实施例可以在融合不同深度平面的重聚焦图像之前计算融合权重。为了减少计算,可以只融合所选择的多个深度平面对应的重聚焦图像。融合权重的信息可以与重聚焦图像存储在同一文件中,或单独形成融合权重模板或查询表,并单独存储在另一文件中。
可替代地,本发明实施例可以采用图像拼接方法合并多个重聚焦图像。
270,显示生成的多深度平面重聚焦图像。
例如,相机可以在生成不同深度平区间的重聚焦图像之后,立即在用户界面上显示多深度平面重聚焦图像。另外,在显示生成的多深度平面重聚焦图像的同时,还可以显示原有图像和/或中间生成的不同深度平区间的重聚焦 图像。例如,可以在同一用户界面上分屏显示多个图像。
应理解,生成不同深度平面的重聚焦图像可以在合成多深度平面重聚焦图像之前的任意步骤执行,例如,可以在摄像过程中执行,或者在摄像过程结束之后立即执行,也可以在需要合成多深度平面重聚焦图像时才执行,本发明的实施例对此不作限制。
下面以光场相机为例说明如何生成不同深度平面的重聚焦图像。图4是根据本发明的实施例的双平面表示法的两个平面的示意图。图5是根据本发明的实施例的合成像摄影原理的示意图。图6是根据本发明的实施例的合成像摄影原理的模型的几何关系的示意图。
光场相机包括主镜头和图像传感器(未示出),主镜头和图像传感器之间设置有一个微透镜阵列。图像传感器记录每个微透镜阵列上形成小的图像,很多个小的图像组成图像阵列。光场相机内还可以设置相应的图像处理软件,可以将记录的图像阵列重建成用户可接受的常规图像形式,并且呈现出对不同深度平面进行聚焦的效果和从不同视角观看场景的效果。在完成拍摄过程之后,光场相机可以根据用户的需求,通过软件在用户选择的深度平面上进行重聚焦。图像的聚焦深度可以不固定,并且可以根据用户的需求改变。光场相机在记录光线强度信息的同时,还记录了光线的角度信息。光线的角度信息包含着场景中景物的深度信息。换句话说,光场相机采集场景的三维信息(例如,三维光场数据)。光场相机在获取场景的三维光场数据之后,可以根据用户的需求,利用重聚焦算法分别对不同的深度平面聚焦。当用户选择用户界面上显示的图像中的一个区域(该区域可以对应场景中的一个深度平面)时,相机可以利用重聚焦算法对图像进行处理,最终呈现出聚焦在所选择的深度平区间的效果。而当用户选择用户界面上显示的图像中的多个区域时,相机在生成多深度平面重聚焦图像之前,可以首先根据原始数据生成多个区域对应的多个深度平面的重聚焦图像,然后再融合或拼接成多深度平面重聚焦图像。
如图4所示,可以采用双平面表示法来表示光场,即一条光线L的坐标为该光线在两个平行平面u-v和s-t上的交点的坐标。例如,光场相机采集的光场信息用L(u,v,s,t)来表示。光场相机的合成光场的光场信息用L’(u’,v’,s’,t’)来表示,两者关系如图5所示。
合成像平面的辐照图像的值为:
Figure PCTCN2015080021-appb-000001
其中D为合成像平面与合成孔径之间的距离。A为孔径函数,例如,孔径内的值是1,孔径外的值是0。θ是光线(u’,v’,s’,t’)相对合成像平面的入射角。根据近轴近似原理,可以忽略上式中的cos4θ,并且忽略1/D2,得到如下公式:
Figure PCTCN2015080021-appb-000002
L和L’的关系如图6所示,并且可以用光场信息L(u,v,s,t)来表示公式(2)。根据合成摄影原理可以得到L与L’之间的关系如下:
Figure PCTCN2015080021-appb-000003
Figure PCTCN2015080021-appb-000004
其中,α为表征主镜头平面到合成像面之间的距离的比例系数;β表征合成孔径平面到微透镜平面的距离的比例系数。
由公式(4)和(2)得到合成摄影公式:
Figure PCTCN2015080021-appb-000005
根据公式(5)式绘制图像,即可获得不同深度平面的重聚焦图像。
图7是根据本发明的又一实施例的图像处理过程的示意性流程图。图7的实施例是图1的图像处理方法的例子,在此适当省略详细的描述。图7的710至740分别与图2的步骤210至240类似,在此不再赘述。
710,在用户界面上显示图像。
720,在用户界面上获取用户输入。
730,根据用户的输入确定对应的深度信息。
740,利用深度信息对原始数据进行重聚焦操作,生成不同深度平面的重聚焦图像。
745,确定多个重聚焦图像的像素的点扩散函数。
750,根据像素的点扩散函数生成融合权重模板。
760,根据融合权重模板,对多个重聚焦图像进行图像融合。
770,显示生成的多深度平面重聚焦图像。
应理解,步骤745的确定像素的点扩散函数的步骤可以由确定像素的聚焦程度、弥散圆半径、梯度、强度差值、结构张量、像素对应光线的光场信息、像素对应光线的追迹信息的步骤或它们的任意组合替代。换句话说,点扩散函数可以由这些参数替代。
应理解,确定点扩散函数的步骤可以在生成融合权重模板之前任意步骤执行。
图8是根据本发明的另一实施例的图像处理过程的示意性流程图。图8的实施例是图1的图像处理方法的例子,在此适当省略详细的描述。图8的810至840分别与图2的步骤210至240类似,在此不再赘述。
810,在用户界面上显示图像。
820,在用户界面上获取用户输入。
830,根据用户的输入确定对应的深度信息。
840,利用深度信息对原始数据进行重聚焦操作,生成不同深度平面的重聚焦图像。
850,根据重聚焦图像上像素的聚焦程度确定像素的融合权重,其中聚焦程度高的像素的融合权重高于同一重聚焦图像上聚焦程度低的像素的融合权重。
860,根据多个深度平面的融合权重,对多个重聚焦图像进行图像融合。
870,显示生成的多深度平面重聚焦图像。
图9是根据本发明的再一实施例的图像处理过程的示意性流程图。图9的实施例是图1的图像处理方法的例子,在此适当省略详细的描述。
910,在用户界面上显示图像。与图2的步骤210类似,在此不再赘述。
920,在用户界面上获取用户输入。与图2的步骤220类似,在此不再赘述。
930,根据用户的输入确定对应的深度信息。
例如,该深度信息可以包括每个用户输入区域对应的深度平面的深度平面的深度或像素坐标。该像素坐标可以是该区域中的任一像素的坐标,例如,可以是该区域的中心点的坐标。
940,利用深度信息对原始数据进行重聚焦操作,生成不同深度平面的重聚焦图像。
945,从不同深度平面的重聚焦图像截取聚焦部分和非聚焦部分,并将聚焦部分和非聚焦部分分别存储在聚焦部分查询表和非聚焦部分查询表中。
可替代地,作为另一实施例,也可以是预先(例如,在接收用户输入之前)根据原始数据生成所有深度平面的重聚焦图像,然后从所有深度平面的重聚焦图像中截取聚焦部分和非聚焦部分,并且将聚焦部分和非聚焦部分分别存储在聚焦部分查询表和非聚焦部分查询表;或者预先根据原始数据生成所有深度平面的聚焦部分和非聚焦部分,并且将聚焦部分和非聚焦部分分别存储在非聚焦部分查询表中。在这种情况下,可以省略步骤940和945。
上述聚焦部分查询表中存储聚焦部分与深度平面的深度或像素坐标的对应关系,上述非聚焦部分查询表中存储非聚焦部分与深度平面的深度或像素坐标的对应关系。
950,根据深度信息确定每个用户输入的深度平面的深度或像素坐标,并且根据深度平面的深度或像素坐标查询查询表得到用户输入对应的深度平面的聚焦部分和非聚焦部分。
960,将多个深度平面对应的聚焦部分和非聚焦部分合成多深度平面重聚焦图像。
本发明的实施例可以采用图像拼接方法多个深度平面对应的聚焦部分和非聚焦部分合成多深度平面重聚焦图像。例如,可以从图像中截取聚焦区域和非聚焦区域;拼接上述区域形成多深度平面重聚焦图像。聚焦区域和非聚焦区域至少包含一个像素。图像拼接过程主要包含预处理、配准和融合三个步骤。预处理包括图像去噪、图像修正和图像投影。图像投影可以是平面投影法、球面投影法、立方体投影法或柱面投影法。图像去噪可以是邻域平均法、空间域低通滤波法、空间域非线性滤波法。图像修正可以是针对灰度值偏差的修正,或者针对几何形变的修正。例如,针对灰度值偏差的修正方法如下:归一化后的图像灰度值为:
Figure PCTCN2015080021-appb-000006
其中f表示参考图像的灰度,μf表示参考图像的平均灰度、σf表示参考 图像的标准差,g表示待拼接图像的灰度,μg表示待拼接图像的平均灰度,σg表示待拼接图像的标准差。图像配准的方法可以是块匹配算法、基于快速傅里叶变换的图像配准法、基于傅里叶变换的相位相关图像配准法、基于轮廓特征的算法、角点检测算法、尺度不变特征转换(Scale-invariant feature transform,SIFT)算法、特征配准(Speeded Up Robust Features,SURF)算法、基于光流的方法、基于SIFT流的方法。通过配准算法可以确定待拼接图像的对应位置,再通过找到图像间的变换关系并进行重采样,就可将图像拼接在一起。图像变换的模型可以是图像的平移、旋转、缩放、反射、错切和它们的任意组合。
970,显示生成的多深度平面重聚焦图像。与图2的步骤270类似,在此不再赘述。
上面描述了根据本发明实施例的图像处理方法和过程,下面分别结合图10和图11描述根据本发明实施例的图像处理装置。
图10是根据本发明的一个实施例的图像处理装置1000的结构示意图。图像处理装置1000包括:确定模块1010和生成模块1020。
确定模块1010用于确定多个深度平面的深度信息,其中多个深度平面的深度信息用于指示多个深度平面,多个深度平面分别对应于多个深度平面的多个重聚焦图像,其中多个重聚焦图像由多个重聚焦图像的原始数据生成。生成模块1020用于根据深度信息,生成多深度平面重聚焦图像,其中多深度平面重聚焦图像包括多个重聚焦图像的聚焦部分。
根据本发明的实施例,多个深度平面分别对应于多个重聚焦图像的原始数据,生成模块1020根据多个深度平面的深度信息确定多个重聚焦图像的原始数据,并且根据多个重聚焦图像的原始数据生成多深度平面重聚焦图像,其中多深度平面重聚焦图像包括多个重聚焦图像的聚焦部分。
生成模块1020采用重聚焦算法对多个重聚焦图像的原始数据进行重聚焦处理,生成多个重聚焦图像;合并多个重聚焦图像,以生成多深度平面重聚焦图像。
根据本发明的实施例,生成模块1020采用图像融合方法合并多个重聚焦图像的聚焦部分。
根据本发明的实施例,生成模块1020确定多个重聚焦图像的像素的点扩散函数;根据像素的点扩散函数生成融合权重模板,并且根据融合权重模 板,对多个重聚焦图像进行图像融合,其中融合权重模板包括像素的融合权重,并且聚焦程度高像素的融合权重高于聚焦程度低的像素的融合权重。
根据本发明的实施例,生成模块1020从多个重聚焦图像中的每个重聚焦图像中截取聚焦部分和非聚焦部分,并拼接聚焦部分和非聚焦部分。
根据本发明的实施例,生成模块1020根据多个深度平面的深度信息确定多个重聚焦图像的原始数据,根据多个重聚焦图像的原始数据生成多个重聚焦图像中的每个重聚焦图像的聚焦部分和非聚焦部分,并且拼接聚焦部分和非聚焦部分。
根据本发明的实施例,生成模块1020还根据全部深度平面的重聚焦图像的原始数据生成全部深度平面对应的重聚焦图像,全部深度平面分别对应于全部深度平面的重聚焦图像,其中,生成模块1020根据多个深度平面的深度信息从全部深度平面的重聚焦图像中选择多个深度平面的重聚焦图像,并且根据多个深度平面的重聚焦图像生成多深度平面重聚焦图像,其中多深度平面重聚焦图像包括多个重聚焦图像的聚焦部分。
根据本发明的实施例,生成模块1020确定全部重聚焦图像的像素的点扩散函数,根据全部重聚焦图像的像素的点扩散函数生成融合权重模板,并且根据融合权重模板,对全部重聚焦图像进行图像融合,其中融合权重模板包括全部重聚焦图像的像素的融合权重,多个重聚焦图像的像素的融合权重高于全部重聚焦图像中除多个重聚焦图像之外的其它重聚焦图像的像素的融合权重,多个重聚焦图像中聚焦程度高的像素的融合权重高于多个重聚焦图像中聚焦程度较低的像素的融合权重。
根据本发明的实施例,生成模块1020从多个重聚焦图像中的每个重聚焦图像中选取聚焦部分和非聚焦部分,并且拼接聚焦部分和非聚焦部分。
可选地,作为另一实施例,图像处理装置1000还包括:存储模块1030、选取模块1040和查询模块1050。选取模块1040,用于从全部重聚焦图像中的每个重聚焦图像中选取聚焦部分和非聚焦部分;存储模块1030以深度平面的深度或像素坐标为索引,分别将聚焦部分和非聚焦部分存储在查询表中;查询模块1050以深度平面的深度或像素坐标为索引从查询表中查询多个深度平面的重聚焦图像的聚焦部分和非聚焦部分,其中生成模块1020拼接多个深度平面的重聚焦图像的聚焦部分和非聚焦部分。
根据发明的实施例,生成模块1020在进行拼接时对聚焦部分的图像和 非聚焦部分的图像进行预处理、图像配准和图像融合。
可选地,作为另一实施例,图像处理装置1000还包括:显示模块1070,用于显示多个重聚焦图像中的一个重聚焦图像;获取模块1060,用于获取在所显示的重聚焦图像的多个区域上的多个用户输入,其中多个用户输入对应于多个深度平面,其中显示模块1030在显示设备上输出生成多深度平面重聚焦图像,确定模块1010根据多个用户输入确定多个深度平面的深度信息。
根据本发明的实施例,用户输入为以下之一:用户在触摸屏上的单点点击输入、多点点击输入、单点滑动输入或多点滑动输入;输入设备中的姿态传感器探测到的用户姿态;输入设备中的动作跟踪模块探测到的用户动作。
可选地,作为另一实施例,确定模块1010根据预定义的输入确定预定义的输入对应的多个深度平面,图像处理装置1000还包括:显示模块1030,用于在显示设备上输出与预定义的输入对应的多深度平面重聚焦图像。
图像处理装置1000的各个单元的操作和功能可以参考上述图1的方法,为了避免重复,在此不再赘述。
图11是根据本发明的另一实施例的图像处理装置1100的结构示意图。图像处理装置1100包括:处理器1110、存储器1120和通信总线1130。
处理器1110通过通信总线1130调用存储器1120中存储的代码,用以确定多个深度平面的深度信息,其中多个深度平面的深度信息用于指示多个深度平面,多个深度平面分别对应于多个深度平面的多个重聚焦图像,其中多个重聚焦图像由多个重聚焦图像的原始数据生成,并且根据深度信息,生成多深度平面重聚焦图像,其中多深度平面重聚焦图像包括多个重聚焦图像的聚焦部分。
根据本发明的实施例,处理器1110多个深度平面分别对应于多个重聚焦图像的原始数据,处理器根据多个深度平面的深度信息确定多个重聚焦图像的原始数据,并且根据多个重聚焦图像的原始数据生成多深度平面重聚焦图像,其中多深度平面重聚焦图像包括多个重聚焦图像的聚焦部分。
根据本发明的实施例,处理器1110采用重聚焦算法对多个重聚焦图像的原始数据进行重聚焦处理,生成多个重聚焦图像,并且合并多个重聚焦图像,以生成多深度平面重聚焦图像。
根据本发明的实施例,处理器1110采用图像融合方法合并多个重聚焦图像的聚焦部分。
根据本发明的实施例,处理器1110确定多个重聚焦图像的像素的点扩散函数;根据像素的点扩散函数生成融合权重模板,并且根据融合权重模板,对多个重聚焦图像进行图像融合,其中融合权重模板包括像素的融合权重,并且聚焦程度高像素的融合权重高于聚焦程度低的像素的融合权重。
根据本发明的实施例,处理器1110从多个重聚焦图像中的每个重聚焦图像中选取聚焦部分和非聚焦部分,并且拼接聚焦部分和非聚焦部分。
根据本发明的实施例,处理器1110根据多个深度平面的深度信息确定多个重聚焦图像的原始数据,根据多个重聚焦图像的原始数据生成多个重聚焦图像中的每个重聚焦图像的聚焦区域和非聚焦区域,并且拼接聚焦部分和非聚焦部分。
根据本发明的实施例,处理器1110还根据全部深度平面的重聚焦图像的原始数据生成全部深度平面对应的重聚焦图像,全部深度平面分别对应于全部深度平面的重聚焦图像,其中处理器1110根据多个深度平面的深度信息从全部深度平面的重聚焦图像中选择多个深度平面的重聚焦图像,并且根据多个深度平面的重聚焦图像生成多深度平面重聚焦图像,其中多深度平面重聚焦图像包括多个重聚焦图像的聚焦部分。
根据本发明的实施例,根据发明的实施例,处理器1110确定全部重聚焦图像的像素的点扩散函数,根据全部重聚焦图像的像素的点扩散函数生成融合权重模板,并且根据融合权重模板,对全部重聚焦图像进行图像融合,其中融合权重模板包括全部重聚焦图像的像素的融合权重,多个重聚焦图像的像素的融合权重高于全部重聚焦图像中除多个重聚焦图像之外的其它重聚焦图像的像素的融合权重,多个重聚焦图像中聚焦程度高的像素的融合权重高于多个重聚焦图像中聚焦程度较低的像素的融合权重。
根据发明的实施例,处理器1110从多个重聚焦图像中的每个重聚焦图像中选取聚焦部分和非聚焦部分,并且拼接聚焦部分和非聚焦部分。
根据发明的实施例,处理器1110在进行拼接时对聚焦部分的图像和非聚焦部分的图像进行预处理、图像配准和图像融合。
可选地,作为另一实施例,图像处理装置1100还包括:显示器1140,用于显示多个重聚焦图像中的一个重聚焦图像;输入接口1150,用于获取在所显示的重聚焦图像的多个区域上的多个用户输入,其中多个用户输入对应于多个深度平面,其中显示器1140输出生成多深度平面重聚焦图像,处理 器1110根据多个用户输入确定多个深度平面的深度信息。
根据本发明的实施例,用户输入为以下之一:用户在显示器1140的触摸屏上的单点点击输入、多点点击输入、单点滑动输入或多点滑动输入;输入设备中的姿态传感器探测到的用户姿态;输入设备中的动作跟踪模块探测到的用户动作。
可选地,作为另一实施例,处理器1110根据预定义的输入确定预定义的输入对应的多个深度平面,方法还包括:显示器1140,用于输出与预定义的输入对应的多深度平面重聚焦图像。
图像处理装置1100的各个单元的操作和功能可以参考上述图1的方法,为了避免重复,在此不再赘述。
本领域普通技术人员可以意识到,结合本文中所公开的实施例描述的各示例的单元及算法步骤,能够以电子硬件、或者计算机软件和电子硬件的结合来实现。这些功能究竟以硬件还是软件方式来执行,取决于技术方案的特定应用和设计约束条件。专业技术人员可以对每个特定的应用来使用不同方法来实现所描述的功能,但是这种实现不应认为超出本发明的范围。
所属领域的技术人员可以清楚地了解到,为描述的方便和简洁,上述描述的系统、装置和单元的具体工作过程,可以参考前述方法实施例中的对应过程,在此不再赘述。
在本申请所提供的几个实施例中,应该理解到,所揭露的系统、装置和方法,可以通过其它的方式实现。例如,以上所描述的装置实施例仅仅是示意性的,例如,所述单元的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,例如多个单元或组件可以结合或者可以集成到另一个系统,或一些特征可以忽略,或不执行。另一点,所显示或讨论的相互之间的耦合或直接耦合或通信连接可以是通过一些接口,装置或单元的间接耦合或通信连接,可以是电性,机械或其它的形式。
所述作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是或者也可以不是物理单元,即可以位于一个地方,或者也可以分布到多个网络单元上。可以根据实际的需要选择其中的部分或者全部单元来实现本实施例方案的目的。
另外,在本发明各个实施例中的各功能单元可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一 个单元中。
所述功能如果以软件功能单元的形式实现并作为独立的产品销售或使用时,可以存储在一个计算机可读取存储介质中。基于这样的理解,本发明的技术方案本质上或者说对现有技术做出贡献的部分或者该技术方案的部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质中,包括若干指令用以使得一台计算机设备(可以是个人计算机,服务器,或者网络设备等)执行本发明各个实施例所述方法的全部或部分步骤。而前述的存储介质包括:U盘、移动硬盘、只读存储器(ROM,Read-Only Memory)、随机存取存储器(RAM,Random Access Memory)、磁碟或者光盘等各种可以存储程序代码的介质。
以上所述,仅为本发明的具体实施方式,但本发明的保护范围并不局限于此,任何熟悉本技术领域的技术人员在本发明揭露的技术范围内,可轻易想到变化或替换,都应涵盖在本发明的保护范围之内。因此,本发明的保护范围应以权利要求的保护范围为准。

Claims (30)

  1. 一种图像处理方法,其特征在于,包括:
    确定多个深度平面的深度信息,所述多个深度平面的深度信息用于指示所述多个深度平面,所述多个深度平面分别对应于所述多个深度平面的多个重聚焦图像,其中所述多个重聚焦图像由所述多个重聚焦图像的原始数据生成;
    根据所述深度信息,生成多深度平面重聚焦图像,其中所述多深度平面重聚焦图像包括所述多个重聚焦图像的聚焦部分。
  2. 根据权利要求1所述的图像处理方法,其特征在于,所述多个深度平面分别对应于所述多个重聚焦图像的原始数据,其中,所述根据所述深度信息,生成多深度平面重聚焦图像,包括:
    根据所述多个深度平面的深度信息确定所述多个重聚焦图像的原始数据;
    根据所述多个重聚焦图像的原始数据生成多深度平面重聚焦图像,其中所述多深度平面重聚焦图像包括所述多个重聚焦图像的聚焦部分。
  3. 根据权利要求2所述的图像处理方法,其特征在于,所述根据所述多个重聚焦图像的原始数据生成多深度平面重聚焦图像,包括:
    采用重聚焦算法对所述多个重聚焦图像的原始数据进行重聚焦处理,生成所述多个重聚焦图像;
    合并所述多个重聚焦图像,以生成所述多深度平面重聚焦图像。
  4. 根据权利要求3所述的图像处理方法,其特征在于,所述合并所述多个重聚焦图像,包括:
    采用图像融合方法合并所述多个重聚焦图像的聚焦部分。
  5. 根据权利要求4所述的图像处理方法,其特征在于,所述采用图像融合方法合并所述多个重聚焦图像的聚焦部分,包括:
    确定所述多个重聚焦图像的像素的点扩散函数;
    根据所述像素的点扩散函数生成融合权重模板,其中所述融合权重模板包括像素的融合权重,并且聚焦程度高的像素的融合权重高于聚焦程度低的像素的融合权重;
    根据所述融合权重模板,对所述多个重聚焦图像进行图像融合。
  6. 根据权利要求3所述的图像处理方法,其特征在于,所述多个重聚 焦图像中的每个重聚焦图像包括聚焦部分和非聚焦部分,所述合并所述多个重聚焦图像,包括:
    从所述多个重聚焦图像中的每个重聚焦图像中选取聚焦部分和非聚焦部分;
    拼接所述聚焦部分和非聚焦部分。
  7. 根据权利要求2所述的图像处理方法,其特征在于,所述根据所述多个重聚焦图像生成所述多深度平面重聚焦图像,包括:
    根据所述多个重聚焦图像的原始数据生成所述多个重聚焦图像中的每个重聚焦图像的聚焦部分和非聚焦部分;
    拼接所述聚焦部分和非聚焦部分。
  8. 根据权利要求1所述的图像处理方法,其特征在于,还包括:
    根据全部深度平面的重聚焦图像的原始数据生成全部深度平面对应的重聚焦图像,所述全部深度平面分别对应于所述全部深度平面的重聚焦图像,
    其中,所述根据所述深度信息,生成多深度平面重聚焦图像,包括:
    根据多个深度平面的深度信息从所述全部深度平面的重聚焦图像中选择所述多个深度平面的重聚焦图像;
    根据所述多个深度平面的重聚焦图像生成所述多深度平面重聚焦图像,其中所述多深度平面重聚焦图像包括所述多个重聚焦图像的聚焦部分。
  9. 根据权利要求8所述的图像处理方法,其特征在于,所述根据所述多个深度平面的重聚焦图像生成所述多深度平面重聚焦图像,包括:
    确定所述全部重聚焦图像的像素的点扩散函数;
    根据所述全部重聚焦图像的像素的点扩散函数生成融合权重模板,其中所述融合权重模板包括全部重聚焦图像的像素的融合权重,所述多个重聚焦图像的像素的融合权重高于所述全部重聚焦图像中除所述多个重聚焦图像之外的其它重聚焦图像的像素的融合权重,所述多个重聚焦图像中聚焦程度高的像素的融合权重高于所述多个重聚焦图像中聚焦程度较低的像素的融合权重;
    根据所述融合权重模板,对所述全部重聚焦图像进行图像融合。
  10. 根据权利要求8所述的图像处理方法,其特征在于,所述根据所述多个深度平面的重聚焦图像生成所述多深度平面重聚焦图像,包括:
    从所述多个重聚焦图像中的每个重聚焦图像中选取聚焦部分和非聚焦部分;
    拼接所述聚焦部分和所述非聚焦部分。
  11. 根据权利要求8所述的图像处理方法,其特征在于,还包括:
    以深度平面的深度或像素坐标为索引从所述查询表中查询所述多个深度平面的重聚焦图像的聚焦部分和非聚焦部分,其中,所述聚焦部分和非聚焦部分以深度平面的深度或像素坐标为索引存储在查询表中,
    其中所述根据所述多个深度平面的重聚焦图像生成所述多深度平面重聚焦图像,包括:
    拼接多个深度平面的重聚焦图像的聚焦部分和非聚焦部分。
  12. 根据权利要求6、7、10或11任一项所述的图像处理方法,其特征在于,所述拼接所述聚焦部分和非聚焦部分,包括:
    对所述聚焦部分的图像和所述非聚焦部分的图像进行预处理、图像配准和图像融合。
  13. 根据权利要求1至12中的任一项所述的图像处理方法,其特征在于,还包括:
    显示所述多个重聚焦图像中的一个重聚焦图像;
    获取在所显示的重聚焦图像的多个区域上的多个用户输入,其中所述多个用户输入对应于所述多个深度平面;
    在显示设备上输出所述生成多深度平面重聚焦图像;
    其中所述确定多个深度平面的深度信息,包括:
    根据所述多个用户输入确定所述多个深度平面的深度信息。
  14. 根据权利要求13所述的图像处理方法,其特征在于,所述用户输入为以下之一:
    用户在触摸屏上的单点点击输入、多点点击输入、单点滑动输入或多点滑动输入;
    输入设备中的姿态传感器探测到的用户姿态;
    输入设备中的动作跟踪模块探测到的用户动作。
  15. 根据权利要求1至12中的任一项所述的图像处理方法,其特征在于,所述确定多个深度平面的深度信息,包括:
    根据预定义的输入确定所述预定义的输入对应的多个深度平面,所述方 法还包括:
    在显示设备上输出与所述预定义的输入对应的多深度平面重聚焦图像。
  16. 一种图像处理装置,其特征在于,包括:
    确定模块,用于确定多个深度平面的深度信息,其中所述多个深度平面的深度信息用于指示所述多个深度平面,所述多个深度平面分别对应于所述多个深度平面的多个重聚焦图像;
    生成模块,用于根据所述深度信息,生成多深度平面重聚焦图像,其中所述多深度平面重聚焦图像包括所述多个重聚焦图像的聚焦部分。
  17. 根据权利要求16所述的图像处理装置,其特征在于,所述多个深度平面分别对应于所述多个重聚焦图像的原始数据,所述生成模块根据多个深度平面的深度信息确定所述多个重聚焦图像的原始数据,并且根据所述多个重聚焦图像的原始数据生成多深度平面重聚焦图像,其中所述多深度平面重聚焦图像包括所述多个重聚焦图像的聚焦部分。
  18. 根据权利要求17所述的图像处理装置,其特征在于,所述生成模块采用重聚焦算法对所述多个重聚焦图像的原始数据进行重聚焦处理,生成所述多个重聚焦图像;合并所述多个重聚焦图像,以生成所述多深度平面重聚焦图像。
  19. 根据权利要求18所述的图像处理装置,其特征在于,所述生成模块采用图像融合方法合并所述多个重聚焦图像的聚焦部分。
  20. 根据权利要求19所述的图像处理装置,其特征在于,所述生成模块确定所述多个重聚焦图像的像素的点扩散函数,根据所述像素的点扩散函数生成融合权重模板,根据所述融合权重模板,对所述多个重聚焦图像进行图像融合,其中所述融合权重模板包括像素的融合权重,并且聚焦程度高的像素的融合权重高于聚焦程度低的像素的融合权重。
  21. 根据权利要求18所述的图像处理装置,其特征在于,所述生成模块从所述多个重聚焦图像中的每个重聚焦图像中选取聚焦部分和非聚焦部分,并拼接所述聚焦部分和非聚焦部分。
  22. 根据权利要求17所述的图像处理装置,其特征在于,所述生成模块根据所述多个深度平面的深度信息确定所述多个重聚焦图像的原始数据,根据所述多个重聚焦图像的原始数据生成所述多个重聚焦图像中的每个重聚焦图像的聚焦区域和非聚焦区域,并且拼接所述聚焦部分和非聚焦部分。
  23. 根据权利要求16所述的图像处理装置,所述生成模块还根据全部深度平面的重聚焦图像的原始数据生成全部深度平面对应的重聚焦图像,所述生成模块根据多个深度平面的深度信息从所述全部深度平面的重聚焦图像中选择所述多个深度平面的重聚焦图像,并且根据所述多个深度平面的重聚焦图像生成所述多深度平面重聚焦图像,其中,所述全部深度平面分别对应于所述全部深度平面的重聚焦图像,并且所述多深度平面重聚焦图像包括所述多个重聚焦图像的聚焦部分。
  24. 根据权利要求23所述的图像处理装置,其特征在于,所述生成模块确定所述全部重聚焦图像的像素的点扩散函数,根据所述全部重聚焦图像的像素的点扩散函数生成融合权重模板,并且根据所述融合权重模板,对所述全部重聚焦图像进行图像融合,其中所述融合权重模板包括全部重聚焦图像的像素的融合权重,所述多个重聚焦图像的像素的融合权重高于所述全部重聚焦图像中除所述多个重聚焦图像之外的其它重聚焦图像的像素的融合权重,所述多个重聚焦图像中聚焦程度高的像素的融合权重高于所述多个重聚焦图像中聚焦程度较低的像素的融合权重。
  25. 根据权利要求23所述的图像处理装置,其特征在于,所述生成模块从所述多个重聚焦图像中的每个重聚焦图像中选取聚焦部分和非聚焦部分;拼接所述聚焦部分和所述非聚焦部分。
  26. 根据权利要求23所述的图像处理装置,其特征在于,还包括:
    查询模块,用于以深度距离或像素坐标为索引从所述查询表中查询所述多个深度平面的重聚焦图像的聚焦部分和非聚焦部分,其中,所述聚焦部分和非聚焦部分以深度平面的深度或像素坐标为索引存储在查询表中,其中所述生成模块拼接多个深度平面的重聚焦图像的聚焦部分和非聚焦部分。
  27. 根据权利要求21、22、25或26所述的图像处理装置,其特征在于,所述生成模块在进行拼接时对所述聚焦部分的图像和所述非聚焦部分的图像进行预处理、图像配准和图像融合。
  28. 根据权利要求16至27中的任一项所述的图像处理装置,其特征在于,还包括:
    显示模块,用于显示所述多个重聚焦图像中的一个重聚焦图像;
    获取模块,用于获取在所显示的重聚焦图像的多个区域上的多个用户输入,其中所述多个用户输入对应于所述多个深度平面,其中所述显示模块在 显示设备上输出所述生成多深度平面重聚焦图像,所述确定模块根据所述多个用户输入确定所述多个深度平面的深度信息。
  29. 根据权利要求28所述的图像处理装置,其特征在于,所述用户输入为以下之一:用户在触摸屏上的单点点击输入、多点点击输入、单点滑动输入或多点滑动输入;输入设备中的姿态传感器探测到的用户姿态;输入设备中的动作跟踪模块探测到的用户动作。
  30. 根据权利要求16至29中的任一项所述的图像处理装置,其特征在于,所述确定模块根据预定义的输入确定所述预定义的输入对应的多个深度平面,所述图像处理装置还包括:
    显示模块,用于在显示设备上输出与所述预定义的输入对应的多深度平面重聚焦图像。
PCT/CN2015/080021 2014-05-28 2015-05-28 图像处理方法和图像处理装置 WO2015180659A1 (zh)

Priority Applications (4)

Application Number Priority Date Filing Date Title
JP2016559561A JP6736471B2 (ja) 2014-05-28 2015-05-28 画像処理方法および画像処理装置
EP15800270.9A EP3101624B1 (en) 2014-05-28 2015-05-28 Image processing method and image processing device
KR1020167025332A KR101893047B1 (ko) 2014-05-28 2015-05-28 이미지 처리 방법 및 이미지 처리 장치
US15/361,640 US20170076430A1 (en) 2014-05-28 2016-11-28 Image Processing Method and Image Processing Apparatus

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201410230506.1 2014-05-28
CN201410230506.1A CN105335950B (zh) 2014-05-28 2014-05-28 图像处理方法和图像处理装置

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US15/361,640 Continuation US20170076430A1 (en) 2014-05-28 2016-11-28 Image Processing Method and Image Processing Apparatus

Publications (1)

Publication Number Publication Date
WO2015180659A1 true WO2015180659A1 (zh) 2015-12-03

Family

ID=54698118

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2015/080021 WO2015180659A1 (zh) 2014-05-28 2015-05-28 图像处理方法和图像处理装置

Country Status (6)

Country Link
US (1) US20170076430A1 (zh)
EP (1) EP3101624B1 (zh)
JP (1) JP6736471B2 (zh)
KR (1) KR101893047B1 (zh)
CN (1) CN105335950B (zh)
WO (1) WO2015180659A1 (zh)

Families Citing this family (28)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9918017B2 (en) 2012-09-04 2018-03-13 Duelight Llc Image sensor apparatus and method for obtaining multiple exposures with zero interframe time
US9531961B2 (en) 2015-05-01 2016-12-27 Duelight Llc Systems and methods for generating a digital image using separate color and intensity data
US10558848B2 (en) 2017-10-05 2020-02-11 Duelight Llc System, method, and computer program for capturing an image with correct skin tone exposure
US9819849B1 (en) 2016-07-01 2017-11-14 Duelight Llc Systems and methods for capturing digital images
US9807322B2 (en) 2013-03-15 2017-10-31 Duelight Llc Systems and methods for a digital image sensor
US10924688B2 (en) 2014-11-06 2021-02-16 Duelight Llc Image sensor apparatus and method for obtaining low-noise, high-speed captures of a photographic scene
US11463630B2 (en) 2014-11-07 2022-10-04 Duelight Llc Systems and methods for generating a high-dynamic range (HDR) pixel stream
FR3037756B1 (fr) * 2015-06-18 2020-06-19 Lens Correction Technologies Procede et dispositif de production d'une image numerique
JP2017050662A (ja) * 2015-09-01 2017-03-09 キヤノン株式会社 画像処理装置、撮像装置および画像処理プログラム
JP2017191071A (ja) * 2016-04-15 2017-10-19 キヤノン株式会社 分光データ処理装置、撮像装置、分光データ処理方法および分光データ処理プログラム
EP3507765A4 (en) 2016-09-01 2020-01-01 Duelight LLC SYSTEMS AND METHODS FOR FOCUS ADJUSTMENT BASED ON TARGET DEVELOPMENT INFORMATION
WO2018129692A1 (en) * 2017-01-12 2018-07-19 Intel Corporation Image refocusing
CN106898048B (zh) * 2017-01-19 2019-10-29 大连理工大学 一种可适应复杂场景的无畸变集成成像三维显示方法
CN107230192B (zh) 2017-05-31 2020-07-21 Oppo广东移动通信有限公司 图像处理方法、装置、计算机可读存储介质和移动终端
CA3066502A1 (en) 2017-06-21 2018-12-27 Vancouver Computer Vision Ltd. Determining positions and orientations of objects
CN107525945B (zh) * 2017-08-23 2019-08-02 南京理工大学 基于集成成像技术的3d-3c粒子图像测速系统及方法
CN108389223B (zh) * 2018-02-06 2020-08-25 深圳市创梦天地科技股份有限公司 一种图像处理方法及终端
CN108337434B (zh) * 2018-03-27 2020-05-22 中国人民解放军国防科技大学 一种针对光场阵列相机的焦外虚化重聚焦方法
US11575865B2 (en) 2019-07-26 2023-02-07 Samsung Electronics Co., Ltd. Processing images captured by a camera behind a display
CN110602397A (zh) * 2019-09-16 2019-12-20 RealMe重庆移动通信有限公司 图像处理方法、装置、终端及存储介质
CN111260561A (zh) * 2020-02-18 2020-06-09 中国科学院光电技术研究所 一种可用于掩模版缺陷检测的快速多图拼接方法
CN113516614A (zh) 2020-07-06 2021-10-19 阿里巴巴集团控股有限公司 脊柱影像的处理方法、模型训练方法、装置及存储介质
CN112241940B (zh) * 2020-09-28 2023-12-19 北京科技大学 一种多张多聚焦图像融合方法及装置
US11721001B2 (en) * 2021-02-16 2023-08-08 Samsung Electronics Co., Ltd. Multiple point spread function based image reconstruction for a camera behind a display
US11722796B2 (en) 2021-02-26 2023-08-08 Samsung Electronics Co., Ltd. Self-regularizing inverse filter for image deblurring
TWI799828B (zh) * 2021-03-31 2023-04-21 中強光電股份有限公司 影像處理裝置、影像處理方法以及3d影像產生系統
US11832001B2 (en) * 2021-12-20 2023-11-28 Visera Technologies Company Limited Image processing method and image processing system
CN116847209B (zh) * 2023-08-29 2023-11-03 中国测绘科学研究院 一种基于Log-Gabor与小波的光场全聚焦影像生成方法及系统

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080131019A1 (en) * 2006-12-01 2008-06-05 Yi-Ren Ng Interactive Refocusing of Electronic Images
US20100128145A1 (en) * 2008-11-25 2010-05-27 Colvin Pitts System of and Method for Video Refocusing
CN102314683A (zh) * 2011-07-15 2012-01-11 清华大学 一种非平面图像传感器的计算成像方法和成像装置
CN103002218A (zh) * 2011-09-12 2013-03-27 佳能株式会社 图像处理装置及图像处理方法
US20130329068A1 (en) * 2012-06-08 2013-12-12 Canon Kabushiki Kaisha Image processing apparatus and image processing method
CN104281397A (zh) * 2013-07-10 2015-01-14 华为技术有限公司 多深度区间的重聚焦方法、装置及电子设备

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6207202B2 (ja) * 2012-06-08 2017-10-04 キヤノン株式会社 画像処理装置及び画像処理方法
JP5818828B2 (ja) * 2013-01-29 2015-11-18 キヤノン株式会社 画像処理装置、撮像システム、画像処理システム

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080131019A1 (en) * 2006-12-01 2008-06-05 Yi-Ren Ng Interactive Refocusing of Electronic Images
US20100128145A1 (en) * 2008-11-25 2010-05-27 Colvin Pitts System of and Method for Video Refocusing
CN102314683A (zh) * 2011-07-15 2012-01-11 清华大学 一种非平面图像传感器的计算成像方法和成像装置
CN103002218A (zh) * 2011-09-12 2013-03-27 佳能株式会社 图像处理装置及图像处理方法
US20130329068A1 (en) * 2012-06-08 2013-12-12 Canon Kabushiki Kaisha Image processing apparatus and image processing method
CN104281397A (zh) * 2013-07-10 2015-01-14 华为技术有限公司 多深度区间的重聚焦方法、装置及电子设备

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP3101624A4 *

Also Published As

Publication number Publication date
EP3101624A4 (en) 2017-05-17
JP6736471B2 (ja) 2020-08-05
CN105335950B (zh) 2019-02-12
EP3101624B1 (en) 2019-12-04
KR101893047B1 (ko) 2018-08-29
EP3101624A1 (en) 2016-12-07
CN105335950A (zh) 2016-02-17
KR20160121569A (ko) 2016-10-19
US20170076430A1 (en) 2017-03-16
JP2017517794A (ja) 2017-06-29

Similar Documents

Publication Publication Date Title
WO2015180659A1 (zh) 图像处理方法和图像处理装置
TWI554976B (zh) 監控系統及其影像處理方法
US9214013B2 (en) Systems and methods for correcting user identified artifacts in light field images
US20130335535A1 (en) Digital 3d camera using periodic illumination
US20160295108A1 (en) System and method for panoramic imaging
JP6223169B2 (ja) 情報処理装置、情報処理方法およびプログラム
EP2328125A1 (en) Image splicing method and device
US20110273369A1 (en) Adjustment of imaging property in view-dependent rendering
US9813693B1 (en) Accounting for perspective effects in images
JP6452360B2 (ja) 画像処理装置、撮像装置、画像処理方法およびプログラム
US11568555B2 (en) Dense depth computations aided by sparse feature matching
KR102450236B1 (ko) 전자 장치, 그 제어 방법 및 컴퓨터 판독가능 기록 매체
US11436742B2 (en) Systems and methods for reducing a search area for identifying correspondences between images
CN110213491B (zh) 一种对焦方法、装置及存储介质
US11361455B2 (en) Systems and methods for facilitating the identifying of correspondences between images experiencing motion blur
US20230394834A1 (en) Method, system and computer readable media for object detection coverage estimation
US10354399B2 (en) Multi-view back-projection to a light-field
US20230245332A1 (en) Systems and methods for updating continuous image alignment of separate cameras
US11450014B2 (en) Systems and methods for continuous image alignment of separate cameras
CN106604015B (zh) 一种图像处理方法及装置
CN113132715B (zh) 一种图像处理方法、装置、电子设备及其存储介质
Lee et al. A mobile spherical mosaic system

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 15800270

Country of ref document: EP

Kind code of ref document: A1

REEP Request for entry into the european phase

Ref document number: 2015800270

Country of ref document: EP

WWE Wipo information: entry into national phase

Ref document number: 2015800270

Country of ref document: EP

ENP Entry into the national phase

Ref document number: 20167025332

Country of ref document: KR

Kind code of ref document: A

ENP Entry into the national phase

Ref document number: 2016559561

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE