US20160142627A1 - Image capturing device and digital zooming method thereof - Google Patents

Image capturing device and digital zooming method thereof

Info

Publication number
US20160142627A1
US20160142627A1 US14/571,021 US201414571021A
Authority
US
United States
Prior art keywords
image
primary
rectified
generate
rectified image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
US14/571,021
Other versions
US9325899B1 (en)
Inventor
Hong-Long Chou
Yi-Hong Tseng
Wen-Yan Chang
Yu-Chih Wang
Tsan-Wei Wang
Yi-Yi Yu
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Altek Semiconductor Corp
Original Assignee
Altek Semiconductor Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Altek Semiconductor Corp filed Critical Altek Semiconductor Corp
Assigned to Altek Semiconductor Corp. reassignment Altek Semiconductor Corp. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: CHANG, WEN-YAN, CHOU, HONG-LONG, TSENG, YI-HONG, WANG, TSAN-WEI, WANG, YU-CHIH, YU, Yi-yi
Application granted granted Critical
Publication of US9325899B1 publication Critical patent/US9325899B1/en
Publication of US20160142627A1 publication Critical patent/US20160142627A1/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H04N5/23229
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80Camera processing pipelines; Components thereof
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformation in the plane of the image
    • G06T3/0093Geometric image transformation in the plane of the image for image warping, i.e. transforming by individually repositioning each pixel
    • G06T3/18
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformation in the plane of the image
    • G06T3/40Scaling the whole image or part thereof
    • G06T7/0051
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/50Depth or shape recovery
    • G06T7/55Depth or shape recovery from multiple images
    • G06T7/593Depth or shape recovery from multiple images from stereo images
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/45Cameras or camera modules comprising electronic image sensors; Control thereof for generating image signals from two or more image sensors being of different type or operating in different modes, e.g. with a CMOS sensor for moving images in combination with a charge-coupled device [CCD] for still images
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/69Control of means for changing angle of the field of view, e.g. optical zoom objectives or electronic zooming
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/90Arrangement of cameras or camera modules, e.g. multiple cameras in TV studios or sports stadiums
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • H04N5/262Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
    • H04N5/2628Alteration of picture size, shape, position or orientation, e.g. zooming, rotation, rolling, perspective, translation
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • H04N5/262Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
    • H04N5/265Mixing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10004Still image; Photographic image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10004Still image; Photographic image
    • G06T2207/10012Stereo images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20228Disparity calculation for image-based rendering

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Studio Devices (AREA)
  • Image Processing (AREA)

Abstract

An image capturing device and a digital zooming method thereof are proposed in the invention. The method includes the following steps. A scene is captured by a primary lens and a secondary lens to generate a primary image and a secondary image. Image rectification is then performed on the primary and the secondary images to obtain two corresponding rectified images. Feature points detected and matched across the two rectified images are used to determine the corresponding overlapping regions. Pixel displacements and a depth map of the two overlapping regions are then calculated and estimated. Image zooming and image warping are performed on the two rectified images to obtain corresponding warped images, using a homography matrix recalculated according to each specific zooming factor. Finally, the two overlapping regions in the warped images are fused by a weighted blending approach to generate a digital zoomed image.

Description

    CROSS-REFERENCE TO RELATED APPLICATION
  • This application claims the priority benefit of Taiwan application serial no. 103139384, filed on Nov. 13, 2014. The entirety of the above-mentioned patent application is hereby incorporated by reference herein and made a part of this specification.
  • BACKGROUND OF THE INVENTION
  • 1. Field of the Invention
  • The invention generally relates to an image capturing device and a digital zooming method thereof, in particular, to an image capturing device with dual lenses and a digital zooming method thereof.
  • 2. Description of Related Art
  • With the development of technology, various smart image capturing devices, such as tablet computers, personal digital assistants, and smart phones, have become indispensable tools nowadays. Camera lenses equipped in high-end smart image capturing devices provide the same or better specifications than those of traditional consumer cameras, and some even offer three-dimensional image capturing features or pixel quality close to that of digital single-lens reflex cameras.
  • When such image capturing devices perform digital zoom to enlarge an image, image blur and distortion may occur. With a single lens, digital zoom is performed on a single image via an image processing technique; however, after the image is enlarged, its details may not be preserved. As the zooming factor increases, the image appears increasingly blurred and distorted.
  • On the other hand, with dual lenses, a wide-angle lens may be used for capturing a wide-angle image, and a telephoto lens may be used for capturing a narrow-angle image. Either the wide-angle image or the narrow-angle image would be set as a target image for digital zoom. However, if the target image needs to be switched to the other image during the digital zooming process, the viewed image may flicker or appear unsmooth.
  • Accordingly, presenting an image that meets the user's expectation throughout a digital zooming process is one of the problems to be solved.
  • SUMMARY OF THE INVENTION
  • Accordingly, the invention is directed to an image capturing device and a digital zooming method thereof, where a digital zoomed image with high quality would be provided throughout a digital zooming process.
  • The invention is directed to a digital zooming method of an image capturing device, adapted to an image capturing device having a primary lens and a secondary lens. The method includes the following steps: capturing a scene by using the primary lens and the secondary lens to generate a primary image and a secondary image; performing image rectification on the primary image and the secondary image to generate a primary rectified image and a secondary rectified image; performing feature point detection on the primary rectified image and the secondary rectified image so as to detect overlapping regions respectively in the primary rectified image and the secondary rectified image, and further obtaining a plurality of pixel displacements and a depth map of the overlapping regions respectively in the primary rectified image and the secondary rectified image; when a zooming factor is between 1 and a primary-secondary image factor, performing image zooming and image warping on the primary rectified image and the secondary rectified image to generate a primary warped image and a secondary warped image according to the zooming factor, the pixel displacements, and the depth map, wherein the primary-secondary image factor is a ratio of the secondary rectified image to the primary rectified image; and performing image fusion on overlapping regions respectively in the primary warped image and the secondary warped image to generate a digital zoomed image.
  • According to an embodiment of the invention, the step of performing image rectification on the primary image and the secondary image to generate the primary rectified image and the secondary rectified image includes: obtaining a plurality of rectification parameters associated with the primary lens and the secondary lens; and rectifying the primary image and the secondary image to generate the primary rectified image and the secondary rectified image according to the rectification parameters.
  • According to an embodiment of the invention, the step of performing feature point detection on the primary rectified image and the secondary rectified image so as to detect the overlapping regions respectively in the primary rectified image and the secondary rectified image, and further obtaining the pixel displacements and the depth map of the overlapping regions respectively in the primary rectified image and the secondary rectified image includes: detecting a plurality of feature points from the primary rectified image and the secondary rectified image; identifying a plurality of feature point correspondences to calculate a homography matrix according to color information of a plurality of neighboring points of each of the feature points in the primary rectified image and the secondary rectified image; obtaining the overlapping regions respectively in the primary rectified image and the secondary rectified image according to the homography matrix and accordingly obtaining each of the pixel displacements; and performing stereo matching on each of the feature points in the primary rectified image and the secondary rectified image to obtain the depth map corresponding to each of the feature points.
  • According to an embodiment of the invention, the primary lens and the secondary lens have different fields of view and same distortion levels. The field of view of the primary lens is greater than the field of view of the secondary lens. The primary-secondary image factor is fixed and prior known. When the zooming factor is between 1 and the primary-secondary image factor, the step of performing image zooming and image warping on the primary rectified image and the secondary rectified image to generate the primary warped image and the secondary warped image according to the zooming factor, the pixel displacements, and the depth map includes: enlarging the primary rectified image to generate an enlarged primary rectified image according to the zooming factor; shrinking the secondary rectified image to generate a shrunken secondary rectified image according to the zooming factor; and performing image warping on the enlarged primary rectified image and the shrunken secondary rectified image to generate the primary warped image and the secondary warped image according to the pixel displacements and the depth map, wherein a warping level is associated with the depth map. When the zooming factor is less than 1, only the primary rectified image would be shrunken. When the zooming factor is greater than the primary-secondary image factor, only the secondary rectified image would be enlarged.
  • According to an embodiment of the invention, the primary lens and the secondary lens have same fields of view and same distortion levels. After the step of performing image rectification on the primary image and the secondary image to generate the primary rectified image and the secondary rectified image, the digital zooming method further includes: performing image binning on the primary rectified image to generate a binned primary rectified image; performing image cropping on the secondary rectified image to generate a cropped secondary rectified image, wherein a size of the binned primary rectified image and a size of the cropped secondary rectified image are the same; and setting the binned primary rectified image and the cropped secondary rectified image respectively as the primary rectified image and the secondary rectified image. When the zooming factor is between 1 and the primary-secondary image factor, the step of performing image zooming and image warping on the primary rectified image and the secondary rectified image to generate the primary warped image and the secondary warped image according to the zooming factor, the pixel displacements, and the depth map includes: enlarging the primary rectified image to generate an enlarged primary rectified image according to the zooming factor; shrinking the secondary rectified image to generate a shrunken secondary rectified image according to the zooming factor; and performing image warping on the enlarged primary rectified image and the shrunken secondary rectified image to generate the primary warped image and the secondary warped image according to the pixel displacements and the pixel depth map, wherein a warping level is associated with the depth map. When the zooming factor is less than 1, only the primary rectified image would be shrunken. When the zooming factor is greater than the primary-secondary image factor, only the secondary rectified image would be enlarged.
  • According to an embodiment of the invention, the primary lens and the secondary lens have same fields of view and different distortion levels, and the distortion level of the primary lens is much less than the distortion level of the secondary lens. After the step of performing image rectification on the primary image and the secondary image to generate the primary rectified image and the secondary rectified image, the method further includes: performing image cropping on a center region of the secondary rectified image to generate a cropped secondary rectified image; and setting the cropped secondary rectified image as the secondary rectified image. When the zooming factor is between 1 and the primary-secondary image factor, the step of performing image zooming and image warping on the primary rectified image and the secondary rectified image to generate the primary warped image and the secondary warped image according to the zooming factor, the pixel displacements, and the depth map includes: enlarging the primary rectified image to generate an enlarged primary rectified image according to the zooming factor; shrinking the secondary rectified image to generate a shrunken secondary rectified image according to the zooming factor; and performing image warping on the enlarged primary rectified image and the shrunken secondary rectified image to generate the primary warped image and the secondary warped image according to the pixel displacements and the pixel depth map, wherein a warping level is associated with the depth map. When the zooming factor is less than 1, only the primary rectified image would be shrunken. When the zooming factor is greater than the primary-secondary image factor, only the secondary rectified image would be enlarged.
  • According to an embodiment of the invention, the step of performing image fusion on the overlapping regions respectively in the primary warped image and the secondary warped image to generate the digital zoomed image includes: setting a first weight and a second weight respectively corresponding to the primary warped image and the secondary warped image according to the zooming factor; performing image fusion on the overlapping regions respectively in the primary warped image and the secondary warped image to generate a fused overlapping image based on the first weight and the second weight; and substituting the overlapping regions respectively in the primary warped image and the secondary warped image by the fused overlapping image to generate the digital zoomed image.
  • The invention is also directed to an image capturing device including a primary lens, a secondary lens, a storage unit, and at least one processing unit, where the processing unit is coupled to the lens and the storage unit. The storage unit is configured to record a plurality of modules. The processing unit is configured to access and execute the modules recorded in the storage unit. The modules include an image capturing module, an image preprocessing module, a feature analyzing module, an image zooming-warping module, and an image fusion module. The image capturing module is configured to capture a scene by using the primary lens and the secondary lens to generate a primary image and a secondary image. The image preprocessing module is configured to perform image rectification on the primary image and the secondary image to generate a primary rectified image and a secondary rectified image. The feature analyzing module is configured to perform feature point detection on the primary rectified image and the secondary rectified image so as to detect overlapping regions respectively in the primary rectified image and the secondary rectified image, and further obtain a plurality of pixel displacements and a depth map of the overlapping regions respectively in the primary rectified image and the secondary rectified image. The image zooming-warping module is configured to perform image zooming and image warping on the primary rectified image and the secondary rectified image to generate a primary warped image and a secondary warped image according to the zooming factor, the pixel displacements, and the depth map. The image fusion module is configured to perform image fusion on overlapping regions respectively in the primary warped image and the secondary warped image to generate a digital zoomed image.
  • According to an embodiment of the invention, the image preprocessing module obtains a plurality of rectification parameters associated with the primary lens and the secondary lens, and rectifies the primary image and the secondary image to generate the primary rectified image and the secondary rectified image according to the rectification parameters.
  • According to an embodiment of the invention, the feature analyzing module detects a plurality of feature points from the primary rectified image and the secondary rectified image, identifies a plurality of feature point correspondences to calculate a homography matrix according to color information of a plurality of neighboring points of each of the feature points in the primary rectified image and the secondary rectified image, obtains the overlapping regions respectively in the primary rectified image and the secondary rectified image according to the homography matrix and accordingly obtains each of the pixel displacements, and performs stereo matching on each of the feature point correspondences in the primary rectified image and the secondary rectified image to obtain the depth map.
  • According to an embodiment of the invention, the primary lens and the secondary lens have different fields of view and same distortion levels. The field of view of the primary lens is greater than the field of view of the secondary lens. The primary-secondary image factor is fixed and prior known. The image zooming-warping module enlarges the primary rectified image to generate an enlarged primary rectified image according to the zooming factor, shrinks the secondary rectified image to generate a shrunken secondary rectified image according to the zooming factor, and performs image warping on the enlarged primary rectified image and the shrunken secondary rectified image to respectively generate the primary warped image and the secondary warped image according to the pixel displacements and the depth map, wherein a warping level is associated with the depth map.
  • According to an embodiment of the invention, the primary lens and the secondary lens have same fields of view and same distortion levels. The image preprocessing module further performs image binning on the primary rectified image to generate a binned primary rectified image, performs image cropping on the secondary rectified image to generate a cropped secondary rectified image, and sets the binned primary rectified image and the cropped secondary rectified image respectively as the primary rectified image and the secondary rectified image, where a size of the binned primary rectified image and a size of the cropped secondary rectified image are the same. The image zooming-warping module enlarges the primary rectified image to generate an enlarged primary rectified image according to the zooming factor, shrinks the secondary rectified image to generate a shrunken secondary rectified image according to the zooming factor, and performs image warping on the enlarged primary rectified image and the shrunken secondary rectified image to respectively generate the primary warped image and the secondary warped image according to the pixel displacements and the depth map, wherein a warping level is associated with the depth map.
  • According to an embodiment of the invention, the primary lens and the secondary lens have same fields of view and different distortion levels, and the distortion level of the primary lens is much less than the distortion level of the secondary lens. A primary-secondary center image factor between a center region of the primary rectified image and a center region of the secondary rectified image is fixed and prior known. The image preprocessing module further performs image cropping on the center region of the secondary rectified image to generate a cropped secondary rectified image, and sets the cropped secondary rectified image as the secondary rectified image. The image zooming-warping module enlarges the primary rectified image to generate an enlarged primary rectified image according to the zooming factor, shrinks the secondary rectified image to generate a shrunken secondary rectified image according to the zooming factor, and performs image warping on the enlarged primary rectified image and the shrunken secondary rectified image to respectively generate the primary warped image and the secondary warped image according to the pixel displacements and the depth map, wherein a warping level is associated with the depth map.
  • According to an embodiment of the invention, the image fusion module sets a first weight and a second weight respectively corresponding to the primary warped image and the secondary warped image according to the zooming factor, performs image fusion on the overlapping regions respectively in the primary warped image and the secondary warped image to generate a fused overlapping image based on the first weight and the second weight, and substitutes the overlapping regions respectively in the primary warped image and the secondary warped image by the fused overlapping image to generate the digital zoomed image.
  • In summary, in the image capturing device and the digital zooming method proposed in the invention, by analyzing different imaging properties and distortion levels of the dual lenses, image zooming and image warping are automatically performed on images captured by the dual lenses according to a zooming factor to generate two warped images with similar focal lengths, sizes, and fields of view. The two warped images are fused by their weighted sum, and a digital zoomed image corresponding to the zooming factor would be obtained thereafter. As compared with the existing digital zooming techniques, the image capturing device and the digital zooming method proposed in the invention may provide a digital zoomed image with high quality throughout a digital zooming process.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • The accompanying drawings are included to provide a further understanding of the invention, and are incorporated in and constitute a part of this specification. The drawings illustrate embodiments of the invention and, together with the description, serve to explain the principles of the invention.
  • FIG. 1 illustrates a block diagram of an image capturing device according to an embodiment of the invention.
  • FIG. 2 illustrates a flowchart of a digital zooming method of an image capturing device according to an embodiment of the invention.
  • FIG. 3 is a schematic diagram of a primary rectified image and a secondary rectified image according to an embodiment of the invention.
  • FIG. 4 is a schematic diagram of a primary rectified image and a secondary rectified image according to another embodiment of the invention.
  • FIG. 5 is a schematic diagram of a primary image and a secondary image according to an embodiment of the invention.
  • FIG. 6 illustrates a functional block diagram of a digital zooming method of an image capturing device according to an embodiment of the invention.
  • DESCRIPTION OF THE EMBODIMENTS
  • Reference will now be made in detail to the present embodiments of the invention, examples of which are illustrated in the accompanying drawings. Wherever possible, the same reference numbers are used in the drawings and the description to refer to the same or like parts. In addition, the specifications and the like shown in the drawing figures are intended to be illustrative, and not restrictive. Therefore, specific structural and functional details disclosed herein are not to be interpreted as limiting, but merely as a representative basis for teaching one skilled in the art to variously employ the invention.
  • FIG. 1 illustrates a block diagram of an image capturing device according to an embodiment of the invention. It should, however, be noted that this is merely an illustrative example and the invention is not limited in this regard. All components of the image capturing device and their configurations are first introduced in FIG. 1. The detailed functionalities of the components are disclosed along with FIG. 2.
  • Referring to FIG. 1, an image capturing device 100 includes a primary lens 10 a, a secondary lens 10 b, a storage unit 20, and at least one processing unit 30. In the present embodiment, the image capturing device 100 is, for example, a digital camera, a digital camcorder, a digital single lens reflex camera or other devices provided with an image capturing feature such as a smart phone, a tablet computer, a personal digital assistant, and so on. The invention is not limited herein.
  • The primary lens 10 a and the secondary lens 10 b include optical sensing elements for sensing light intensity entering the primary lens 10 a and the secondary lens 10 b to thereby generate images. The optical sensing elements are, for example, charge-coupled device (CCD) elements or complementary metal-oxide semiconductor (CMOS) elements, and yet the invention is not limited thereto. Moreover, focal lengths, sensing sizes, fields of view, and distortion levels of the primary lens 10 a and the secondary lens 10 b may be the same or different. The invention is not limited herein.
  • The storage unit 20 may be one or a combination of a stationary or mobile random access memory (RAM), a read-only memory (ROM), a flash memory, a hard drive or other similar devices. The storage unit 20 is configured to record a plurality of modules executable by the processing unit 30, where the modules may be loaded into the processing unit 30 for performing digital zoom on an image captured by the image capturing device 100.
  • The processing unit 30 may be, for example, a central processing unit (CPU) or other programmable devices for general purpose or special purpose such as a microprocessor and a digital signal processor (DSP), a programmable controller, an application specific integrated circuit (ASIC), a programmable logic device (PLD) or other similar devices or a combination of above-mentioned devices. The processing unit 30 is coupled to the primary lens 10 a, the secondary lens 10 b, and the storage unit 20, and capable of accessing and executing the modules recorded in the storage unit 20.
  • The aforesaid modules include an image capturing module 121, an image preprocessing module 122, a feature analyzing module 123, an image zooming-warping module 124, and an image fusion module 125, where the modules may be loaded into the processing unit 30 for digital zoom.
  • FIG. 2 illustrates a flowchart of a digital zooming method of an image capturing device according to an embodiment of the invention, and the method in FIG. 2 may be implemented by the components of the image capturing device 100 in FIG. 1.
  • Referring to both FIG. 1 and FIG. 2, the image capturing module 121 of the image capturing device 100 captures a scene by using the primary lens and the secondary lens to generate a primary image and a secondary image (Step S202). In other words, when the user desires to capture the scene by using the image capturing device 100, the image capturing module 121 would generate the primary image corresponding to the primary lens 10 a and the secondary image corresponding to the secondary lens 10 b. In the present embodiment, the primary image may be used for previewing purposes; it is an image with a lower quality and a larger field of view. On the other hand, the secondary image may not be used for previewing purposes; it is an image with a higher quality and a smaller field of view, and may be used as an auxiliary for digital zoom in the follow-up steps. More detailed information on the primary image and the secondary image will be given later on.
  • Next, the image preprocessing module 122 performs image rectification on the primary image and the secondary image to respectively generate a primary rectified image and a secondary rectified image (Step S204). To be specific, the image preprocessing module 122 may correct the shifts in brightness, color, and geometric position of the primary image and the secondary image that are respectively caused by the primary lens 10 a and the secondary lens 10 b.
  • In the present embodiment, the image preprocessing module 122 may obtain a plurality of rectification parameters associated with the primary lens 10 a and the secondary lens 10 b. Such rectification parameters may be intrinsic parameters and extrinsic parameters of a camera for image rectification. The intrinsic parameters may be used for describing the transformation between camera coordinates and image coordinates. That is, the camera coordinates may be projected onto a projective plane according to the pinhole imaging principle. The intrinsic parameters may be, for example, focal length, image center, principal point, distortion coefficients, and so forth. The extrinsic parameters are used for describing the transformation between world coordinates and camera coordinates. The extrinsic parameters may be, for example, parameters associated with the position and the viewing angle of the image capturing device 100 in a three-dimensional coordinate system, such as a rotation matrix and a translation vector. The rectification parameters may also be parameters associated with illumination compensation or color correction. The invention is not limited herein. The image preprocessing module 122 may rectify the primary image and the secondary image according to the aforesaid rectification parameters. The rectified primary image and the rectified secondary image may be referred to as a "primary rectified image" and a "secondary rectified image," respectively.
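  • As an illustrative sketch only (not part of the original disclosure), a geometric rectification of this kind can be approximated with OpenCV's stereo rectification routines. The intrinsic matrices, distortion coefficients, and the rotation and translation between the two lenses are assumed to come from an offline calibration; the calib dictionary, the function name rectify_pair, and all other identifiers below are hypothetical, and illumination compensation and color correction are omitted.

        import cv2

        def rectify_pair(primary_img, secondary_img, calib):
            """Rectify a primary/secondary image pair using precomputed intrinsic
            (K, dist) and extrinsic (R, T) calibration data for the two lenses."""
            h, w = primary_img.shape[:2]
            R1, R2, P1, P2, Q, _, _ = cv2.stereoRectify(
                calib["K1"], calib["dist1"], calib["K2"], calib["dist2"],
                (w, h), calib["R"], calib["T"])
            # Build per-lens undistort/rectify maps and resample both images.
            map1x, map1y = cv2.initUndistortRectifyMap(
                calib["K1"], calib["dist1"], R1, P1, (w, h), cv2.CV_32FC1)
            map2x, map2y = cv2.initUndistortRectifyMap(
                calib["K2"], calib["dist2"], R2, P2, (w, h), cv2.CV_32FC1)
            primary_rect = cv2.remap(primary_img, map1x, map1y, cv2.INTER_LINEAR)
            secondary_rect = cv2.remap(secondary_img, map2x, map2y, cv2.INTER_LINEAR)
            return primary_rect, secondary_rect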
  • Next, the feature analyzing module 123 performs feature point detection on the primary rectified image and the secondary rectified image so as to detect overlapping regions respectively in the primary rectified image and the secondary rectified image, and further obtains a plurality of pixel displacements and a depth map of the overlapping regions respectively in the primary rectified image and the secondary rectified image (Step S206). Each of the overlapping regions is an overlapping portion between the field of view of the primary rectified image and that of the secondary rectified image.
  • To be specific, the feature analyzing module 123 may detect a plurality of feature points from the primary rectified image and the secondary rectified image by leveraging a feature detection algorithm such as edge detection, corner detection, blob detection, and so forth. Next, the feature analyzing module 123 may identify a plurality of feature point correspondences and obtain the overlapping regions respectively in the primary rectified image and the secondary rectified image.
  • In an embodiment, the feature analyzing module 123 may identify a plurality of feature point correspondences according to color information of a plurality of neighboring points of each of the feature points in the primary rectified image and the secondary rectified image so as to calculate a homography matrix. The feature analyzing module 123 may not only obtain the pixel displacements of the two overlapping regions via the homography matrix, but may also perform stereo matching due to the similar fields of view so as to estimate the depth map.
  • To be specific, the feature analyzing module 123 may determine the displacement and shift properties of each feature point correspondence to obtain the pixel displacements thereof. On the other hand, the feature analyzing module 123 may perform stereo matching on each of the feature point correspondences to obtain the pixel depth map. In other words, the feature analyzing module 123 may calculate the depth information of each of the feature point correspondences in the overlapping regions respectively in the primary rectified image and the secondary rectified image and store the depth information in a form of a depth map.
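  • The patent does not prescribe a particular detector, matcher, or stereo algorithm; as a hedged sketch of Step S206, ORB features with brute-force matching and semi-global block matching are one possible choice. The function name analyze_overlap, the use of a dense disparity map as a stand-in for the per-feature depth map, and the assumption of 8-bit BGR inputs are all choices of this sketch rather than the disclosure.

        import cv2
        import numpy as np

        def analyze_overlap(primary_rect, secondary_rect):
            """Find feature correspondences, estimate the homography relating the
            two rectified images, and compute a disparity map as a depth proxy."""
            gray1 = cv2.cvtColor(primary_rect, cv2.COLOR_BGR2GRAY)    # assumes BGR input
            gray2 = cv2.cvtColor(secondary_rect, cv2.COLOR_BGR2GRAY)
            orb = cv2.ORB_create(2000)
            kp1, des1 = orb.detectAndCompute(gray1, None)
            kp2, des2 = orb.detectAndCompute(gray2, None)
            matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
            matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
            src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
            dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
            # Homography mapping primary coordinates to secondary coordinates; it
            # delimits the overlapping regions and gives the pixel displacements.
            H, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
            displacements = (dst - src).reshape(-1, 2)
            # Map the secondary view onto the primary image plane so both views
            # have comparable scale, then use dense disparity as a depth stand-in.
            secondary_aligned = cv2.warpPerspective(
                gray2, np.linalg.inv(H), gray1.shape[::-1])
            sgbm = cv2.StereoSGBM_create(minDisparity=0, numDisparities=64, blockSize=5)
            disparity = sgbm.compute(gray1, secondary_aligned).astype(np.float32) / 16.0
            return H, displacements, disparity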
  • Next, when a zooming factor is between 1 and a primary-secondary image factor, the image zooming-warping module 124 performs image zooming and image warping on the primary rectified image and the secondary rectified image to generate a primary warped image and a secondary warped image according to the zooming factor, the pixel displacements, and the depth map (Step S208). The “primary-secondary image factor” is referred to as a ratio of the secondary rectified image to the primary rectified image, and is fixed and prior known. The “zooming factor” is an enlargement level to be adjusted on the primary rectified image; it may be set by the user or may be a default value of the image capturing device 100. The image zooming-warping module 124 may perform image zooming and image warping on the primary rectified image and the secondary rectified image according to the zooming factor as well as relative displacement, shift, and depth information between the two overlapping regions so as to generate two images with two overlapping regions having similar views and appearances, where the zooming factor of each of the two generated images also meets the user's need. Moreover, a warping level of each of the generated images is associated with the depth map. As the depth value increases, the warping level decreases; as the depth value decreases, the warping level increases. More details on the image zooming and image warping processes will be described in the follow-up embodiments.
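  • A minimal sketch of Step S208 for a zooming factor between 1 and the primary-secondary image factor is given below; it uses the single homography H from the previous sketch to bring the secondary image onto the primary image plane and applies the same center-crop zoom to both images. The depth-dependent warping level described above is deliberately omitted for brevity, and all helper names are hypothetical.

        import cv2
        import numpy as np

        def zoom_and_warp(primary_rect, secondary_rect, H, zoom, image_factor):
            """Bring the two rectified images to the requested zooming factor so that
            their overlapping regions have similar views (1 <= zoom <= image_factor).
            H maps primary coordinates to secondary coordinates, as sketched above."""
            h, w = primary_rect.shape[:2]
            # Map the secondary image onto the primary image plane; because the
            # secondary image covers a narrower field of view, this shrinks it.
            secondary_on_primary = cv2.warpPerspective(
                secondary_rect, np.linalg.inv(H), (w, h))

            def center_zoom(img, factor):
                # Enlarge by the zooming factor, then crop back to the original size.
                scaled = cv2.resize(img, None, fx=factor, fy=factor,
                                    interpolation=cv2.INTER_CUBIC)
                y0 = (scaled.shape[0] - h) // 2
                x0 = (scaled.shape[1] - w) // 2
                return scaled[y0:y0 + h, x0:x0 + w]

            primary_warped = center_zoom(primary_rect, zoom)
            secondary_warped = center_zoom(secondary_on_primary, zoom)
            return primary_warped, secondary_warped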
  • Next, the image fusion module 125 performs image fusion on overlapping regions respectively in the primary warped image and the secondary warped image to generate a digital zoomed image (Step S210). To be specific, the image fusion module 125 may set a first weight and a second weight respectively corresponding to the primary warped image and the secondary warped image according to the zooming factor. Next, the image fusion module 125 may perform image fusion on each color pixel of the overlapping regions respectively in the primary warped image and the secondary warped image by a weighted sum based on the first weight and the second weight, where the resulting image is referred to as a "fused overlapping image." Then, the image fusion module 125 may substitute the overlapping region in the primary warped image by the fused overlapping image so as to generate the digital zoomed image with high quality.
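  • A minimal sketch of the fusion in Step S210 follows, assuming the blending weights and a boolean mask of the overlapping region are already available (for example, from a weight rule such as the one illustrated with FIG. 3 below); all names are hypothetical.

        import cv2

        def fuse_overlap(primary_warped, secondary_warped, overlap_mask, w1, w2):
            """Blend the overlapping regions of the two warped images with weights
            w1/w2 and substitute the result back into the primary warped image."""
            blended = cv2.addWeighted(primary_warped, w1, secondary_warped, w2, 0)
            fused = primary_warped.copy()
            fused[overlap_mask] = blended[overlap_mask]   # replace only the overlap
            return fused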
  • In another embodiment, when the zooming factor is less than 1, the image zooming-warping module 124 may only shrink the primary rectified image to generate a shrunken primary rectified image. Next, the image zooming-warping module 124 may perform image warping on the shrunken primary rectified image to generate the primary warped image and set the primary warped image as the digital zoomed image. On the other hand, when the zooming factor is greater than the primary-secondary image factor, the image zooming-warping module 124 may enlarge the secondary rectified image to generate an enlarged secondary rectified image according to the zooming factor. Next, the image zooming-warping module 124 may perform image warping on the enlarged secondary rectified image to generate the secondary warped image and set the secondary warped image as the digital zoomed image.
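  • Tying the three zooming regimes together, a hedged sketch of the branching logic might look as follows; it reuses the hypothetical zoom_and_warp and fuse_overlap helpers sketched above, reduces image warping in the two boundary regimes to plain resizing, and leaves the blending weights w1 and w2 to a rule such as the one described with FIG. 3 below.

        import cv2
        import numpy as np

        def digital_zoom(primary_rect, secondary_rect, H, zoom, image_factor, w1, w2):
            """Dispatch between the three zooming regimes described above, reusing
            the hypothetical helpers zoom_and_warp and fuse_overlap sketched earlier."""
            if zoom < 1.0:
                # Only the primary rectified image is shrunken.
                return cv2.resize(primary_rect, None, fx=zoom, fy=zoom,
                                  interpolation=cv2.INTER_AREA)
            if zoom > image_factor:
                # Only the secondary rectified image is enlarged.
                scale = zoom / image_factor
                return cv2.resize(secondary_rect, None, fx=scale, fy=scale,
                                  interpolation=cv2.INTER_CUBIC)
            # 1 <= zoom <= image_factor: zoom and warp both images, then fuse them.
            primary_w, secondary_w = zoom_and_warp(primary_rect, secondary_rect,
                                                   H, zoom, image_factor)
            overlap_mask = np.any(secondary_w > 0, axis=2)   # crude overlap estimate
            return fuse_overlap(primary_w, secondary_w, overlap_mask, w1, w2)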
  • The image capturing device proposed in the invention may be adapted to different sets of lenses. The digital zooming method corresponding to three different sets of lenses will be illustrated hereinafter.
  • FIG. 3 is a schematic diagram of a primary rectified image and a secondary rectified image according to an embodiment of the invention. It should be noted that the primary lens 10 a and the secondary lens 10 b have different fields of view and the same distortion level, where the field of view of the primary lens is greater than that of the secondary lens. Herein, the field of view of the primary image is greater than that of the secondary image, and yet the image quality of the secondary image surpasses that of the primary image. In other words, the primary image and the secondary image are respectively a wide-angle image and a narrow-angle image, and thus objects in the secondary image appear larger and clearer. However, the secondary image is used as an auxiliary for digital zoom and not for previewing purposes.
  • Referring to FIG. 3, after the image capturing module 121 captures a primary image and a secondary image by using the primary lens 10 a and the secondary lens 10 b, the image preprocessing module 122 may perform image rectification on the primary image and the secondary image to generate a primary rectified image 310 a and a secondary rectified image 310 b. A region 315 a and a region 315 b are two overlapping regions detected respectively from the primary rectified image 310 a and the secondary rectified image 310 b by the feature analyzing module 123. In the present embodiment, since the secondary rectified image 310 b is a narrow-angle image, the overlapping region 315 b covers the entire secondary rectified image 310 b. That is, the entire secondary rectified image 310 b corresponds to the same scene region as the overlapping region 315 a in the primary rectified image 310 a.
  • In the present embodiment, the image zooming-warping module 124 may enlarge the primary rectified image to generate an enlarged primary rectified image according to the zooming factor. Throughout the enlargement process, a center region of the primary rectified image would gradually become similar to the secondary rectified image. The image zooming-warping module 124 may further shrink the secondary rectified image to generate a shrunken secondary rectified image according to the zooming factor. Next, the image zooming-warping module 124 may perform image warping on the enlarged primary rectified image and the shrunken secondary rectified image to generate a primary warped image and a secondary warped image according to the pixel displacements and the depth map.
  • The image fusion module 125 may set a first weight and a second weight respectively corresponding to the primary warped image and the secondary warped image according to the zooming factor. The first weight and the second weight are allocated based on the zooming factor and a factor corresponding to each of the primary image and the secondary image. In the present embodiment, assume that the factors corresponding to the primary image and the secondary image are respectively 1 and 2. When the zooming factor is 1.5 (i.e., the midpoint of 1 and 2), the image fusion module 125 would set the first weight and the second weight both to 0.5. In another embodiment, when the zooming factor is 1.2, the image fusion module 125 would set the first weight and the second weight respectively to 0.8 and 0.2. However, the image fusion module 125 is not restricted to setting the two weights based on a linear relationship; in other embodiments, the image fusion module 125 may set the two weights based on other formulas. The invention is not limited herein. After performing image fusion, the image fusion module 125 may generate a digital zoomed image that is a relatively smooth, clear, and enlarged image.
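  • The linear weight allocation described above can be written as the following sketch; the function name fusion_weights and the default factors of 1 and 2 come from this example only, and, as noted, non-linear formulas are equally possible.

        def fusion_weights(zoom, primary_factor=1.0, secondary_factor=2.0):
            """Linearly allocate the first and second weights between the primary
            factor and the secondary factor (assumed to be 1 and 2 as above)."""
            t = (zoom - primary_factor) / (secondary_factor - primary_factor)
            t = min(max(t, 0.0), 1.0)      # clamp outside the [1, 2] range
            return 1.0 - t, t              # (first weight, second weight)

        # e.g. fusion_weights(1.2) -> (0.8, 0.2); fusion_weights(1.5) -> (0.5, 0.5)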
  • FIG. 4 is a schematic diagram of a primary rectified image and a secondary rectified image according to another embodiment of the invention.
  • Referring to FIG. 4, in the present embodiment, the primary lens 10 a and the secondary lens 10 b of the image capturing device 100 have the same fields of view and distortion levels. Hence, the field of view in the primary image and that in the secondary image captured by the primary lens 10 a and the secondary lens 10 b are the same. In the present embodiment, the image preprocessing module 122 of the image capturing device 100 may perform image binning and image cropping on a primary rectified image 400 a and a secondary rectified image 400 b to generate two images with different fields of view. Additionally, this embodiment is suited to digital photo-shooting equipment with a thumbnail image preview feature.
  • To be specific, the image preprocessing module 122 may perform image binning on the primary rectified image to generate a binned primary rectified image 410 a with a smaller size. In an embodiment, the size of the binned primary rectified image 410 a may be ¼ of the size of the primary rectified image 400 a. In other words, the image preprocessing module 122 may perform 2×2 pixel binning on the primary rectified image 400 a to bin every four neighboring pixels of the primary rectified image 400 a into one and thereby generate the binned primary rectified image 410 a. As compared with the primary rectified image 400 a, the binned primary rectified image 410 a may be transferred faster, yet with a lower resolution.
  • On the other hand, the image preprocessing module 122 may perform image cropping on the secondary rectified image to generate a cropped secondary rectified image 410 b. In the present embodiment, the size of the cropped secondary rectified image 410 b may also be ¼ of the size of the secondary rectified image 400 b. In other words, the image preprocessing module 122 may crop a center region 405 b with ¼ of the size of the secondary rectified image 400 b and thereby generate the cropped secondary rectified image 410 b.
  • Hence, the image preprocessing module 122 may use the binned primary rectified image 410 a and the cropped secondary rectified image 410 b to simulate two images with the same size but different fields of view, and further set the binned primary rectified image and the cropped secondary rectified image respectively as the primary rectified image and the secondary rectified image. Next, image processing steps similar to those illustrated in FIG. 3 may be performed on the redefined primary rectified image and secondary rectified image to generate a digital zoomed image, where a region 415 a in the binned primary rectified image 410 a and the entire cropped secondary rectified image 410 b are the two overlapping regions. Details on the image processing steps may be found in the related description in the previous paragraphs and are not repeated herein.
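  • As a hedged illustration of this embodiment's preprocessing (assuming 8-bit images and simple averaging as the binning rule, which the patent does not mandate; the function names are hypothetical), the 2×2 binning and the ¼ center crop could be sketched as:

        import numpy as np

        def bin_2x2(img):
            """Average each 2x2 block of pixels, quartering the image area."""
            h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
            img = img[:h, :w].astype(np.float32)
            binned = (img[0::2, 0::2] + img[0::2, 1::2] +
                      img[1::2, 0::2] + img[1::2, 1::2]) / 4.0
            return binned.astype(np.uint8)

        def crop_center_quarter(img):
            """Crop the central region whose area is 1/4 of the input image."""
            h, w = img.shape[:2]
            y0, x0 = h // 4, w // 4
            return img[y0:y0 + h // 2, x0:x0 + w // 2]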
  • FIG. 5 is a schematic diagram of a primary image and a secondary image according to an embodiment of the invention. It should be noted that, in the present embodiment, the primary lens 10 a and the secondary lens 10 b have same fields of view and different distortion levels, where the distortion level of the primary lens is much less than that of the secondary lens. In the present embodiment, the primary lens 10 a is a lens with no special distortion while the secondary lens 10 b is a distorted lens with a special design.
  • Referring to FIG. 5, in the present embodiment, a primary image 500 a captured by the primary lens 10 a of the image capturing module 121 is a normal image. A secondary image 500 b captured by the secondary lens 10 b of the image capturing module 121 is an image with distortion. What is special is that a center region of an original scene (e.g., a region 505 a in the primary image 500 a) may be projected onto a center region of the sensing elements of the secondary lens 10 b with a larger proportion. Such center region may result in a region 505 b in the secondary image 500 b having a lower distortion level. On the other hand, an outer region of the original scene (e.g., a region 508 a in the primary image 500 a) may be projected onto a remaining outer region of the sensing elements of the secondary lens 10 b. Such outer region may result in a region 508 b in the secondary image 500 b having a higher distortion level.
  • Next, the image preprocessing module 122 may perform image rectification on the primary image 500 a and the secondary image 500 b to generate a primary rectified image and a secondary rectified image. The image preprocessing module 122 may crop the enlarged-proportion center region of the secondary rectified image (i.e., the region corresponding to the region 505 b of the secondary image 500 b) to generate a cropped secondary rectified image, and further set the cropped secondary rectified image as the secondary rectified image. The secondary rectified image has a relatively smaller field of view as compared with the primary rectified image, and yet has a higher resolution. The image zooming-warping module 124 may enlarge the primary rectified image to generate an enlarged primary rectified image and shrink the secondary rectified image to generate a shrunken secondary rectified image according to the zooming factor. Next, the image zooming-warping module 124 may perform image warping on the enlarged primary rectified image and the shrunken secondary rectified image to respectively generate the primary warped image and the secondary warped image according to the pixel displacements and the depth map of the overlapping regions obtained from the feature analyzing module 123.
  • It should be noted that, since the secondary rectified image and the primary rectified image have the same fields of view, after lens distortion correction is performed on the outer region 508 b in the secondary image 500 b, the image capturing device 100 may perform depth estimation on the entire primary rectified image by using the secondary rectified image and the corrected secondary image.
  • The aforesaid digital zooming method of the image processing device may be summarized by a functional block diagram as illustrated in FIG. 6 according to an embodiment of the invention.
  • Referring to FIG. 6, in the proposed method, the image capturing device 100 may capture a scene by using the primary lens 10 a and the secondary lens 10 b to generate a primary image 601 a and a secondary image 601 b. Next, image rectification S603 may be performed on the primary image 601 a and the secondary image 601 b to respectively generate a primary rectified image 603 a and a secondary rectified image 603 b. Feature point detection S605 may then be performed on the primary rectified image 603 a and the secondary rectified image 603 b to obtain pixel displacements and a depth map of overlapping regions respectively in the two images. Next, image zooming and image warping S607 may be performed on the primary rectified image 603 a and the secondary rectified image 603 b according to a zooming factor 606 as well as the pixel displacements and the depth map obtained from feature point detection S605 to generate a primary warped image 607 a and a secondary warped image 607 b. Finally, image fusion may be performed on the primary warped image 607 a and the secondary warped image 607 b, and a smooth and clear digital zoomed image 611 may be output thereafter.
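  • Purely as a recap, the FIG. 6 flow corresponds to chaining the hypothetical helpers from the earlier sketches, again with the depth-dependent warping omitted.

        def digital_zoom_pipeline(primary_img, secondary_img, calib, zoom, image_factor):
            """End-to-end sketch of the FIG. 6 flow, chaining the hypothetical
            helpers from the earlier sketches."""
            # Image rectification S603.
            primary_rect, secondary_rect = rectify_pair(primary_img, secondary_img, calib)
            # Feature point detection S605 (the depth proxy is unused here because
            # the simplified sketches omit depth-dependent warping).
            H, displacements, depth = analyze_overlap(primary_rect, secondary_rect)
            # Image zooming and image warping S607, followed by weighted fusion.
            w1, w2 = fusion_weights(zoom, 1.0, image_factor)
            return digital_zoom(primary_rect, secondary_rect, H, zoom, image_factor, w1, w2)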
  • In summary, in the image capturing device and the digital zooming method proposed in the invention, by analyzing different imaging properties and distortion levels of the dual lenses, image zooming and image warping are automatically performed on images captured by the dual lenses according to a zooming factor to generate two warped images with similar focal lengths, sizes, and fields of view. The two warped images are fused by their weighted sum, and a digital zoomed image corresponding to the zooming factor would be obtained thereafter. As compared with the existing digital zooming techniques, the image capturing device and the digital zooming method proposed in the invention may provide a digital zoomed image with high quality throughout a digital zooming process.
  • It will be apparent to those skilled in the art that various modifications and variations can be made to the structure of the present invention without departing from the scope or spirit of the invention. In view of the foregoing, it is intended that the present invention cover modifications and variations of this invention provided they fall within the scope of the following claims and their equivalents.

Claims (22)

What is claimed is:
1. A digital zooming method, adapted to an image capturing device having a primary lens and a secondary lens, comprising:
capturing a scene by using the primary lens and the secondary lens to generate a primary image and a secondary image;
performing image rectification on the primary image and the secondary image to generate a primary rectified image and a secondary rectified image;
performing feature point detection on the primary rectified image and the secondary rectified image so as to detect overlapping regions respectively in the primary rectified image and the secondary rectified image, and further obtaining a plurality of pixel displacements and a depth map of the overlapping regions respectively in the primary rectified image and the secondary rectified image;
when a zooming factor is between 1 and a primary-secondary image factor, performing image zooming and image warping on the primary rectified image and the secondary rectified image to generate a primary warped image and a secondary warped image according to the zooming factor, the pixel displacements, and the depth map, wherein the primary-secondary image factor is a ratio of the secondary rectified image to the primary rectified image; and
performing image fusion on overlapping regions respectively in the primary warped image and the secondary warped image to generate a digital zoomed image.
2. The digital zooming method of claim 1, wherein the step of performing image rectification on the primary image and the secondary image to generate the primary rectified image and the secondary rectified image comprises:
obtaining a plurality of rectification parameters associated with the primary lens and the secondary lens; and
rectifying the primary image and the secondary image to generate the primary rectified image and the secondary rectified image according to the rectification parameters.
3. The digital zooming method of claim 1, wherein the step of performing feature point detection on the primary rectified image and the secondary rectified image so as to detect the overlapping regions respectively in the primary rectified image and the secondary rectified image, and further obtaining the pixel displacements and the depth map of the overlapping regions respectively in the primary rectified image and the secondary rectified image comprises:
detecting a plurality of feature points from the primary rectified image and the secondary rectified image;
identifying a plurality of feature point correspondences to calculate a homography matrix according to color information of a plurality of neighboring points of each of the feature points in the primary rectified image and the secondary rectified image;
obtaining the overlapping regions respectively in the primary rectified image and the secondary rectified image according to the homography matrix and accordingly obtaining each of the pixel displacements; and
performing stereo matching on each of the feature point correspondences in the primary rectified image and the secondary rectified image to obtain the depth map.
4. The digital zooming method of claim 1, wherein the primary lens and the secondary lens have different fields of view and same distortion levels, wherein the field of view of the primary lens is greater than the field of view of the secondary lens, and wherein when the zooming factor is between 1 and the primary-secondary image factor, the step of performing image zooming and image warping on the primary rectified image and the secondary rectified image to generate the primary warped image and the secondary warped image according to the zooming factor, the pixel displacements, and the depth map comprises:
enlarging the primary rectified image to generate an enlarged primary rectified image according to the zooming factor;
shrinking the secondary rectified image to generate a shrunken secondary rectified image according to the zooming factor; and
performing image warping on the enlarged primary rectified image and the shrunken secondary rectified image to generate the primary warped image and the secondary warped image according to the pixel displacements and the depth map, wherein a warping level is associated with the depth map.
5. The digital zooming method of claim 1, wherein the primary lens and the secondary lens have same fields of view and same distortion levels, and wherein after the step of performing image rectification on the primary image and the secondary image to generate the primary rectified image and the secondary rectified image, the digital zooming method further comprises:
performing image binning on the primary rectified image to generate a binned primary rectified image;
performing image cropping on the secondary rectified image to generate a cropped secondary rectified image, wherein a size of the binned primary rectified image and a size of the cropped secondary rectified image are the same; and
setting the binned primary rectified image and the cropped secondary rectified image respectively as the primary rectified image and the secondary rectified image.
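For illustration, a minimal sketch of binning the primary view and center-cropping the secondary view so that both outputs come out the same size; the 2x2 bin size is an assumption.

```python
# Sketch of image binning and matching center crop for the same-FOV,
# same-distortion case; works for grayscale or BGR arrays.
import numpy as np

def bin2x2(img):
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
    img = img[:h, :w].astype(np.float32)
    binned = (img[0::2, 0::2] + img[1::2, 0::2] +
              img[0::2, 1::2] + img[1::2, 1::2]) / 4.0
    return binned.astype(np.uint8)

def center_crop(img, out_h, out_w):
    h, w = img.shape[:2]
    y0, x0 = (h - out_h) // 2, (w - out_w) // 2
    return img[y0:y0 + out_h, x0:x0 + out_w]

# Usage: crop the secondary view to the binned primary view's size.
# binned = bin2x2(primary_rect)
# cropped = center_crop(secondary_rect, binned.shape[0], binned.shape[1])
```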
6. The digital zooming method of claim 1, wherein when the zooming factor is between 1 and the primary-secondary image factor, the step of performing image zooming and image warping on the primary rectified image and the secondary rectified image to generate the primary warped image and the secondary warped image according to the zooming factor, the pixel displacements, and the depth map comprises:
enlarging the primary rectified image to generate an enlarged primary rectified image according to the zooming factor;
shrinking the secondary rectified image to generate a shrunken secondary rectified image according to the zooming factor; and
performing image warping on the enlarged primary rectified image and the shrunken secondary rectified image to generate the primary warped image and the secondary warped image according to the pixel displacements and the depth map, wherein a warping level is associated with the depth map.
7. The digital zooming method of claim 1, wherein the primary lens and the secondary lens have same fields of view and different distortion levels, wherein the distortion level of the primary lens is much less than the distortion level of the secondary lens, and wherein after the step of performing image rectification on the primary image and the secondary image to generate the primary rectified image and the secondary rectified image, the digital zooming method further comprises:
performing image cropping on a center region of the secondary rectified image to generate a cropped secondary rectified image; and
setting the cropped secondary rectified image as the secondary rectified image.
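For illustration, a minimal center-region crop; the retained fraction is an assumption, since the claim does not specify how large the low-distortion center region of the secondary lens is.

```python
# Sketch of keeping only the center region of the heavily distorted secondary view.
def crop_center_region(img, keep=0.6):
    # keep: assumed fraction of each dimension retained around the image center.
    h, w = img.shape[:2]
    ch, cw = int(h * keep), int(w * keep)
    y0, x0 = (h - ch) // 2, (w - cw) // 2
    return img[y0:y0 + ch, x0:x0 + cw]
```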
8. The digital zooming method of claim 7, wherein when the zooming factor is between 1 and the primary-secondary image factor, the step of performing image zooming and image warping on the primary rectified image and the secondary rectified image to generate the primary warped image and the secondary warped image according to the zooming factor, the pixel displacements, and the depth map comprises:
enlarging the primary rectified image to generate an enlarged primary rectified image according to the zooming factor;
shrinking the secondary rectified image to generate a shrunken secondary rectified image; and
performing image warping on the enlarged primary rectified image and the shrunken secondary rectified image to generate the primary warped image and the secondary warped image according to the pixel displacements and the depth map, wherein a warping level is associated with the depth map.
9. The digital zooming method of claim 1, wherein the step of performing image fusion on the overlapping regions respectively in the primary warped image and the secondary warped image to generate the digital zoomed image comprises:
setting a first weight and a second weight respectively corresponding to the primary warped image and the secondary warped image according to the zooming factor;
performing image fusion on the overlapping regions respectively in the primary warped image and the secondary warped image to generate a fused overlapping image based on the first weight and the second weight; and
substituting the overlapping regions respectively in the primary warped image and the secondary warped image by the fused overlapping image to generate the digital zoomed image.
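For illustration, a sketch of the zoom-dependent weighting and fusion of the overlapping regions; the linear ramp from the primary toward the secondary view is an assumption, the claim only ties the weights to the zooming factor.

```python
# Sketch of weighted fusion of same-sized overlap crops from the two warped views.
import cv2
import numpy as np

def fuse_overlap(prim_overlap, sec_overlap, zoom, ps_factor):
    # Weight shifts from the primary view at zoom = 1 toward the secondary
    # view as zoom approaches the primary-secondary image factor.
    w_sec = float(np.clip((zoom - 1.0) / max(ps_factor - 1.0, 1e-6), 0.0, 1.0))
    w_prim = 1.0 - w_sec
    return cv2.addWeighted(prim_overlap, w_prim, sec_overlap, w_sec, 0.0)
```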
10. The digital zooming method of claim 1, wherein when the zooming factor is greater than the primary-secondary image factor, the digital zooming method further comprises:
enlarging the secondary rectified image to generate an enlarged secondary rectified image according to the zooming factor;
performing image warping on the enlarged secondary rectified image to generate the secondary warped image according to the pixel displacements and the depth map, wherein a warping level is associated with the depth map; and
setting the secondary warped image as the digital zoomed image.
11. The digital zooming method of claim 1, wherein when the zooming factor is less than 1, the digital zooming method further comprises:
shrinking the primary rectified image to generate a shrunken primary rectified image;
performing image warping on the shrunken primary rectified image to generate the primary warped image according to the pixel displacements and the depth map, wherein a warping level is associated with the depth map; and
setting the primary warped image as the digital zoomed image.
12. An image capturing device comprising:
a primary lens;
a secondary lens;
a storage unit, recording a plurality of modules; and
one or more processing units, coupled to the primary lens, the secondary lens, and the storage unit, and accessing and executing the modules recorded in the storage unit, wherein the modules comprise:
an image capturing module, capturing a scene by using the primary lens and the secondary lens to generate a primary image and a secondary image;
an image preprocessing module, performing image rectification on the primary image and the secondary image to generate a primary rectified image and a secondary rectified image;
a feature analyzing module, performing feature point detection on the primary rectified image and the secondary rectified image so as to detect overlapping regions respectively in the primary rectified image and the secondary rectified image, and further obtaining a plurality of pixel displacements and a depth map of the overlapping regions respectively in the primary rectified image and the secondary rectified image;
an image zooming-warping module, when a zooming factor is between 1 and a primary-secondary image factor, performing image zooming and image warping on the primary rectified image and the secondary rectified image to generate a primary warped image and a secondary warped image according to the zooming factor, the pixel displacements, and the depth map, wherein the primary-secondary image factor is a ratio of the secondary rectified image to the primary rectified image; and
an image fusion module, performing image fusion on overlapping regions respectively in the primary warped image and the secondary warped image to generate a digital zoomed image.
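For illustration, a minimal sketch of the claimed device layout, with the recorded modules modeled as an ordered list of callables run by the processing unit; the class and field names are illustrative, not from the patent.

```python
# Sketch of the device of claim 12: a storage unit records the modules and a
# processing unit executes them in sequence over a shared pipeline state.
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class ImageCapturingDevice:
    primary_lens: object
    secondary_lens: object
    # "Storage unit recording a plurality of modules": an ordered list of
    # callables, each taking and returning the pipeline state dictionary.
    modules: List[Callable[[dict], dict]] = field(default_factory=list)

    def run(self, zoom: float) -> dict:
        state = {"zoom": zoom,
                 "primary_lens": self.primary_lens,
                 "secondary_lens": self.secondary_lens}
        for module in self.modules:  # capture -> preprocess -> analyze -> zoom/warp -> fuse
            state = module(state)
        return state
```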
13. The image capturing device of claim 12, wherein the image preprocessing module obtains a plurality of rectification parameters associated with the primary lens and the secondary lens, and rectifies the primary image and the secondary image to generate the primary rectified image and the secondary rectified image according to the rectification parameters.
14. The image capturing device of claim 12, wherein the feature analyzing module detects a plurality of feature points from the primary rectified image and the secondary rectified image, identifies a plurality of feature point correspondences to calculate a homography matrix according to color information of a plurality of neighboring points of each of the feature points in the primary rectified image and the secondary rectified image, obtains the overlapping regions respectively in the primary rectified image and the secondary rectified image according to the homography matrix and accordingly obtains each of the pixel displacements, and performs stereo matching on each of the feature point correspondences in the primary rectified image and the secondary rectified image to obtain the depth map.
15. The image capturing device of claim 12, wherein the primary lens and the secondary lens have different fields of view and same distortion levels, wherein the field of view of the primary lens is greater than the field of view of the secondary lens, and wherein when the zooming factor is between 1 and the primary-secondary image factor, the image zooming-warping module enlarges the primary rectified image to generate an enlarged primary rectified image according to the zooming factor, shrinks the secondary rectified image to generate a shrunken secondary rectified image according to the zooming factor, and performs image warping on the enlarged primary rectified image and the shrunken secondary rectified image to respectively generate the primary warped image and the secondary warped image according to the pixel displacements and the depth map, wherein a warping level is associated with the depth map.
16. The image capturing device of claim 12, wherein the primary lens and the secondary lens have same fields of view and same distortion levels, and the image preprocessing module further performs image binning on the primary rectified image to generate a binned primary rectified image, performs image cropping on the secondary rectified image to generate a cropped secondary rectified image, and sets the binned primary rectified image and the cropped secondary rectified image respectively as the primary rectified image and the secondary rectified image, wherein a size of the binned primary rectified image and a size of the cropped secondary rectified image are the same.
17. The image capturing device according to claim 16, wherein when the zooming factor is between 1 and the primary-secondary image factor, the image zooming-warping module enlarges the primary rectified image to generate an enlarged primary rectified image according to the zooming factor, shrinks the secondary rectified image to generate a shrunken secondary rectified image according to the zooming factor, and performs image warping on the enlarged primary rectified image and the shrunken secondary rectified image to generate the primary warped image and the secondary warped image according to the pixel displacements and the depth map, wherein a warping level is associated with the depth map.
18. The image capturing device according to claim 12, wherein the primary lens and the secondary lens have same fields of view and different distortion levels, wherein the distortion level of the primary lens is much less than the distortion level of the secondary lens, and wherein the image preprocessing module further performs image cropping on a center region of the secondary rectified image to generate a cropped secondary rectified image, and sets the cropped secondary rectified image as the secondary rectified image.
19. The image capturing device according to claim 18, wherein when the zooming factor is between 1 and the primary-secondary image factor, the image zooming-warping module enlarges the primary rectified image to generate an enlarged primary rectified image according to the zooming factor, shrinks the secondary rectified image to generate a shrunken secondary rectified image, and performs image warping on the enlarged primary rectified image and the shrunken secondary rectified image to generate the primary warped image and the secondary warped image according to the pixel displacements and the depth map, wherein a warping level is associated with the depth map.
20. The image capturing device according to claim 12, wherein the image fusion module sets a first weight and a second weight respectively corresponding to the primary warped image and the secondary warped image according to the zooming factor, performs image fusion on the overlapping regions respectively in the primary warped image and the secondary warped image to generate a fused overlapping image based on the first weight and the second weight, and substitutes the overlapping regions respectively in the primary warped image and the secondary warped image by the fused overlapping image to generate the digital zoomed image.
21. The image capturing device according to claim 12, wherein when the zooming factor is greater than the primary-secondary image factor, the image zooming-warping module further enlarges the secondary rectified image to generate an enlarged secondary rectified image according to the zooming factor, performs image warping on the enlarged secondary rectified image to generate the secondary warped image according to the pixel displacements and the depth map, and sets the secondary warped image as the digital zoomed image, wherein a warping level is associated with the depth map.
22. The image capturing device according to claim 12, wherein when the zooming factor is less than 1, the image zooming-warping module shrinks the primary rectified image to generate a shrunken primary rectified image, performs image warping on the shrunken primary rectified image to generate the primary warped image according to the pixel displacements and the depth map, and sets the primary warped image as the digital zoomed image, wherein a warping level is associated with the depth map.
US14/571,021 2014-11-13 2014-12-15 Image capturing device and digital zooming method thereof Active US9325899B1 (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
TW103139384A TWI554103B (en) 2014-11-13 2014-11-13 Image capturing device and digital zooming method thereof
TW103139384A 2014-11-13
TW103139384 2014-11-13

Publications (2)

Publication Number Publication Date
US9325899B1 US9325899B1 (en) 2016-04-26
US20160142627A1 US20160142627A1 (en) 2016-05-19

Family

ID=55754849

Family Applications (1)

Application Number Title Priority Date Filing Date
US14/571,021 Active US9325899B1 (en) 2014-11-13 2014-12-15 Image capturing device and digital zooming method thereof

Country Status (2)

Country Link
US (1) US9325899B1 (en)
TW (1) TWI554103B (en)

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160150211A1 (en) * 2014-11-20 2016-05-26 Samsung Electronics Co., Ltd. Method and apparatus for calibrating image
US20160360183A1 (en) * 2015-06-02 2016-12-08 Etron Technology, Inc. Monitor system and operation method thereof
US10317646B2 (en) 2016-10-14 2019-06-11 Largan Precision Co., Ltd. Optical imaging module, image capturing apparatus and electronic device
US10321112B2 (en) * 2016-07-18 2019-06-11 Samsung Electronics Co., Ltd. Stereo matching system and method of operating thereof
WO2019148996A1 (en) * 2018-01-31 2019-08-08 Oppo广东移动通信有限公司 Image processing method and device, storage medium, and electronic apparatus
CN111641775A (en) * 2020-04-14 2020-09-08 北京迈格威科技有限公司 Multi-shooting zoom control method, device and electronic system
US10848746B2 (en) 2018-12-14 2020-11-24 Samsung Electronics Co., Ltd. Apparatus including multiple cameras and image processing method
US10957029B2 (en) * 2016-11-17 2021-03-23 Sony Corporation Image processing device and image processing method
CN113055592A (en) * 2021-03-11 2021-06-29 Oppo广东移动通信有限公司 Image display method and device, electronic equipment and computer readable storage medium
US11050915B2 (en) * 2017-07-17 2021-06-29 Huizhou Tcl Mobile Communication Co., Ltd. Method for zooming by switching between dual cameras, mobile terminal, and storage apparatus
US11430089B2 (en) 2019-07-10 2022-08-30 Samsung Electronics Co., Ltd. Image processing method and image processing system for generating a corrected image

Families Citing this family (40)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9940541B2 (en) 2015-07-15 2018-04-10 Fyusion, Inc. Artificially rendering images using interpolation of tracked control points
US10726593B2 (en) 2015-09-22 2020-07-28 Fyusion, Inc. Artificially rendering images using viewpoint interpolation and extrapolation
US10262426B2 (en) 2014-10-31 2019-04-16 Fyusion, Inc. System and method for infinite smoothing of image sequences
US10176592B2 (en) 2014-10-31 2019-01-08 Fyusion, Inc. Multi-directional structured image array capture on a 2D graph
US10275935B2 (en) 2014-10-31 2019-04-30 Fyusion, Inc. System and method for infinite synthetic image generation from multi-directional structured image array
US10852902B2 (en) 2015-07-15 2020-12-01 Fyusion, Inc. Automatic tagging of objects on a multi-view interactive digital media representation of a dynamic entity
US10147211B2 (en) 2015-07-15 2018-12-04 Fyusion, Inc. Artificially rendering images using viewpoint interpolation and extrapolation
US10222932B2 (en) 2015-07-15 2019-03-05 Fyusion, Inc. Virtual reality environment based manipulation of multilayered multi-view interactive digital media representations
US11006095B2 (en) 2015-07-15 2021-05-11 Fyusion, Inc. Drone based capture of a multi-view interactive digital media
US10242474B2 (en) * 2015-07-15 2019-03-26 Fyusion, Inc. Artificially rendering images using viewpoint interpolation and extrapolation
US11095869B2 (en) 2015-09-22 2021-08-17 Fyusion, Inc. System and method for generating combined embedded multi-view interactive digital media representations
US11783864B2 (en) 2015-09-22 2023-10-10 Fyusion, Inc. Integration of audio into a multi-view interactive digital media representation
TWI639338B (en) 2016-08-15 2018-10-21 香港商立景創新有限公司 Image capturing apparatus and image smooth zooming method thereof
KR102560780B1 (en) 2016-10-05 2023-07-28 삼성전자주식회사 Image processing system including plurality of image sensors and electronic device including thereof
US11202017B2 (en) 2016-10-06 2021-12-14 Fyusion, Inc. Live style transfer on a mobile device
EP3531689B1 (en) * 2016-11-03 2021-08-18 Huawei Technologies Co., Ltd. Optical imaging method and apparatus
US10356300B2 (en) * 2016-12-23 2019-07-16 Mediatek Inc. Seamless zooming on dual camera
US10437879B2 (en) 2017-01-18 2019-10-08 Fyusion, Inc. Visual search using multi-view interactive digital media representations
CN106713772B (en) * 2017-03-31 2018-08-17 维沃移动通信有限公司 A kind of photographic method and mobile terminal
TWI672677B (en) 2017-03-31 2019-09-21 鈺立微電子股份有限公司 Depth map generation device for merging multiple depth maps
DE102017205630A1 (en) * 2017-04-03 2018-10-04 Conti Temic Microelectronic Gmbh Camera apparatus and method for detecting a surrounding area of a vehicle
US10410314B2 (en) * 2017-04-27 2019-09-10 Apple Inc. Systems and methods for crossfading image data
US10313651B2 (en) 2017-05-22 2019-06-04 Fyusion, Inc. Snapshots at predefined intervals or angles
US10373290B2 (en) * 2017-06-05 2019-08-06 Sap Se Zoomable digital images
US11069147B2 (en) 2017-06-26 2021-07-20 Fyusion, Inc. Modification of multi-view interactive digital media representation
US10834310B2 (en) 2017-08-16 2020-11-10 Qualcomm Incorporated Multi-camera post-capture image processing
US10404916B2 (en) 2017-08-30 2019-09-03 Qualcomm Incorporated Multi-source video stabilization
US10516830B2 (en) 2017-10-11 2019-12-24 Adobe Inc. Guided image composition on mobile devices
US10257436B1 (en) * 2017-10-11 2019-04-09 Adobe Systems Incorporated Method for using deep learning for facilitating real-time view switching and video editing on computing devices
US10497122B2 (en) 2017-10-11 2019-12-03 Adobe Inc. Image crop suggestion and evaluation using deep-learning
CN107835372A (en) * 2017-11-30 2018-03-23 广东欧珀移动通信有限公司 Imaging method, device, mobile terminal and storage medium based on dual camera
US10764512B2 (en) * 2018-03-26 2020-09-01 Mediatek Inc. Method of image fusion on camera device equipped with multiple cameras
US10592747B2 (en) 2018-04-26 2020-03-17 Fyusion, Inc. Method and apparatus for 3-D auto tagging
TWI680436B (en) * 2018-12-07 2019-12-21 財團法人工業技術研究院 Depth camera calibration device and method thereof
CN110072047B (en) 2019-01-25 2020-10-09 北京字节跳动网络技术有限公司 Image deformation control method and device and hardware device
US11620832B2 (en) * 2019-06-22 2023-04-04 Hendrik J. Volkerink Image based locationing
CN113808510B (en) * 2020-06-15 2024-04-09 明基智能科技(上海)有限公司 Image adjusting method
WO2022076483A1 (en) 2020-10-05 2022-04-14 Trackonomy Systems, Inc. System and method of utilizing 3d vision for asset management and tracking
US11405563B2 (en) * 2020-11-10 2022-08-02 Qualcomm Incorporated Spatial alignment transform without FOV loss
EP4092572A1 (en) * 2021-05-20 2022-11-23 Wooptix S.L. Method for depth estimation for a variable focus camera

Family Cites Families (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6650368B1 (en) * 1999-10-26 2003-11-18 Hewlett-Packard Development Company, Lp. Digital camera and method of enhancing zoom effects
JP2007028283A (en) * 2005-07-19 2007-02-01 Matsushita Electric Ind Co Ltd Image sensing device
JP4573724B2 (en) * 2005-08-01 2010-11-04 イーストマン コダック カンパニー Imaging apparatus having a plurality of optical systems
JP4624245B2 (en) * 2005-11-29 2011-02-02 イーストマン コダック カンパニー Imaging device
JP4692770B2 (en) * 2006-12-27 2011-06-01 富士フイルム株式会社 Compound eye digital camera
US7683962B2 (en) 2007-03-09 2010-03-23 Eastman Kodak Company Camera using multiple lenses and image sensors in a rangefinder configuration to provide a range map
US7859588B2 (en) 2007-03-09 2010-12-28 Eastman Kodak Company Method and apparatus for operating a dual lens camera to augment an image
KR101441586B1 (en) * 2008-10-06 2014-09-23 삼성전자 주식회사 Apparatus and method for capturing image
US8553106B2 (en) 2009-05-04 2013-10-08 Digitaloptics Corporation Dual lens digital zoom
TWI435160B (en) * 2010-10-29 2014-04-21 Altek Corp Method for composing three dimensional image with long focal length and three dimensional imaging system
CN102918858B (en) * 2010-12-24 2014-09-03 富士胶片株式会社 3-D panoramic image creating apparatus, 3-D panoramic image creating method,3-D panoramic image replay apparatus, and 3-D panoramic image replay method
US8508649B2 (en) * 2011-02-14 2013-08-13 DigitalOptics Corporation Europe Limited Compact distorted zoom lens for small angle of view
TWI516110B (en) * 2012-01-02 2016-01-01 華晶科技股份有限公司 Image capturing device and image capturing method thereof
CN103814567B (en) * 2012-07-10 2017-02-22 松下电器产业株式会社 Display control apparatus
TW201427412A (en) * 2012-12-19 2014-07-01 Sintai Optical Shenzhen Co Ltd Image capture device and anti-shake control method thereof

Cited By (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160150211A1 (en) * 2014-11-20 2016-05-26 Samsung Electronics Co., Ltd. Method and apparatus for calibrating image
US10506213B2 (en) * 2014-11-20 2019-12-10 Samsung Electronics Co., Ltd. Method and apparatus for calibrating image
US20200053338A1 (en) * 2014-11-20 2020-02-13 Samsung Electronics Co., Ltd. Method and apparatus for calibrating image
US11140374B2 (en) * 2014-11-20 2021-10-05 Samsung Electronics Co., Ltd. Method and apparatus for calibrating image
US20160360183A1 (en) * 2015-06-02 2016-12-08 Etron Technology, Inc. Monitor system and operation method thereof
US10382744B2 (en) * 2015-06-02 2019-08-13 Eys3D Microelectronics, Co. Monitor system and operation method thereof
US10321112B2 (en) * 2016-07-18 2019-06-11 Samsung Electronics Co., Ltd. Stereo matching system and method of operating thereof
US10317646B2 (en) 2016-10-14 2019-06-11 Largan Precision Co., Ltd. Optical imaging module, image capturing apparatus and electronic device
US11226472B2 (en) 2016-10-14 2022-01-18 Largan Precision Co., Ltd. Optical imaging module, image capturing apparatus and electronic device
US10957029B2 (en) * 2016-11-17 2021-03-23 Sony Corporation Image processing device and image processing method
US11050915B2 (en) * 2017-07-17 2021-06-29 Huizhou Tcl Mobile Communication Co., Ltd. Method for zooming by switching between dual cameras, mobile terminal, and storage apparatus
WO2019148996A1 (en) * 2018-01-31 2019-08-08 Oppo广东移动通信有限公司 Image processing method and device, storage medium, and electronic apparatus
US10848746B2 (en) 2018-12-14 2020-11-24 Samsung Electronics Co., Ltd. Apparatus including multiple cameras and image processing method
US11430089B2 (en) 2019-07-10 2022-08-30 Samsung Electronics Co., Ltd. Image processing method and image processing system for generating a corrected image
CN111641775A (en) * 2020-04-14 2020-09-08 北京迈格威科技有限公司 Multi-shooting zoom control method, device and electronic system
CN113055592A (en) * 2021-03-11 2021-06-29 Oppo广东移动通信有限公司 Image display method and device, electronic equipment and computer readable storage medium

Also Published As

Publication number Publication date
TWI554103B (en) 2016-10-11
TW201618531A (en) 2016-05-16
US9325899B1 (en) 2016-04-26

Similar Documents

Publication Publication Date Title
US9325899B1 (en) Image capturing device and digital zooming method thereof
CN108600576B (en) Image processing apparatus, method and system, and computer-readable recording medium
US10306141B2 (en) Image processing apparatus and method therefor
TWI599809B (en) Lens module array, image sensing device and fusing method for digital zoomed images
JP5846172B2 (en) Image processing apparatus, image processing method, program, and imaging system
JP2010211255A (en) Imaging apparatus, image processing method, and program
JP5392198B2 (en) Ranging device and imaging device
CN108513057B (en) Image processing method and device
CN103379267A (en) Three-dimensional space image acquisition system and method
CN112261292B (en) Image acquisition method, terminal, chip and storage medium
TW202236840A (en) Image fusion for scenes with objects at multiple depths
US20120057747A1 (en) Image processing system and image processing method
JP6222205B2 (en) Image processing device
US8908012B2 (en) Electronic device and method for creating three-dimensional image
WO2018196854A1 (en) Photographing method, photographing apparatus and mobile terminal
US9743007B2 (en) Lens module array, image sensing device and fusing method for digital zoomed images
JP5796611B2 (en) Image processing apparatus, image processing method, program, and imaging system
JP6665917B2 (en) Image processing device
JP2017157043A (en) Image processing device, imaging device, and image processing method
JP6579764B2 (en) Image processing apparatus, image processing method, and program
JP6079838B2 (en) Image processing apparatus, program, image processing method, and imaging system
CN109214983B (en) Image acquisition device and image splicing method thereof
KR20120039855A (en) Method for processing image of camera module
JP6439845B2 (en) Image processing device
TWI390965B (en) Method for stimulating the depth of field of an image

Legal Events

Date Code Title Description
AS Assignment

Owner name: ALTEK SEMICONDUCTOR CORP., TAIWAN

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:CHOU, HONG-LONG;TSENG, YI-HONG;CHANG, WEN-YAN;AND OTHERS;REEL/FRAME:034510/0924

Effective date: 20141127

STCF Information on status: patent grant

Free format text: PATENTED CASE

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YR, SMALL ENTITY (ORIGINAL EVENT CODE: M2551); ENTITY STATUS OF PATENT OWNER: SMALL ENTITY

Year of fee payment: 4

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 8TH YR, SMALL ENTITY (ORIGINAL EVENT CODE: M2552); ENTITY STATUS OF PATENT OWNER: SMALL ENTITY

Year of fee payment: 8