US9270977B2 - 3D photo creation system and method - Google Patents

3D photo creation system and method

Info

Publication number
US9270977B2
US9270977B2
Authority
US
United States
Prior art keywords
image
module
pixel
view angle
eye image
Prior art date
Legal status
Active, expires
Application number
US14/172,888
Other versions
US20140219551A1 (en)
Inventor
Sy Sen TANG
Current Assignee
CITY IMAGE TECHNOLOGY Ltd
Original Assignee
CITY IMAGE TECHNOLOGY Ltd
Priority date
Filing date
Publication date
Application filed by CITY IMAGE TECHNOLOGY Ltd
Priority to US14/172,888
Assigned to CITY IMAGE TECHNOLOGY LTD. (Assignor: TANG, SY SEN)
Publication of US20140219551A1
Priority to US14/995,208
Application granted
Publication of US9270977B2
Status: Active
Adjusted expiration

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/10 Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N13/106 Processing image signals
    • H04N13/111 Transformation of image signals corresponding to virtual viewpoints, e.g. spatial image interpolation
    • H04N13/128 Adjusting depth or disparity
    • H04N13/20 Image signal generators
    • H04N13/271 Image signal generators wherein the generated image signals comprise depth maps or disparity maps
    • H04N13/282 Image signal generators for generating image signals corresponding to three or more geometrical viewpoints, e.g. multi-view systems
    • H04N13/293 Generating mixed stereoscopic images; Generating mixed monoscopic and stereoscopic images, e.g. a stereoscopic image overlay window on a monoscopic image background
    • H04N2013/0074 Stereoscopic image analysis
    • H04N2013/0081 Depth or disparity estimation from stereoscopic image signals
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/22 Matching criteria, e.g. proximity measures
    • H04N13/0011; H04N13/0271; H04N13/0282 (legacy classification codes)

Definitions

  • FIG. 1 is a diagram of a 3D photo creation system of the present application.
  • FIG. 2 is a diagram of a depth estimation module in the 3D photo creation system of the present application.
  • FIG. 3 is a diagram of a multi-view angle image reconstructing module in the 3D photo creation system of the present application.
  • FIG. 4 is a diagram of an image spaced scanning module in the 3D photo creation system of the present application.
  • FIG. 5 is a flow chart of the 3D photo creation method of the present application.
  • FIG. 6 is a flow chart of procedure S 2 in the 3D photo creation method of the present application.
  • FIG. 7 is a flow chart of procedure S 3 in the 3D photo creation method of the present application.
  • FIG. 8 is a flow chart of procedure S 4 in the 3D photo creation method of the present application.
  • FIG. 9 is an illustrative view of a stereo image inputted by the 3D photo creation system of the present application.
  • FIG. 10 is an illustrative view of a comparison between a depthmap formed by the 3D photo creation system of the present application and an original image.
  • FIG. 11 is an illustrative view of a multi-view angle image after adjustment.
  • FIG. 12 is an illustrative view of a mixed image.
  • FIGS. 1 to 4 illustrate a diagram of an embodiment of the 3D photo creation system of the present application.
  • Such 3D photo creation system includes a stereo image input module 1 , a depth estimation module 2 , a multi-view angle image reconstructing module 3 and an image spaced scanning module 4 .
  • the stereo image input module 1 is used to input a stereo image.
  • the stereo image includes a left eye image and a right eye image;
  • the depth estimation module 2 is used to estimate the depth information of the stereo image and create a depthmap;
  • the multi-view angle image reconstructing module 3 is used to create a multi-view angle image according to the depthmap and the stereo image;
  • the image spaced scanning module 4 is used to adjust the multi-view angle image and form a mixed image.
  • the depth estimation module 2 further includes: a pixel matching module 21, a depth information confirmation module 22 and a depthmap creation module 23.
  • the pixel matching module 21 is used to compare the left eye image and right eye image of the stereo image and find the matching pixel between the left eye image and the right eye image, and calculate the optical flow of the pixel according to the optical flow constraint formula.
  • matching pixel refers to the pixel at the same pixel location of the left eye image and right eye image.
  • the depth information confirmation module 22 is used to find the shifted location of the pixel according to the optical flow of the left eye image and right eye image for confirming the depth information of the pixel.
  • the depthmap creation module 23 is used to create depthmap according to the depth information.
  • the multi-view angle image reconstructing module 3 further includes: a base image selection module 31 , an image number confirmation module 32 , a pixel shifting module 33 , a hole filling module 34 and a multi-view angle image creation module 35 .
  • the base image selection module 31 is used to select the left eye image, the right eye image, or both the left eye image and the right eye image of the stereo image as the base image.
  • the image number confirmation module 32 is used to confirm the required number and disparity of the images according to demand.
  • the pixel shifting module 33 is used to shift the pixel of the base image according to the depthmap for forming a new image.
  • the hole filling module 34 is used to fill the holes formed from loss of pixels in the new image.
  • the multi-view angle image creation module 35 is used to create multi-view angle images.
  • the image spaced scanning module 4 further includes: an image adjusting module 41 , a contrast adjusting module 42 , an image interlacing module 43 and a mixed image output module 44 .
  • the image adjusting module 41 is used to adjust the size of the multi-view angle image.
  • the contrast adjusting module 42 is used to adjust the contrast ratio of the adjusted multi-view angle image as outputted by the image adjusting module.
  • the image interlacing module 43 is used to combine the multi-view angle images after contrast adjustment for forming a mixed image.
  • the mixed image output module 44 is used to output the mixed image.
  • FIGS. 5 to 8 are flow charts of the 3D photo creation method of the present application, which include the following steps:
  • S 1 inputs a stereo image, the stereo image includes a left eye image and a right eye image;
  • S 2 estimates the depth information of the stereo image and creates a depthmap
  • S 3 creates a multi-view angle image according to the depthmap and the stereo image
  • S 4 adjusts the multi-view angle image and forms a mixed image.
  • step S 2 further includes the following steps:
  • S 21 compares the left eye image and the right eye image of the stereo image, finds the matching pixels between them, and calculates the optical flow according to the optical flow constraint formula;
  • S 22 finds the pixel shifting according to the optical flow of the left eye image and the right eye image to confirm the depth information of the pixel;
  • S 23 creates the depthmap according to the depth information.
  • Procedure S 3 further includes:
  • S 31 selects the left eye image, the right eye image, or both the left eye image and the right eye image of the stereo image as the base image;
  • S 32 confirms the required number and disparity of the images according to demand;
  • S 33 forms a new image by shifting the pixels of the base image according to the depthmap.
  • Step S 4 further includes:
  • S 42 adjusts the contrast ratio of the multi-view angle image adjusted in step S 41;
  • S 43 combines the multi-view angle images after contrast adjustment and forms a mixed image.
  • the above introduces the formation of the 3D photo creation system of the present application and the specific steps of the 3D photo creation method of the present application.
  • the 3D photo creation system of the present application takes a stereo image as input. It automatically compares the left eye image and the right eye image and calculates the 3D information (also known as a depthmap). Then, a multi-view angle image is created by shifting the pixels of the original input image according to the depth information.
  • the 3D photo creation system of the present application then adjusts the created images to a suitable size, after which the adjusted images are combined together.
  • the mixed image formed can be displayed on a glasses-free 3D display device, or be combined with any lenticular sheet to form a 3D photo.
  • the stereo image input module 1 is used to input a stereo image.
  • the stereo image is a stereogram, which can produce a 3D visual effect. It is an image pair that brings a depth-sensing experience to the observer when viewed stereoscopically with both eyes. Such a stereogram can be obtained by one of many techniques.
  • a 3D image can also be used directly as the stereo image.
  • the inputted stereo image comprises a left eye image and a right eye image, as illustrated in FIG. 9.
  • the depth estimation module 2 is used to analyze the depth information of the stereo image inputted by the stereo image input module 1 , for reconstructing the multi-view angle image.
  • the depth estimation step is illustrated in FIG. 6 .
  • the depth estimation module 2 includes pixel matching module 21 , depth information confirmation module 22 and depthmap creation module 23 .
  • pixel matching module 21 is used to compare the left eye image and right eye image of the stereo image as inputted for finding the matching pixel of the two, that is, the pixel at the same pixel location of the left eye image and right eye image.
  • the subject in the left eye image and the right eye image of the stereo image exhibits a displacement, also known as disparity.
  • matching methods such as optical flow and stereo matching can be applied to find the pixel shifting between the left eye image and the right eye image.
  • optical flow is the pattern of apparent motion of subjects, surfaces or edges in a visual scene, caused by the relative motion between an observer (such as an eye or a camera) and the scene.
  • optical flow estimation calculates the optical flow using the optical flow constraint formula. In order to find the matching pixel, the images must be compared subject to the well-known optical flow constraint:
  ∂I/∂x · Vx + ∂I/∂y · Vy + ∂I/∂t = 0
  • Vx and Vy are respectively the x and y components of the velocity (optical flow) of I(x, y, t), and ∂I/∂x, ∂I/∂y and ∂I/∂t are the derivatives of the image at (x, y, t) in the corresponding directions.
  • a coarse-to-fine strategy can be adopted to determine the optical flow of the pixel.
  • There exist different robust methods for enhancing the disparity estimation, such as "High Accuracy Optical Flow Estimation Based on a Theory for Warping."
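The constraint above can be sanity-checked numerically. The sketch below is an illustration, not the patent's solver: it builds a brightness pattern translating with a known velocity and verifies that the optical flow constraint evaluates to approximately zero.

```python
import math

Vx, Vy = 0.7, -0.3  # known translation velocity of the pattern

def I(x, y, t):
    # a smooth intensity pattern moving rigidly with velocity (Vx, Vy)
    return math.exp(-((x - Vx * t) ** 2 + (y - Vy * t) ** 2) / 8.0)

def partial(f, args, i, h=1e-5):
    # central finite difference of f with respect to argument i
    a = list(args); a[i] += h; hi = f(*a)
    a[i] -= 2 * h; lo = f(*a)
    return (hi - lo) / (2 * h)

x, y, t = 1.2, -0.5, 0.4
residual = (partial(I, (x, y, t), 0) * Vx +
            partial(I, (x, y, t), 1) * Vy +
            partial(I, (x, y, t), 2))
print(abs(residual) < 1e-6)  # True: the constraint holds for pure translation
```

Real estimators (Lucas-Kanade, Horn-Schunck, or the warping method cited above) solve this constraint for Vx and Vy over a neighborhood, since a single equation in two unknowns is underdetermined (the aperture problem).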
  • the depth information can be derived from the disparity information and the camera configuration.
  • the displacement of the pixel can indicate the depth information.
  • most 3D stereo capturing devices converge the cameras or lenses toward a point. In other words, the direction of the optical flow must be considered when calculating the depth of every pixel.
  • the depth information confirmation module 22 confirms the depth information of the pixel.
  • maxdisplacement is the maximum displacement of the pixel
  • direction is the direction of the optical flow
  • u and v are respectively the optical flow vectors of each pixel in the x and y directions.
  • depth information can be used to reconstruct the 3D environment (i.e. depthmap).
  • the depthmap is represented by a grey scale image recognized by the computer.
  • Depthmap creation module 23 is used to create the depthmap. Normally the depth value of a pixel ranges from 0 to 255; the higher the depth value, the closer the pixel is to the observer.
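This page does not reproduce module 22's exact formula, so the following is only a hedged sketch of how the listed quantities (u, v, direction, maxdisplacement) could map a pixel's optical flow to a 0-255 depth value. The linear scaling and the use of 128 as the convergence plane are assumptions, not the patent's stated formula.

```python
import math

def depth_value(u, v, direction, maxdisplacement):
    """Assumed mapping: depth grows with flow magnitude, signed by the
    flow direction (+1 toward the viewer, -1 away), normalized by the
    maximum displacement; the result is clamped to 0..255."""
    magnitude = math.sqrt(u * u + v * v)
    d = 128 + direction * (magnitude / maxdisplacement) * 127
    return max(0, min(255, round(d)))

print(depth_value(0.0, 0.0, +1, 10))   # 128: zero flow sits on the convergence plane
print(depth_value(10.0, 0.0, +1, 10))  # 255: maximum flow toward the viewer
```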
  • the 3D photo creation system of the present application separates the foreground scene and background scene in the depthmap.
  • the system uses pixels with depth values ranging from 99 to 255 to represent the foreground scene, and pixels with depth values ranging from 0 to 128 to represent the background scene.
  • the foreground scene depth information and the background scene depth information overlap to a certain extent. In the present embodiment, the overlap ranges from 99 to 128.
  • the range of overlap between the foreground scene depth information and the background scene depth information can be adjusted by the user. Such a process increases the contrast between the foreground scene and the background scene. Furthermore, the main subject in the foreground and the depth detail of the background can be enhanced.
  • FIG. 10 is an illustrative view of an image separating the foreground scene and the background scene.
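The foreground/background separation described above can be sketched as follows: depth values 99-255 are taken as foreground and 0-128 as background, so values in 99-128 land in both masks (the adjustable overlap). Representing the excluded pixels as 0 is an assumption for illustration.

```python
def separate(depthmap, fg_low=99, bg_high=128):
    # pixels outside each range are zeroed out in that mask
    foreground = [[d if d >= fg_low else 0 for d in row] for row in depthmap]
    background = [[d if d <= bg_high else 0 for d in row] for row in depthmap]
    return foreground, background

fg, bg = separate([[50, 110, 200]])
print(fg)  # [[0, 110, 200]]  -- 110 is in the overlap, so it appears in both
print(bg)  # [[50, 110, 0]]
```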
  • the multi-view angle image reconstructing module 3 is used to reconstruct multi-view angle images, including: the base image selection module 31 , image number confirmation module 32 , pixel shifting module 33 , hole filling module 34 and multi-view angle image creation module 35 .
  • the base image selection module 31 can select the left eye image, right eye image or left eye image and right eye image of the stereo image as the base image for producing the multi-view angle image.
  • the multi-view angle image reconstruction process is illustrated in FIG. 7. If a single image is selected, such as the left eye image or the right eye image, then the images created will be the views to the left and right of the selected image. If 2N+1 images must be created, then the selected image will be the (N+1)th image, and the images created will be images 1 to N and images N+2 to 2N+1. For example, if 9 images must be created, then the selected image will be the 5th image, and the images created will be images 1 to 4 and images 6 to 9.
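The selection rule in the preceding paragraph can be expressed as a small helper (view indices are 1-based, as in the text):

```python
def view_plan(total_views):
    # for 2N+1 views, the base image becomes view N+1 and the
    # remaining views 1..N and N+2..2N+1 are synthesized
    assert total_views % 2 == 1, "rule stated for 2N+1 views"
    n = (total_views - 1) // 2
    base = n + 1
    created = [i for i in range(1, total_views + 1) if i != base]
    return base, created

base, created = view_plan(9)
print(base)     # 5
print(created)  # [1, 2, 3, 4, 6, 7, 8, 9]
```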
  • the image number confirmation module 32 is used to confirm the number of images according to need. On the other hand, if two images (the left eye image and the right eye image) are selected as the base images for creating the multi-view angle images, that is, the stereo image itself is selected as the base, then the system uses the disparity and the number of images required to determine the locations of the two selected images. In the present embodiment, the system first confirms the number of images to be created.
  • the number of multi-view angle images depends on the LPI (lines per inch) of the lenticular lens and the DPI (dots per inch) of the printer.
  • in the present embodiment, the lenticular lens has 50 lines per inch and the printer outputs 600 dots per inch.
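With the figures above, the number of view images follows directly: the printer places DPI/LPI pixels under each lenticule, one per view.

```python
dpi, lpi = 600, 50          # printer resolution and lenticular lens pitch
views = dpi // lpi          # pixels per lenticule = number of view images
print(views)  # 12
```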
  • the position of the original stereo image is determined by the following equations:
  • N is the number of multi-view angle image
  • D is the disparity of the original stereo image
  • d is the disparity of each view angle of the multi-view angle image created.
  • the original stereo image will be inserted into a suitable position of the multi-view angle image.
  • the other view angle images will be created from the original stereo image. This method will evenly distribute the multi-view angle images, that is, these images possess similar disparity. Such method can also enhance the quality of the final mixed image.
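The positioning equations themselves are not reproduced on this page. The sketch below is only an assumed reading of the stated goal: with N evenly distributed views of per-view disparity d, the original left/right pair (mutual disparity D) is inserted roughly D/d view slots apart, centered among the views. The exact placement used by the patent may differ.

```python
def place_original_pair(n_views, D, d):
    """Hypothetical placement: N is the view count, D the disparity of
    the original stereo pair, d the per-view disparity of the created
    views (symbols as listed in the text above)."""
    gap = round(D / d)                  # view slots between the two originals
    left = (n_views - gap) // 2 + 1     # center the pair among the views
    right = left + gap
    return left, right

print(place_original_pair(12, 30, 10))  # (5, 8): originals 3 view slots apart
```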
  • after determining the number of images required and the locations of all the images, the system manipulates the depthmap to create the multi-view angle images.
  • the depthmaps of the left eye image and the right eye image have already been formed in the preceding steps.
  • the base images, such as the left eye image or the right eye image, have their pixels shifted according to their own depthmaps.
  • the pixel shifting module 33 is used to shift the pixel of the base image for forming a new image.
  • the depth values of the depthmap range from 0 to 255; the mid-value, 128, is the convergence point of the base image.
  • when creating a new view image on one side, a pixel with a depth value ranging from 128 to 255 is shifted to the right side, and a pixel with a depth value ranging from 0 to 127 is shifted to the left side.
  • when creating the new view image on the opposite side, a pixel with a depth value ranging from 128 to 255 is shifted to the left side, and a pixel with a depth value ranging from 0 to 127 is shifted to the right side. From 128 to 255, the greater the depth value of the pixel, the greater the shifting distance. From 0 to 127, the smaller the depth value of the pixel, the greater the shifting distance.
  • the pixel at (lx, y) in the new left eye image is taken from the pixel at (x, y) in the base image.
  • the pixel at (rx, y) in the new right eye image is taken from the pixel at (x, y) in the base image.
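A minimal sketch of the pixel-shifting rule for one image row. The linear relation between a depth value's distance from 128 and the shift is an assumption (the patent does not state the exact scaling). Pixels landing on the same target overwrite each other, and vacated positions become the holes discussed next.

```python
def shift_row(row, depth_row, max_shift=2, direction=+1):
    """Shift each pixel by an amount proportional to (depth - 128);
    direction selects which side the new view lies on."""
    width = len(row)
    out = [None] * width  # None marks a hole (no pixel landed there)
    for x, (pixel, depth) in enumerate(zip(row, depth_row)):
        shift = round((depth - 128) / 127 * max_shift) * direction
        nx = x + shift
        if 0 <= nx < width:
            out[nx] = pixel
    return out

row = ['a', 'b', 'c', 'd']
print(shift_row(row, [128, 255, 128, 128]))
# ['a', None, 'c', 'd']: the foreground pixel 'b' shifted right and was
# overwritten by 'd'; a hole (None) remains at its old position
```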
  • Hole filling module 34 is used to fill these holes. The holes produced by the shifting of the pixels can be re-filled from the neighboring pixels using interpolation, or re-filled using other suitable hole-filling methods. The interpolation uses the following quantities:
  • startx and endx are the starting and ending positions of the holes in the row
  • length is length of the holes
  • holex is x position of the holes
  • weight is the weight value of the holes
  • pixelvalue is the pixel value of the holes.
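The interpolation formula itself is elided on this page. The sketch below assumes plain linear interpolation between the pixels bordering each hole, reusing the variable names listed above (startx, endx, length, holex, weight, pixelvalue); the patent's actual formula may differ.

```python
def fill_holes(row):
    """Fill runs of None in a row of scalar pixel values by linear
    interpolation between the bordering pixels. Assumes every hole has
    at least one non-hole neighbor in the row."""
    out = list(row)
    x = 0
    while x < len(out):
        if out[x] is None:  # start of a hole run
            startx = x
            endx = startx
            while endx + 1 < len(out) and out[endx + 1] is None:
                endx += 1
            length = endx - startx + 2  # span including both borders
            left = out[startx - 1] if startx > 0 else out[endx + 1]
            right = out[endx + 1] if endx + 1 < len(out) else left
            for holex in range(startx, endx + 1):
                weight = (holex - startx + 1) / length
                pixelvalue = (1 - weight) * left + weight * right
                out[holex] = pixelvalue
            x = endx + 1
        x += 1
    return out

print(fill_holes([0, None, None, None, 40]))  # [0, 10.0, 20.0, 30.0, 40]
```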
  • the image spaced scanning module 4 is used to adjust the multi-view angle image created upfront and form a mixed image, and includes an image adjustment module 41 , contrast adjustment module 42 , image interlacing module 43 and mixed image output module 44 .
  • FIG. 8 illustrates the process of forming the mixed image.
  • in order to enhance the quality of the final mixed image, the system first adjusts each image to a suitable width.
  • the contrast adjustment module 42 is used to adjust the contrast.
  • the final image formed is a mixed image of the 12 images.
  • the image interlacing module 43 is used to form a mixed image.
  • the image is reconstructed at 600 pixels per inch.
  • each band contains 12 pixels.
  • FIG. 12 is an illustrative view of the mixed bands of the image. The pixels are extracted from the 12 images in order from 12 to 1. Normally, the right eye view image is the first row of these bands, that is, the 12th image. In FIG. 12, the first band is a combination of the first row of each image, and so on.
  • the second band is a combination of the second row of each image.
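The interlacing described above can be sketched with a toy example of 3 views instead of 12: each band takes the corresponding row from every view image, ordered from the highest-numbered view down to view 1.

```python
def interlace(views):
    # views[i] is view image i+1, represented as a list of rows
    n_rows = len(views[0])
    mixed = []
    for band in range(n_rows):                 # band k combines row k of each view
        for v in reversed(range(len(views))):  # highest-numbered view first
            mixed.append(views[v][band])
    return mixed

v1 = [['1a'], ['1b']]
v2 = [['2a'], ['2b']]
v3 = [['3a'], ['3b']]
print(interlace([v1, v2, v3]))
# [['3a'], ['2a'], ['1a'], ['3b'], ['2b'], ['1b']]
```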
  • ideally, the LPI of the lenticular lens is 50 and the width is 10 inches.
  • in practice, the actual LPI of the lenticular lens is 50.1 for the same 10-inch width.
  • the width of the final image is 5988 pixels. This can be calculated from the following equation:
  Width_actual = (LPI_ideal / LPI_actual) × Width_ideal
  • LPI_ideal is the LPI of the lenticular lens in the ideal situation; here the value is 50.
  • LPI_actual is the actual LPI of the lenticular lens; here the value is 50.1.
  • Width_ideal is the ideal width of the image at 50 LPI, which is 6000.
  • Width_actual is the actual width of the image at 50.1 LPI, which is 5988.
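The width correction then reduces to one line, using the values given above:

```python
lpi_ideal, lpi_actual = 50.0, 50.1
width_ideal = 6000                                    # pixels at exactly 50 LPI
width_actual = round(lpi_ideal / lpi_actual * width_ideal)
print(width_actual)  # 5988
```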
  • the mixed image output module 44 is used to form a mixed image.
  • the mixed image can combine with the lenticular lens to form a 3D photo. There are different methods to realize it.
  • the image can directly be printed on the lenticular lens.
  • the printed image can also be laminated on the lenticular lens, or be placed inside the lenticular lens frame. It is also possible to combine the mixed image with the lenticular lens via other suitable methods.
  • the 3D photo creation system and method of the present application significantly simplify the process of 3D photo creation and enhance the quality of the 3D photo.
  • the 3D photo creation system and method of the present application utilize stereo images as input.
  • the currently available 3D photo camera and 3D lens can be used as the shooting device of the stereo image.
  • the application of image processing technology reconstructs the 3D information from the stereo image, enhancing the quality of the 3D photo. Multi-view angle images can thus be created very quickly and efficiently, with enhanced image quality.
  • the 3D photo creation system and method of the present application first adjust the size of the multi-view angle images. This emphasizes the color details of the outputted mixed image.
  • the 3D photo creation system and method of the present application can be widely used in theme parks, tourist attractions and photo galleries, bringing the pleasure of 3D photos to more consumers.

Abstract

The present application is directed to a 3D photo creation system and method, wherein the 3D photo creation system includes: a stereo image input module configured to input a stereo image, wherein the stereo image comprises a left eye image and a right eye image; a depth estimation module configured to estimate depth information of the stereo image and create a depthmap; a multi-view angle image reconstructing module configured to create a multi-view angle image according to the depthmap and the stereo image; and an image spaced scanning module configured to adjust the multi-view angle image and form a mixed image. The system and method significantly simplify the process of 3D photo creation and enhance the quality of the 3D photo. They can be widely used in theme parks, tourist attractions and photo galleries, bringing the pleasure of 3D photos to more consumers.

Description

CROSS-REFERENCE TO RELATED APPLICATIONS
This present application claims the benefit of U.S. Provisional Patent Application No. 61/761,250 filed on Feb. 6, 2013; the contents of which are hereby incorporated by reference.
FIELD OF THE TECHNOLOGY
The present application is directed to a photo processing system and method. Specifically, it relates to a 3D photo processing system and method.
BACKGROUND
3D photos are commonly created with the application of the lenticular technique. The transparent lens of the lenticular lens is an array of magnifying lenses. Such magnifying lenses array is designed in a way such that when being perceived at slightly different angles, different images are magnified. In order to create a 3D photo, multi-view angle images, such as in 12 or more multi-view angle images, must first be created. Subsequently, the multi-view angle images will be combined into a mixed image. The combination of multi-view angle images is a process of acquiring, from the multi-view angle images, suitable pixels for combining into and forming a new image. The new image comprises the multi-view angle information of the original image. The transparent lens of the lenticular lens is used to reveal the multi-view viewing angles from different viewing angles. Finally, the left and right eyes of the viewer can see different images through observing from the lenticular lens which produces a 3D effect.
Presently, different methods of creating 3D photos exist. In particular, the most common method is to convert 2D image into multi-view angle images using manual operation. Such method requires a processing time of several hours to several days. Normally, the operator is required to create a mask for extracting a subject from the target image. Then, the operator needs to assign a depth information to the mask based on his own judgment. The depth information is an independent grayscale image possessing the same dimensions as the original 2D image. The grayscale image applies the various shades of gray color to indicate the depth of every part of the image. The manually created depth information leads the computer to shift the pixel of the original 2D image for forming a new view angle map. The depthmap can produce a conspicuous 3D visual effect.
Another method is to photo-shoot the subject from multi-view angles. However, such method is not feasible when applying to subject in motion. Such method requires the set up of one or multiple cameras to capture the multi-view angle images. The image capturing device must be positioned with scrutiny so that the view angle of the image outputted would not be overly wide.
The multi-view angle images are used to construct the mixed image. The majority of systems construct the mixed image directly from the data in the multi-view angle images. Since the final image is a sub-sample of each multi-view angle image, the image obtained by such a method cannot preserve the quality of the original image.
Based on the above, current 3D photo creation methods and systems suffer from deficiencies such as long processing times and poor photo quality.
SUMMARY
The present patent application is directed to a 3D photo creation system and method. In one aspect, the 3D photo creation system includes:
(a) a stereo image input module configured to input a stereo image; wherein the stereo image comprises a left eye image and a right eye image;
(b) a depth estimation module configured to estimate depth information of the stereo image and create a depthmap;
(c) a multi-view angle image reconstructing module configured to create a multi-view angle image according to the depthmap and the stereo image; and
(d) an image spaced scanning module configured to adjust the multi-view angle image and form a mixed image.
The depth estimation module may include:
(b1) a pixel matching module configured to compare the left eye image and the right eye image of the stereo image and find a matching pixel between the left eye image and the right eye image, and find an optical flow of the pixel according to an optical flow constraint formula;
(b2) a depth information confirmation module configured to find a pixel shifting according to the optical flow of the left eye image and the right eye image to confirm the depth information of the pixel; and
(b3) a depthmap creation module configured to create the depthmap according to the depth information.
The multi-view angle image reconstructing module may include:
(c1) a base image selection module configured to select the left eye image, the right eye image, or the left eye image and the right eye image of the stereo image as a base image;
(c2) an image number confirmation module configured to confirm a number and disparity of a required image according to demand;
(c3) a pixel shifting module configured to shift pixels of the base image according to the depthmap to form a new image;
(c4) a hole filling module configured to fill holes formed from a loss of pixels in the new image; and
(c5) a multi-view angle image creation module configured to create the multi-view angle image.
The image spaced scanning module may include:
(d1) an image adjusting module configured to adjust a size of the multi-view angle image;
(d2) a contrast adjusting module configured to adjust a contrast ratio of the adjusted multi-view angle image outputted by the image adjusting module;
(d3) an image interlacing module configured to combine the multi-view angle images after the contrast adjustment into a mixed image; and
(d4) a mixed image output module configured to output the mixed image.
The hole filling module applies the interpolation method to fill the holes formed from the loss of pixels in the new image.
In another aspect, the 3D photo creation method includes the following steps:
S1) inputting a stereo image; wherein the stereo image comprises a left eye image and a right eye image;
S2) estimating a depth information of the stereo image and creating a depthmap;
S3) creating a multi-view angle image according to the depthmap and the stereo image; and
S4) adjusting the multi-view angle image to form a mixed image.
The step S2 may include the following steps:
S21) comparing the left eye image and the right eye image of the stereo image and finding a matching pixel between the left eye image and the right eye image, and calculating an optical flow of the pixel according to an optical flow constraint formula;
S22) finding a shifting of the pixel according to the optical flow of the left eye image and the right eye image to confirm the depth information of the pixel; and
S23) creating the depthmap according to the depth information.
The step S3 may include:
S31) selecting the left eye image, the right eye image or the left eye image and the right eye image of the stereo image as a base image;
S32) confirming a number and disparity of a required image according to demand;
S33) shifting pixels of the base image according to the depthmap to form a new image;
S34) filling holes formed from a loss of pixels in the new image; and
S35) creating the multi-view angle image.
The step S4 may include:
S41) adjusting a size of the multi-view angle image;
S42) adjusting a contrast ratio of the multi-view angle image adjusted by the step S41;
S43) combining the multi-view angle image after the contrast adjustment and forming a mixed image;
S44) outputting the mixed image.
The step S34 applies the interpolation method to fill the holes formed from the loss of pixels in the new image.
BRIEF DESCRIPTION OF THE DRAWINGS
Below is a further description of the present application with reference to the drawings and embodiments, in the drawings:
FIG. 1 is a diagram of a 3D photo creation system of the present application;
FIG. 2 is a diagram of a depth estimation module in the 3D photo creation system of the present application;
FIG. 3 is a diagram of a multi-view angle image reconstructing module in the 3D photo creation system of the present application;
FIG. 4 is a diagram of an image spaced scanning module in the 3D photo creation system of the present application;
FIG. 5 is a flow chart of the 3D photo creation method of the present application;
FIG. 6 is a flow chart of procedure S2 in the 3D photo creation method of the present application;
FIG. 7 is a flow chart of procedure S3 in the 3D photo creation method of the present application;
FIG. 8 is a flow chart of procedure S4 in the 3D photo creation method of the present application;
FIG. 9 is an illustrative view of a stereo image inputted by the 3D photo creation system of the present application;
FIG. 10 is an illustrative view of a comparison between a depthmap formed from the 3D photo creation system of the present application and an original image;
FIG. 11 is a multi-view angle image after adjustment;
FIG. 12 is an illustrative view of a mixed image.
DETAILED DESCRIPTION
In order to have a more lucid understanding on the technical feature, purpose and effect of the present application, a detailed description of the embodiments of the present application with reference to the drawings is hereby provided.
FIGS. 1 to 4 illustrate a diagram of an embodiment of the 3D photo creation system of the present application. Such a 3D photo creation system includes a stereo image input module 1, a depth estimation module 2, a multi-view angle image reconstructing module 3 and an image spaced scanning module 4. In particular, the stereo image input module 1 is used to input a stereo image, which includes a left eye image and a right eye image; the depth estimation module 2 is used to estimate the depth information of the stereo image and create a depthmap; the multi-view angle image reconstructing module 3 is used to create multi-view angle images according to the depthmap and the stereo image; the image spaced scanning module 4 is used to adjust the multi-view angle images and form a mixed image.
In the 3D photo creation system of the present application, the depth estimation module 2 further includes a pixel matching module 21, a depth information confirmation module 22 and a depthmap creation module 23. In particular, the pixel matching module 21 is used to compare the left eye image and right eye image of the stereo image, find the matching pixels between the left eye image and the right eye image, and calculate the optical flow of the pixels according to the optical flow constraint formula. A matching pixel refers to the pixel at the same pixel location in the left eye image and right eye image. The depth information confirmation module 22 is used to find the shifted location of a pixel according to the optical flow of the left eye image and right eye image, for confirming the depth information of the pixel. The depthmap creation module 23 is used to create the depthmap according to the depth information.
In the 3D photo creation system of the present application, the multi-view angle image reconstructing module 3 further includes: a base image selection module 31, an image number confirmation module 32, a pixel shifting module 33, a hole filling module 34 and a multi-view angle image creation module 35. In particular, the base image selection module 31 is used to select the left eye image, the right eye image, or the left eye image and the right eye image of the stereo image as the base image. The image number confirmation module 32 is used to confirm the required number and disparity of the images according to demand. The pixel shifting module 33 is used to shift the pixels of the base image according to the depthmap to form a new image. The hole filling module 34 is used to fill the holes formed from the loss of pixels in the new image. The multi-view angle image creation module 35 is used to create the multi-view angle images.
In the 3D photo creation system of the present application, the image spaced scanning module 4 further includes: an image adjusting module 41, a contrast adjusting module 42, an image interlacing module 43 and a mixed image output module 44. In particular, the image adjusting module 41 is used to adjust the size of the multi-view angle images. The contrast adjusting module 42 is used to adjust the contrast ratio of the adjusted multi-view angle images as outputted by the image adjusting module. The image interlacing module 43 is used to combine the multi-view angle images after contrast adjustment into a mixed image. The mixed image output module 44 is used to output the mixed image.
FIGS. 5 to 8 are flow charts of the 3D photo creation method of the present application, which include the following steps:
S1 inputs a stereo image, the stereo image includes a left eye image and a right eye image;
S2 estimates the depth information of the stereo image and creates a depthmap;
S3 creates a multi-view angle image according to the depthmap and the stereo image;
S4 adjusts the multi-view angle image and forms a mixed image.
In particular, step S2 further includes the following steps:
S21 compares the left eye image and right eye image of the stereo image and finds the matching pixel between the left eye image and the right eye image; and calculates the optical flow according to the optical flow constraint formula.
S22 finds the pixel shifting according to the optical flow of the left eye image and the right eye image for confirming the depth information of the pixel;
S23 creates depthmap according to depth information.
Step S3 further includes:
S31 selects the left eye image, the right eye image, or the left eye image and the right eye image of the stereo image as the base image;
S32 confirms the required number and disparity of the images according to demand;
S33 forms a new image from shifting the pixel of the base image according to depthmap;
S34 fills the hole formed from loss of pixel in the new image;
S35 creates multi-view angle image.
Step S4 further includes:
S41 adjusts the size of the multi-view angle image;
S42 adjusts the contrast ratio of the multi-view angle image adjusted in step S41;
S43 combines the multi-view angle images after contrast adjustment and forms a mixed image;
S44 outputs the mixed image.
The above introduces the structure of the 3D photo creation system of the present application and the specific steps of the 3D photo creation method of the present application. Below is a description of the working principle of the 3D photo creation system and method in combination with specific examples. The 3D photo creation system of the present application takes a stereo image as input. It automatically compares the stereo image pair and calculates the 3D information (also known as the depthmap). A multi-view angle image is then created by shifting the pixels of the original input image according to the depth information. In order to enhance the quality of the final mixed image, the 3D photo creation system of the present application adjusts the created images to a suitable size, and the adjusted images are then combined together. Lastly, the mixed image formed can be displayed on a glasses-free 3D display device, or be combined with a lenticular sheet to form a 3D photo.
In the 3D photo creation system of the present application, the stereo image input module 1 is used to input a stereo image. The stereo image is a stereomap that can produce a 3D visual effect; it is an image that brings a depth sensing experience to the observer through stereo observation with both eyes. Such a stereomap can be obtained with one of many techniques.
The stereo image can also be a 3D image directly. In the present embodiment, the input is a stereo image comprising a left eye image and a right eye image, with the specific image as illustrated in FIG. 9.
The depth estimation module 2 is used to analyze the depth information of the stereo image inputted by the stereo image input module 1, for reconstructing the multi-view angle image. The depth estimation steps are illustrated in FIG. 6. The depth estimation module 2 includes the pixel matching module 21, the depth information confirmation module 22 and the depthmap creation module 23. In particular, the pixel matching module 21 is used to compare the left eye image and right eye image of the inputted stereo image to find the matching pixels of the two, that is, the pixels at the same pixel location in the left eye image and right eye image. The subject in the left eye image and right eye image of the stereo image exhibits a displacement, also known as disparity. To extract the disparity, matching methods such as optical flow and stereo matching are applied to find the pixel shifting between the left eye image and the right eye image. Optical flow is the pattern of apparent motion of subjects, surfaces or edges in a visual scene caused by the relative motion between an observer (such as an eye or camera) and the scene. Optical flow estimation calculates the optical flow using the optical flow constraint formula. To find the matching pixels, the images must be compared and the well-known optical flow constraint must be satisfied:
I_x·V_x + I_y·V_y + I_t = 0
wherein V_x and V_y are respectively the x and y components of the velocity, or optical flow, of I(x, y, t), and I_x, I_y and I_t are the derivatives of the image at (x, y, t) in the corresponding directions. A coarse-to-fine strategy can be adopted to determine the optical flow of the pixel. Various robust methods exist for enhancing the disparity estimation, such as the "high accuracy optic flow estimation based on a theory for warping."
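The constraint above can be illustrated with a minimal pure-Python sketch for the horizontal-only case (V_y = 0), where the shift between a left-image scanline and a right-image scanline is recovered as V_x = −I_t/I_x. The function name and the central-difference derivative are illustrative assumptions, not the patent's implementation, which applies a coarse-to-fine strategy over full images:

```python
def estimate_flow_x(left_row, right_row):
    # Horizontal-only optical flow constraint: I_x * V_x + I_t = 0,
    # so V_x = -I_t / I_x wherever the spatial gradient is non-zero.
    flows = []
    for x in range(1, len(left_row) - 1):
        ix = (left_row[x + 1] - left_row[x - 1]) / 2.0  # spatial derivative I_x
        it = float(right_row[x] - left_row[x])          # "temporal" derivative I_t
        if abs(ix) > 1e-6:
            flows.append(-it / ix)
    return sum(flows) / len(flows) if flows else 0.0
```

For a scanline shifted by one pixel between the two views, this recovers a flow of 1.0, i.e. a disparity of one pixel.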
After matching the pixels, the depth information can be derived from the disparity information and the camera configuration. The displacement of a pixel indicates its depth. Yet, most 3D stereo capturing devices converge the cameras or lenses to a point. In other words, the direction of the optical flow must be considered when calculating the depth of every pixel. The depth information confirmation module 22 confirms the depth information of the pixel.
The depth information of each pixel can be calculated with the following equation:
depth = maxdisplacement − direction × √(u² + v²)
wherein maxdisplacement is the maximum displacement of the pixel, direction is the direction of the optical flow, and u and v are respectively the optical flow vectors of each pixel in the x and y directions. Such depth information can be used to reconstruct the 3D environment (i.e. the depthmap). The depthmap is represented by a grayscale image recognized by the computer. The depthmap creation module 23 is used to create the depthmap. Normally the depth value of a pixel ranges from 0 to 255: the higher the depth value of the pixel, the closer it is to the observer. In order to enhance the quality of the 3D photo, the 3D photo creation system of the present application separates the foreground scene and the background scene in the depthmap. The system uses pixel depth values ranging within 99 to 255 to represent the foreground scene and pixel depth values ranging within 0 to 128 to represent the background scene. The foreground scene depth information and the background scene depth information overlap to a certain extent; in the present embodiment, the overlap ranges from 99 to 128, and the range of overlap can be adjusted by the user. This process increases the contrast between the foreground scene and the background scene. Furthermore, the main subject in the foreground and the depth detail of the background are enhanced. FIG. 10 is an illustrative view of an image separating the foreground scene and the background scene.
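The per-pixel depth calculation and the foreground/background split described above can be sketched as follows. The clamping into the 0–255 grayscale range and the function names are assumptions for illustration; the patent specifies only the formula and the value ranges:

```python
import math

def pixel_depth(u, v, direction, max_displacement=255.0):
    # depth = maxdisplacement - direction * sqrt(u^2 + v^2),
    # clamped into the 0-255 grayscale range of the depthmap
    depth = max_displacement - direction * math.sqrt(u * u + v * v)
    return max(0, min(255, int(round(depth))))

def split_foreground_background(depthmap):
    # foreground: depth values 99-255; background: depth values 0-128;
    # the 99-128 overlap is kept in both, per the described embodiment
    fg = [[d if d >= 99 else 0 for d in row] for row in depthmap]
    bg = [[d if d <= 128 else 0 for d in row] for row in depthmap]
    return fg, bg
```

A pixel with a depth value of 110 falls in the overlap range and appears in both the foreground and background maps, which is what increases the contrast between the two scenes.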
The multi-view angle image reconstructing module 3 is used to reconstruct the multi-view angle images and includes the base image selection module 31, the image number confirmation module 32, the pixel shifting module 33, the hole filling module 34 and the multi-view angle image creation module 35. The base image selection module 31 can select the left eye image, the right eye image, or both the left eye image and the right eye image of the stereo image as the base image for producing the multi-view angle images. The multi-view angle image reconstruction process is illustrated in FIG. 7. If a single image is selected, such as the left eye image or the right eye image, then the created images serve as the left-eye and right-eye views of the selected image. If 2N+1 images must be created, the selected image serves as the (N+1)th image, and images 1 to N and N+2 to 2N+1 are created. For example, if 9 images must be created, the selected image is the 5th image, and images 1 to 4 and 6 to 9 are created.
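The rule above, placing a single base image at the (N+1)th position among 2N+1 views, can be stated as a one-line helper (the function name is a hypothetical choice):

```python
def base_index(total_views):
    # For 2N+1 views built from one base image, the base occupies
    # position N+1 (1-indexed), e.g. the 5th of 9 views.
    assert total_views % 2 == 1, "rule applies to an odd number of views"
    return (total_views + 1) // 2
```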
The image number confirmation module 32 is used to confirm the number of images according to need. On the other hand, if two images (the left eye image and the right eye image) are selected as the base images for creating the multi-view angle images, that is, the stereo image is selected as the base image, then the system uses the disparity and the number of images required to determine the positions of the two selected images. In the present embodiment, the system first confirms the number of images required to be created. The number of multi-view angle images depends on the LPI (lines per inch) of the lenticular lens and the DPI (dots per inch) of the printer: N = DPI/LPI, wherein DPI is the dots per inch of the printer and LPI is the lines per inch of the lenticular lens. For example, if the lenticular lens has 50 lines per inch and the printer prints 600 dots per inch, the number of images required is 600/50 = 12. Therefore, 12 images are required to construct a suitable 3D image. The positions of the original stereo image pair are determined by the following equations:
original left image position = (N − D/d)/2
original right image position = left image position + D/d
wherein N is the number of multi-view angle images, D is the disparity of the original stereo image and d is the disparity of each view angle of the created multi-view angle images. The original stereo image is inserted into a suitable position among the multi-view angle images, and the other view angle images are created from the original stereo image. This method evenly distributes the multi-view angle images, that is, these images possess similar disparities. Such a method can also enhance the quality of the final mixed image.
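The view-count rule N = DPI/LPI and the position equations can be sketched as below. This is a minimal sketch assuming the left-position equation centers the stereo pair among the N views, i.e. left position = (N − D/d)/2; the function names are illustrative:

```python
def view_count(dpi, lpi):
    # number of multi-view angle images: N = DPI / LPI
    return dpi // lpi

def stereo_pair_positions(n_views, stereo_disparity, per_view_disparity):
    # left position  = (N - D/d) / 2
    # right position = left position + D/d
    span = stereo_disparity / per_view_disparity
    left = (n_views - span) / 2
    right = left + span
    return left, right
```

For the 600 DPI / 50 LPI example this gives 12 views; if the stereo pair spans 4 view-steps of disparity, the pair sits at positions 4 and 8, evenly spaced among the 12 views.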
After determining the number of images required and the positions of all images, the system manipulates the depthmap to create the multi-view angle images. The depthmaps of the left eye image and the right eye image have already been formed in the preceding part. The base images, such as the left eye image or the right eye image, have their pixels shifted according to their own depthmaps. The pixel shifting module 33 is used to shift the pixels of the base image to form a new image. Normally, the depth value of the depthmap has a mid-value of 128 in the range 0 to 255, which is the convergence point of the base image. To simulate a left eye image from the base image, pixels with depth values ranging within 128 to 255 are shifted to the right side, and pixels with depth values ranging within 0 to 127 are shifted to the left side. To simulate a right eye image from the base image, pixels with depth values ranging within 128 to 255 are shifted to the left side, and pixels with depth values ranging within 0 to 127 are shifted to the right side. From 128 to 255, the greater the depth value of the pixel, the greater the shifting distance; from 0 to 127, the smaller the depth value of the pixel, the greater the shifting distance. Below are the equations for the pixel shifting:
lx=x+parallax; rx=x−parallax
wherein parallax is a disparity parameter derived from the depth information of the image, lx is the x-coordinate of the left eye image pixel, and rx is the x-coordinate of the right eye image pixel. The pixel at (lx, y) in the new left eye image is the pixel at (x, y) in the base image; the pixel at (rx, y) in the new right eye image is the pixel at (x, y) in the base image. After suitable shifting of the pixels, the left eye image and the right eye image are finally created.
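The shifting rule lx = x + parallax, rx = x − parallax can be sketched for one image row, with depth 128 as the convergence plane. The linear depth-to-parallax mapping and the max_parallax parameter are illustrative assumptions; the patent does not specify how parallax is derived from the depth value:

```python
def shift_row(base_row, depth_row, max_parallax=8):
    # Simulate left/right views of one image row: lx = x + parallax,
    # rx = x - parallax, with depth 128 as the convergence plane.
    width = len(base_row)
    left, right = [None] * width, [None] * width
    for x, (pixel, depth) in enumerate(zip(base_row, depth_row)):
        parallax = round((depth - 128) / 127 * max_parallax)  # assumed linear mapping
        lx, rx = x + parallax, x - parallax
        if 0 <= lx < width:
            left[lx] = pixel
        if 0 <= rx < width:
            right[rx] = pixel
    return left, right  # None entries are the holes filled in the next step
```

Pixels at the convergence depth (128) stay in place; a foreground pixel that moves away leaves a None entry behind, which is exactly the hole the next step must fill.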
When the system creates a new image, the new image loses some pixels. The process of handling these lost pixels is known as hole filling, and the hole filling module 34 is used to fill these holes. The holes produced by the shifting of the pixels can be filled by interpolating the neighboring pixels, or by other suitable hole filling methods. The formulas for calculating the pixel value of the holes using the interpolation method are shown below:
length = endx − startx
weight = (holex − startx)/length
pixelvalue = (sourceImage(endx, y) − sourceImage(startx, y)) × weight + sourceImage(startx, y)
wherein startx and endx are the starting and ending positions of the holes in the row, length is the length of the holes, holex is the x position of a hole, weight is the weight value of the hole and pixelvalue is the pixel value of the hole. After the holes are filled, the newly created view angle image is ready and the process can proceed to the next step. The multi-view angle image creation module 35 creates the multi-view angle images from the original image and the newly created images.
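The interpolation formulas above can be sketched for a single row, with holes marked as None. The handling of holes touching the row edges is an assumption of this sketch; the patent only defines interior holes bounded by startx and endx:

```python
def fill_holes_row(row):
    # Linear interpolation across each run of holes (None), using the
    # known pixels at startx (before the run) and endx (after the run).
    filled = row[:]
    x = 0
    while x < len(filled):
        if filled[x] is None:
            startx = x - 1
            endx = x
            while endx < len(filled) and filled[endx] is None:
                endx += 1
            if startx < 0 or endx >= len(filled):
                x = endx
                continue  # edge holes: left unfilled in this sketch
            length = endx - startx
            for holex in range(startx + 1, endx):
                weight = (holex - startx) / length
                filled[holex] = (filled[endx] - filled[startx]) * weight + filled[startx]
            x = endx
        else:
            x += 1
    return filled
```

For a two-pixel hole between values 0 and 30, the weights 1/3 and 2/3 yield the interpolated values 10 and 20.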
The image spaced scanning module 4 is used to adjust the multi-view angle images created previously and form a mixed image, and includes the image adjusting module 41, the contrast adjusting module 42, the image interlacing module 43 and the mixed image output module 44. FIG. 8 illustrates the process of forming the mixed image. In order to enhance the quality of the final mixed image, the system first adjusts each image to a suitable width. The image adjusting module 41 is used to adjust the size of the multi-view angle images. Taking the aforementioned 12 images as an example, the final printed image is 600 pixels wide per inch. As there are 12 images, each image is adjusted to 600/12 = 50 pixels of width per inch. The adjusted images and the original image possess the same height. As FIG. 11 illustrates, the 12 adjusted images and the original image possess the same height but different widths. Subsequently, the system increases the contrast of these adjusted images; the contrast adjusting module 42 is used to adjust the contrast. These two processes emphasize the color details of the final mixed image.
The final image formed is a mixed image of the 12 images. The image interlacing module 43 is used to form the mixed image. In this embodiment, the image is reconstructed at 600 pixels per inch. In order to fit the lenticular lens of 50 LPI, the mixed image includes 600/12 = 50 bands per inch, and each band contains 12 pixels. FIG. 12 is an illustrative view of a mixed band of the image. The pixels are extracted from the 12 images in the order from 12 to 1; normally, the right eye view image, that is, the 12th image, forms the first row of each band. In FIG. 12, the first band is a combination of the first row of each image, the second band is a combination of the second row of each image, and so on.
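The interlacing step can be sketched as follows: N resized views are combined so that each band of N pixels takes one pixel from each view, ordered from view N down to view 1. The list-of-lists image representation and the function name are illustrative assumptions:

```python
def interlace(images, n_views):
    # images: list of n_views 2-D pixel arrays, each already resized to the
    # sub-image width; one band per lenticule, n_views pixels per band,
    # drawn from the views in the order n_views down to 1.
    height = len(images[0])
    width = len(images[0][0])
    mixed = [[0] * (width * n_views) for _ in range(height)]
    for band in range(width):
        for slot in range(n_views):
            src = images[n_views - 1 - slot]  # view N first, view 1 last
            for y in range(height):
                mixed[y][band * n_views + slot] = src[y][band]
    return mixed
```

With two 1×2 views, the mixed row interleaves view 2 and view 1 pixel by pixel, giving two bands of two pixels each.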
In reality, most lenticular lenses do not possess the ideal lines per inch (LPI) value. For example, sometimes the LPI is 50.1 or 49.9 instead of 50, which leads to distortion of the final 3D image. Therefore, the system finally adjusts the scale of the image to fit the actual lenticular lens. For example, under the ideal situation, if the LPI of the lenticular lens is 50 and the width is 10 inches, the width of the image is 50×12×10 = 6000 pixels. Yet if the LPI of the lenticular lens is 50.1 and the width is 10 inches, the width of the final image is 5988 pixels. This is calculated from the following equation:
Width_actual = (LPI_ideal / LPI_actual) × Width_ideal
wherein LPI_ideal is the LPI of the lenticular lens in the ideal situation (50 in this embodiment), LPI_actual is the actual LPI of the lenticular lens (50.1 in this embodiment), Width_ideal is the ideal width of the image at 50 LPI, which is 6000, and Width_actual is the actual width of the image at 50.1 LPI, which is 5988. The mixed image output module 44 is used to output the mixed image.
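The final scaling follows directly from the equation; rounding to whole pixels is an assumption of this sketch:

```python
def actual_width(lpi_ideal, lpi_actual, width_ideal):
    # Width_actual = LPI_ideal / LPI_actual * Width_ideal,
    # rounded to a whole number of pixels
    return round(lpi_ideal / lpi_actual * width_ideal)
```

For the 10-inch example: 50/50.1 × 6000 ≈ 5988 pixels, matching the figure given above.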
The mixed image can be combined with the lenticular lens to form a 3D photo. There are different methods to realize this: the image can be printed directly on the lenticular lens; the printed image can be laminated onto the lenticular lens, or be placed inside a lenticular lens frame. It is also possible to combine the mixed image with the lenticular lens via other suitable methods.
The 3D photo creation system and method of the present application significantly simplify the process of 3D photo creation and enhance the quality of the 3D photo. The 3D photo creation system and method of the present application utilize stereo images as input, so currently available 3D photo cameras and 3D lenses can be used as the shooting devices for the stereo image. The application of image processing technology reconstructs the 3D information from the stereo image and enhances the quality of the 3D photo. This allows multi-view angle images to be created quickly and efficiently while enhancing the quality of the created images. In order to further enhance the quality of the mixed image, the 3D photo creation system and method of the present application first adjust the size of the multi-view angle images, which emphasizes the color details of the outputted mixed image. The 3D photo creation system and method of the present application can be widely used in theme parks, tourist attractions and photo galleries, bringing the pleasure of 3D photos to more consumers.
The above is a description of the embodiments of the present application with reference to the drawings. However, the present application is not limited to the above specific embodiments, which are merely illustrative rather than limitative in nature. Those skilled in the art, under the inspiration of the present application and without departing from the purpose of the present application and the protection scope of the claims, may also make many variations in form, all of which belong within the protection scope of the present application.

Claims (14)

What is claimed is:
1. A 3D photo creation system, comprising:
(a) a stereo image input module configured to input a stereo image; wherein the stereo image comprises a left eye image and a right eye image;
(b) a depth estimation module configured to estimate a depth information of the stereo image and create a depthmap;
(c) a multi-view angle image reconstructing module configured to create a set of multi-view angle images comprising a plurality of images having different disparities according to the depthmap and the stereo image; and
(d) an image spaced scanning module configured to adjust the set of multi-view angle images and form a mixed image;
wherein the depth estimation module comprising:
(b1) a pixel matching module configured to compare the left eye image and the right eye image of the stereo image and find a matching pixel between the left eye image and the right eye image, and find an optical flow of the pixel according to an optical flow constraint formula;
(b2) a depth information confirmation module configured to find a pixel shifting according to the optical flow of the left eye image and the right eye image to confirm the depth information of the pixel; and
(b3) a depthmap creation module configured to create the depthmap according to the depth information;
wherein a scale of the mixed image is adjusted and the scale is calculated by an equation

Widthactual=LPIideal/LPIactual×Widthideal
where the LPIideal is a line per inch of a lenticular lens at an ideal situation, the LPIactual is an actual line per inch of the lenticular lens; the Widthideal is an ideal width of the lenticular lens, and the Widthactual is an actual width of the lenticular lens.
2. The 3D photo creation system according to claim 1, wherein the multi-view angle image reconstructing module comprising:
(c1) a base image selection module configured to select the left eye image, the right eye image or the left eye image and the right eye image of the stereo image as a base image;
(c2) an image number confirmation module configured to confirm a number and disparity of a required image according to demand;
(c3) a pixel shifting module configured to shift pixels of the base image according to the depthmap to form a new image;
(c4) a hole filling module configured to fill holes formed from a loss of pixels in the new image; and
(c5) a multi-view angle image creation module configured to create the set of multi-view angle images.
3. The 3D photo creation system according to claim 2, wherein the image spaced scanning module comprising:
(d1) an image adjusting module configured to adjust a size of the set of multi-view angle images;
(d2) a contrast adjusting module configured to adjust a contrast ratio of the adjusted set of multi-view angle images outputted by the image adjusting module;
(d3) an image interlacing module configured to combine the set of multi-view angle images after the contrast adjustment into a mixed image; and
(d4) a mixed image output module configured to output the mixed image.
4. The 3D photo creation system according to claim 2, wherein the hole filling module applies the interpolation method to fill the holes formed from the loss of pixels in the new image.
5. The 3D photo creation system according to claim 2, wherein the number of the required image is determined by an equation N=DPI/LPI, where the DPI is a dot per inch of a printer, and the LPI is a line per inch of a lenticular lens.
6. The 3D photo creation system according to claim 1, wherein the optical flow of the pixel is determined by a coarse-to-fine strategy.
7. The 3D photo creation system according to claim 1, wherein a foreground scene and a background scene are separated in the depthmap, the pixel at a depth value ranging within 99 to 255 represents the foreground scene and the pixel at the depth value ranging within 0 to 128 represents the background scene.
8. A 3D photo creation method, wherein comprising the following steps:
S1) inputting a stereo image; wherein the stereo image comprises a left eye image and a right eye image;
S2) estimating depth information of the stereo image and creating a depthmap;
S3) creating a set of multi-view angle images comprising a plurality of images having different disparities according to the depthmap and the stereo image; and
S4) adjusting the set of multi-view angle images to form a mixed image;
wherein the step S2 comprises the following steps:
S21) comparing the left eye image and the right eye image of the stereo image and finding a matching pixel between the left eye image and the right eye image, and calculating an optical flow of the pixel according to an optical flow constraint formula;
S22) finding a shifting of the pixel according to the optical flow of the left eye image and the right eye image to confirm the depth information of the pixel; and
S23) creating the depthmap according to the depth information;
wherein the step S3 comprises:
S31) selecting the left eye image, the right eye image or the left eye image and the right eye image of the stereo image as a base image;
S32) confirming a number and disparity of a required image according to demand;
S33) shifting pixels of the base image according to the depthmap to form a new image;
S34) filling holes formed from a loss of pixels in the new image; and
S35) creating the set of multi-view angle images;
wherein a scale of the mixed image is adjusted and the scale is calculated by the equation

Width_actual = (LPI_ideal / LPI_actual) × Width_ideal

where LPI_ideal is the lines per inch of a lenticular lens in an ideal situation, LPI_actual is the actual lines per inch of the lenticular lens, Width_ideal is the ideal width of the lenticular lens, and Width_actual is the actual width of the lenticular lens.
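The width-scaling equation can be evaluated directly: when the actual LPI matches the ideal LPI the width is unchanged, and when the lens is coarser or finer than ideal the mixed image is stretched or shrunk accordingly. A minimal helper (the function name is an assumption):

```python
def actual_width(lpi_ideal, lpi_actual, width_ideal):
    """Width_actual = (LPI_ideal / LPI_actual) * Width_ideal.

    Scales the mixed image so its column pitch matches the
    measured pitch of the lenticular lens actually in use.
    """
    return lpi_ideal / lpi_actual * width_ideal
```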
9. The 3D photo creation method according to claim 8, wherein the step S3 comprises:
S31) selecting the left eye image, the right eye image or the left eye image and the right eye image of the stereo image as a base image;
S32) confirming a number and disparity of a required image according to demand;
S33) shifting pixels of the base image according to the depthmap to form a new image;
S34) filling holes formed from a loss of pixels in the new image; and
S35) creating the set of multi-view angle images.
10. The 3D photo creation method according to claim 9, wherein the step S4 comprises:
S41) adjusting a size of the set of multi-view angle images;
S42) adjusting a contrast ratio of the set of multi-view angle images adjusted by the step S41;
S43) combining the set of multi-view angle images after the contrast adjustment to form a mixed image; and
S44) outputting the mixed image.
11. The 3D photo creation method according to claim 9, wherein the step S34 applies an interpolation method to fill the holes formed from the loss of pixels in the new image.
12. The 3D photo creation method according to claim 9, wherein the number of required images is determined by the equation N = DPI/LPI, where DPI is the dots per inch of a printer and LPI is the lines per inch of a lenticular lens.
13. The 3D photo creation method according to claim 8, wherein the optical flow of the pixel is determined by a coarse-to-fine strategy.
14. The 3D photo creation method according to claim 8, wherein a foreground scene and a background scene are separated in the depthmap, a pixel having a depth value in the range of 99 to 255 represents the foreground scene and a pixel having a depth value in the range of 0 to 128 represents the background scene.
US14/172,888 2013-02-06 2014-02-04 3D photo creation system and method Active 2034-02-14 US9270977B2 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US14/172,888 US9270977B2 (en) 2013-02-06 2014-02-04 3D photo creation system and method
US14/995,208 US9544576B2 (en) 2013-02-06 2016-01-14 3D photo creation system and method

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201361761250P 2013-02-06 2013-02-06
US14/172,888 US9270977B2 (en) 2013-02-06 2014-02-04 3D photo creation system and method

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US14/995,208 Continuation US9544576B2 (en) 2013-02-06 2016-01-14 3D photo creation system and method

Publications (2)

Publication Number Publication Date
US20140219551A1 US20140219551A1 (en) 2014-08-07
US9270977B2 true US9270977B2 (en) 2016-02-23

Family

ID=50896635

Family Applications (2)

Application Number Title Priority Date Filing Date
US14/172,888 Active 2034-02-14 US9270977B2 (en) 2013-02-06 2014-02-04 3D photo creation system and method
US14/995,208 Active US9544576B2 (en) 2013-02-06 2016-01-14 3D photo creation system and method

Family Applications After (1)

Application Number Title Priority Date Filing Date
US14/995,208 Active US9544576B2 (en) 2013-02-06 2016-01-14 3D photo creation system and method

Country Status (3)

Country Link
US (2) US9270977B2 (en)
CN (1) CN103974055B (en)
HK (3) HK1189451A2 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160134859A1 (en) * 2013-02-06 2016-05-12 City Image Technology Ltd. 3D Photo Creation System and Method

Families Citing this family (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104270627A (en) * 2014-09-28 2015-01-07 联想(北京)有限公司 Information processing method and first electronic equipment
US9773155B2 (en) 2014-10-14 2017-09-26 Microsoft Technology Licensing, Llc Depth from time of flight camera
US11189043B2 (en) 2015-03-21 2021-11-30 Mine One Gmbh Image reconstruction for virtual 3D
US11550387B2 (en) * 2015-03-21 2023-01-10 Mine One Gmbh Stereo correspondence search
WO2016154123A2 (en) 2015-03-21 2016-09-29 Mine One Gmbh Virtual 3d methods, systems and software
US10373366B2 (en) 2015-05-14 2019-08-06 Qualcomm Incorporated Three-dimensional model generation
US9911242B2 (en) 2015-05-14 2018-03-06 Qualcomm Incorporated Three-dimensional model generation
US10304203B2 (en) * 2015-05-14 2019-05-28 Qualcomm Incorporated Three-dimensional model generation
CN105100778A (en) * 2015-08-31 2015-11-25 深圳凯澳斯科技有限公司 Method and device for converting multi-view stereoscopic video
US10341568B2 (en) 2016-10-10 2019-07-02 Qualcomm Incorporated User interface to assist three dimensional scanning of objects
CN109509146B (en) 2017-09-15 2023-03-24 腾讯科技(深圳)有限公司 Image splicing method and device and storage medium
CN107580207A (en) * 2017-10-31 2018-01-12 武汉华星光电技术有限公司 The generation method and generating means of light field 3D display cell picture
CN115222793A (en) * 2017-12-22 2022-10-21 展讯通信(上海)有限公司 Method, device and system for generating and displaying depth image and readable medium
US10986325B2 (en) * 2018-09-12 2021-04-20 Nvidia Corporation Scene flow estimation using shared features
TWI683136B (en) * 2019-01-03 2020-01-21 宏碁股份有限公司 Video see-through head mounted display and control method thereof
WO2022060387A1 (en) * 2020-09-21 2022-03-24 Leia Inc. Multiview display system and method with adaptive background

Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060056679A1 (en) * 2003-01-17 2006-03-16 Koninklijke Philips Electronics, N.V. Full depth map acquisition
US20070024614A1 (en) * 2005-07-26 2007-02-01 Tam Wa J Generating a depth map from a two-dimensional source image for stereoscopic and multiview imaging
US20080247670A1 (en) * 2007-04-03 2008-10-09 Wa James Tam Generation of a depth map from a monoscopic color image for rendering stereoscopic still and video images
CN101512601A (en) 2006-09-04 2009-08-19 皇家飞利浦电子股份有限公司 Method for determining a depth map from images, device for determining a depth map
US20100046837A1 (en) * 2006-11-21 2010-02-25 Koninklijke Philips Electronics N.V. Generation of depth map for an image
US20100183236A1 (en) * 2009-01-21 2010-07-22 Samsung Electronics Co., Ltd. Method, medium, and apparatus of filtering depth noise using depth information
US20110188773A1 (en) * 2010-02-04 2011-08-04 Jianing Wei Fast Depth Map Generation for 2D to 3D Conversion
US20110254834A1 (en) * 2010-04-14 2011-10-20 Lg Chem, Ltd. Stereoscopic image display device
US20110274366A1 (en) * 2010-05-07 2011-11-10 Microsoft Corporation Depth map confidence filtering
US8248410B2 (en) * 2008-12-09 2012-08-21 Seiko Epson Corporation Synthesizing detailed depth maps from images
US20120219236A1 (en) * 2011-02-28 2012-08-30 Sony Corporation Method and apparatus for performing a blur rendering process on an image
US20120237114A1 (en) * 2011-03-16 2012-09-20 Electronics And Telecommunications Research Institute Method and apparatus for feature-based stereo matching
US20130187910A1 (en) * 2012-01-25 2013-07-25 Lumenco, Llc Conversion of a digital stereo image into multiple views with parallax for 3d viewing without glasses

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7085409B2 (en) * 2000-10-18 2006-08-01 Sarnoff Corporation Method and apparatus for synthesizing new video and/or still imagery from a collection of real video and/or still imagery
US8983175B2 (en) * 2005-08-17 2015-03-17 Entropic Communications, Inc. Video processing method and device for depth extraction
KR101114911B1 (en) * 2010-04-14 2012-02-14 주식회사 엘지화학 A stereoscopic image display device
CN103974055B (en) * 2013-02-06 2016-06-08 城市图像科技有限公司 3D photo generation system and method

Patent Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060056679A1 (en) * 2003-01-17 2006-03-16 Koninklijke Philips Electronics, N.V. Full depth map acquisition
US20070024614A1 (en) * 2005-07-26 2007-02-01 Tam Wa J Generating a depth map from a two-dimensional source image for stereoscopic and multiview imaging
CN101512601A (en) 2006-09-04 2009-08-19 皇家飞利浦电子股份有限公司 Method for determining a depth map from images, device for determining a depth map
US20090324059A1 (en) * 2006-09-04 2009-12-31 Koninklijke Philips Electronics N.V. Method for determining a depth map from images, device for determining a depth map
US20100046837A1 (en) * 2006-11-21 2010-02-25 Koninklijke Philips Electronics N.V. Generation of depth map for an image
US20080247670A1 (en) * 2007-04-03 2008-10-09 Wa James Tam Generation of a depth map from a monoscopic color image for rendering stereoscopic still and video images
US8248410B2 (en) * 2008-12-09 2012-08-21 Seiko Epson Corporation Synthesizing detailed depth maps from images
US20100183236A1 (en) * 2009-01-21 2010-07-22 Samsung Electronics Co., Ltd. Method, medium, and apparatus of filtering depth noise using depth information
US20110188773A1 (en) * 2010-02-04 2011-08-04 Jianing Wei Fast Depth Map Generation for 2D to 3D Conversion
US20110254834A1 (en) * 2010-04-14 2011-10-20 Lg Chem, Ltd. Stereoscopic image display device
US20110274366A1 (en) * 2010-05-07 2011-11-10 Microsoft Corporation Depth map confidence filtering
US20120219236A1 (en) * 2011-02-28 2012-08-30 Sony Corporation Method and apparatus for performing a blur rendering process on an image
US20120237114A1 (en) * 2011-03-16 2012-09-20 Electronics And Telecommunications Research Institute Method and apparatus for feature-based stereo matching
US20130187910A1 (en) * 2012-01-25 2013-07-25 Lumenco, Llc Conversion of a digital stereo image into multiple views with parallax for 3d viewing without glasses

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
1st Office Action of counterpart Chinese Patent Application No. 201410029673.X issued on Jun. 3, 2015.
Cheng Lei and Yee-Hong Yang, "Optical flow estimation on coarse-to-fine region-trees using discrete optimization," in Proc. IEEE 12th International Conference on Computer Vision (ICCV), pp. 1562-1569, Sep. 29-Oct. 2, 2009. *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160134859A1 (en) * 2013-02-06 2016-05-12 City Image Technology Ltd. 3D Photo Creation System and Method
US9544576B2 (en) * 2013-02-06 2017-01-10 City Image Technology Ltd. 3D photo creation system and method

Also Published As

Publication number Publication date
HK1189451A2 (en) 2014-06-06
HK1192107A2 (en) 2014-08-08
US9544576B2 (en) 2017-01-10
CN103974055B (en) 2016-06-08
CN103974055A (en) 2014-08-06
US20140219551A1 (en) 2014-08-07
HK1200254A1 (en) 2015-07-31
US20160134859A1 (en) 2016-05-12

Similar Documents

Publication Publication Date Title
US9544576B2 (en) 3D photo creation system and method
US8953023B2 (en) Stereoscopic depth mapping
US7557824B2 (en) Method and apparatus for generating a stereoscopic image
US9094675B2 (en) Processing image data from multiple cameras for motion pictures
EP1143747B1 (en) Processing of images for autostereoscopic display
US9031356B2 (en) Applying perceptually correct 3D film noise
TWI497980B (en) System and method of processing 3d stereoscopic images
US20090219383A1 (en) Image depth augmentation system and method
US20100085423A1 (en) Stereoscopic imaging
US20100091093A1 (en) Optimal depth mapping
WO2011132422A1 (en) Three-dimensional video display device and three-dimensional video display method
JP6060329B2 (en) Method for visualizing 3D image on 3D display device and 3D display device
US8094148B2 (en) Texture processing apparatus, method and program
KR20120053536A (en) Image display device and image display method
JP6585938B2 (en) Stereoscopic image depth conversion apparatus and program thereof
CN102547350A (en) Method for synthesizing virtual viewpoints based on gradient optical flow algorithm and three-dimensional display device
KR101377960B1 (en) Device and method for processing image signal
JP2006254240A (en) Stereoscopic image display apparatus, and method and program therefor
KR101994472B1 (en) Method, apparatus and recording medium for generating mask and both-eye images for three-dimensional image
KR101794492B1 (en) System for displaying multiview image
JP5088973B2 (en) Stereo imaging device and imaging method thereof
JP6200316B2 (en) Image generation method, image generation apparatus, and image generation program
CN108769662B (en) Multi-view naked eye 3D image hole filling method and device and electronic equipment
JP4892105B1 (en) Video processing device, video processing method, and video display device
JP6768431B2 (en) Image generator and program

Legal Events

Date Code Title Description
AS Assignment

Owner name: CITY IMAGE TECHNOLOGY LTD., HONG KONG

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:TANG, SY SEN;REEL/FRAME:032165/0858

Effective date: 20140204

STCF Information on status: patent grant

Free format text: PATENTED CASE

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, MICRO ENTITY (ORIGINAL EVENT CODE: M3551); ENTITY STATUS OF PATENT OWNER: MICROENTITY

Year of fee payment: 4

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 8TH YEAR, MICRO ENTITY (ORIGINAL EVENT CODE: M3552); ENTITY STATUS OF PATENT OWNER: MICROENTITY

Year of fee payment: 8