US8908012B2 - Electronic device and method for creating three-dimensional image - Google Patents


Info

Publication number
US8908012B2
Authority
US
United States
Prior art keywords
image
dimensional
captured
outline
unit
Prior art date
Legal status
Expired - Fee Related, expires
Application number
US13/548,224
Other versions
US20130271572A1 (en)
Inventor
Mao-Kuo Hsu
Current Assignee
Hon Hai Precision Industry Co Ltd
Original Assignee
Hon Hai Precision Industry Co Ltd
Priority date
Filing date
Publication date
Application filed by Hon Hai Precision Industry Co Ltd filed Critical Hon Hai Precision Industry Co Ltd
Assigned to HON HAI PRECISION INDUSTRY CO., LTD. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: HSU, MAO-KUO
Publication of US20130271572A1
Application granted
Publication of US8908012B2

Classifications

    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/50 Depth or shape recovery
    • G06T7/55 Depth or shape recovery from multiple images
    • G06T7/564 Depth or shape recovery from multiple images from contours
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20 Image signal generators
    • H04N13/204 Image signal generators using stereoscopic image cameras
    • H04N13/243 Image signal generators using stereoscopic image cameras using three or more 2D image sensors
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20 Image signal generators
    • H04N13/271 Image signal generators wherein the generated image signals comprise depth maps or disparity maps
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N2013/0074 Stereoscopic image analysis
    • H04N2013/0081 Depth or disparity estimation from stereoscopic image signals



Abstract

An electronic device for creating a three-dimensional image includes a number of image capturing units, an outline detecting unit, a coordinate determining unit, and an image synthesizing unit. Each image capturing unit captures an object located in one corresponding direction of a three-dimensional scene at different focal lengths, producing a number of images of the object in that direction. The outline detecting unit detects an outline of the object in each captured image. The coordinate determining unit determines three-dimensional coordinates of each point on the detected outline. The image synthesizing unit synthesizes the detected outlines of objects from the images captured in the same direction according to the three-dimensional coordinates of the outlines, creates a three-dimensional image along each direction from the corresponding synthesized outlines, and stitches the three-dimensional images of the different directions together to obtain a combined image.

Description

BACKGROUND
1. Technical Field
The present disclosure relates to the field of image capturing, and particularly, to an electronic device and method for creating a three-dimensional (3D) image.
2. Description of Related Art
Since two-dimensional (2D) images lack realism due to the absence of depth cues, many techniques have been devised for producing images capable of presenting a three-dimensional effect. A well-known stereoscopic photographic camera utilizes two objective lenses separated by a fixed distance to capture a left-eye image and a right-eye image of the object or scene being photographed; the two images are then synthesized to form a 3D image. Other such cameras use a single objective lens moved from one location to another to obtain the 2D images, which are then synthesized into a 3D image.
Although these types of cameras are somewhat useful, a new image capturing device is still needed.
BRIEF DESCRIPTION OF THE DRAWINGS
Many aspects of the embodiments can be better understood with reference to the following drawings. The components in the drawings are not necessarily drawn to scale, the emphasis instead being placed upon clearly illustrating the principles of the embodiments. Moreover, in the drawings, like reference numerals designate corresponding parts throughout the several views.
FIG. 1 is a block diagram of an electronic device, in accordance with an embodiment.
FIG. 2 is a schematic view showing an arrangement of three imaging units of the electronic device of FIG. 1 with respect to a three-dimensional scene.
FIG. 3 is a schematic view showing two images captured by one imaging unit of FIG. 2.
FIG. 4 is a schematic view showing a three-dimensional image synthesized from the two-dimensional images of FIG. 3.
FIG. 5 is a flowchart of a method for creating a three-dimensional image, in accordance with an embodiment.
DETAILED DESCRIPTION
FIG. 1 is a block diagram of an electronic device 10 according to an exemplary embodiment. The electronic device 10 may be a digital camera, a tablet computer, or a mobile phone that has an image-capturing function. In the embodiment, the electronic device 10 includes more than one image capturing unit 11, an outline detecting unit 12, a coordinate determining unit 13, and an image synthesizing unit 14.
Each image capturing unit 11, including a camera lens and an image sensor such as a CCD or CMOS image sensor, captures the object located in its corresponding direction of a three-dimensional scene at different focal lengths, producing a variety of images of the object in that direction. The focal length may be determined by various known technologies, such as auto-focusing.
In the embodiment, the electronic device 10 includes three image capturing units 11, and each image capturing unit 11 includes a camera lens. As shown in FIG. 2, the optical axes of the camera lenses are arranged perpendicular to each other and respectively parallel with the X, Y, and Z axes of the three-dimensional scene, such that the three camera lenses can capture images of objects along the X, Y, and Z axes of the three-dimensional scene. Furthermore, the three camera lenses are capable of rotating with respect to each other by 180 degrees, such that they can also capture images of objects along the −X, −Y, and −Z axes of the three-dimensional scene. Thus, the three camera lenses can capture images of objects located in all directions of the three-dimensional scene. However, the number of image capturing units 11 can be varied according to need, and the optical axes of the camera lenses need not be coaxial with the X, Y, and Z axes of the three-dimensional scene; for example, the optical axes of the three camera lenses may be angled with respect to each other by 120 degrees.
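The direction coverage described above can be sketched with vectors; a toy illustration (not from the patent) of how three mutually perpendicular lenses, each rotatable by 180 degrees, cover all six axis directions:

```python
import numpy as np

# Optical axes of the three lenses, parallel to the X, Y, and Z axes
# of the scene (a toy model of the arrangement in FIG. 2).
axes = [np.array([1, 0, 0]), np.array([0, 1, 0]), np.array([0, 0, 1])]

# Rotating a lens by 180 degrees reverses its optical axis, so each
# lens covers two opposite directions; together the three lenses see
# along +X, -X, +Y, -Y, +Z, and -Z.
directions = [sign * axis for axis in axes for sign in (1, -1)]

print(len(directions))  # six directions in total
```

With only three lens mounts, full coverage therefore depends on the 180-degree rotation; the 120-degree arrangement mentioned as an alternative trades that full coverage for a different field layout.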
The outline detecting unit 12 detects the outline of the object in each captured image. In the embodiment, each captured image includes a background region and an object region of the object, located within the background region and appearing as if placed over the background region. The outline detecting unit 12 detects the outline of the object according to the brightness difference between the object region and the background region. In the embodiment, the electronic device 10 further includes an image processing unit 15. The image processing unit 15 converts each captured image into a binary image, in which each pixel is either black or white based on its original grayscale value, and the outline detecting unit 12 identifies the boundary between the white region and the black region as the outline of the object.
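A minimal sketch of this binarize-then-trace step, assuming a fixed grayscale threshold and hypothetical helper names (the patent does not specify an algorithm): each pixel is thresholded, and outline pixels are white pixels with at least one black 4-neighbour.

```python
import numpy as np

def binarize(gray, threshold=128):
    """Binary image: True (white) where the pixel's original grayscale
    exceeds the threshold, False (black) elsewhere."""
    return gray > threshold

def outline(binary):
    """Boundary between the white and black regions: white pixels that
    have at least one black 4-neighbour."""
    h, w = binary.shape
    edge = np.zeros((h, w), dtype=bool)
    for y in range(h):
        for x in range(w):
            if not binary[y, x]:
                continue
            for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w and not binary[ny, nx]:
                    edge[y, x] = True
                    break
    return edge

# A bright 3x3 object on a dark background: the outline is the ring of
# object pixels, i.e. all of the block except its centre.
gray = np.zeros((5, 5)); gray[1:4, 1:4] = 200
edge = outline(binarize(gray))
print(int(edge.sum()))  # 8 outline pixels
```

A production implementation would more likely use an adaptive threshold and a contour tracer, but the white/black boundary idea is the same.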
FIG. 3 shows, as an example, a three-dimensional scene with a T-shaped object attached to a wall surface perpendicular to the X axis of the scene. When the scene is captured several times, at different focal lengths, by one image capturing unit 11 whose optical axis is coaxial with the X axis, a variety of images are obtained. Each captured image includes a background region of the wall surface and an object region of the T-shaped object located within the background region.
For example, the imaging unit 11 focuses in turn on the wall surface and on the T-shaped object to capture a first captured image and a second captured image. When focusing on the wall surface, the object region of the first captured image has a greater brightness than the background region, so the image processing unit 15 causes the object region M1 to be displayed in white and the background region M2 in black (see FIG. 3A). Similarly, when focusing on the T-shaped object, the object region of the second captured image has a lower brightness than the background region, so the image processing unit 15 causes the object region M1 to be displayed in black and the background region M2 in white (see FIG. 3B). The outline detecting unit 12 can then obtain the T-shaped boundary from the brightness difference between the white and black regions in each captured image, and identify the obtained T-shaped boundary as the outline of the T-shaped object in that image.
The coordinate determining unit 13 determines the three-dimensional coordinates of each point of the detected outline; the three-dimensional coordinates include two plane coordinates of the point in the captured image and a depth coordinate of the point along the image-capturing direction. In this embodiment, the depth coordinate of the point is associated with the distance between the image capturing unit 11 and the object (namely, the object distance). In this case, the coordinate determining unit 13 determines the focal length of the image capturing unit 11 at the moment the object is captured, and then calculates the object distance from the determined focal length, such that the depth coordinate of each point is obtained from the calculated object distance.
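The patent does not state the depth formula; one plausible sketch is the thin-lens relation 1/f = 1/u + 1/v solved for the object distance u, given the in-focus focal length f and the lens-to-sensor (image) distance v. Both parameter names here are assumptions for illustration.

```python
def object_distance(f_mm, image_dist_mm):
    """Solve the thin-lens equation 1/f = 1/u + 1/v for the object
    distance u, where f is the focal length at which the point is in
    focus and v is the lens-to-sensor (image) distance."""
    return f_mm * image_dist_mm / (image_dist_mm - f_mm)

# A 50 mm lens focused with the sensor 52 mm behind it places the
# in-focus plane 1300 mm in front of the lens.
print(object_distance(50.0, 52.0))  # 1300.0
```

Whichever relation the device actually uses, the point is that the focus setting that renders the outline sharp encodes its depth coordinate.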
The image synthesizing unit 14 synthesizes the detected outlines of the object from the images captured in the same direction according to the three-dimensional coordinates of the outlines, creates a three-dimensional image along each direction from the corresponding synthesized outlines, which presents a three-dimensional effect of the object, and then stitches the three-dimensional images of the different directions together to obtain a combined image of the three-dimensional scene. FIG. 4 illustrates the image synthesizing unit 14 in accordance with one embodiment. The image synthesizing unit 14 synthesizes the two outlines detected from the first and second captured images according to the three-dimensional coordinates of the outlines; thus, a three-dimensional image including the three-dimensional T-shaped object along the X axis is obtained. Similarly, other three-dimensional images along the Y, Z, −X, −Y, and −Z axes are obtained, and a combined image is created by stitching the three-dimensional images of the different directions together.
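A minimal sketch of the synthesis step, with a hypothetical data layout (the patent describes the unit only functionally): each detected outline contributes its plane coordinates at the depth computed for that image's focus, and the merged point set is the 3D outline for that direction.

```python
def synthesize(outlines):
    """Merge outline point sets detected from images captured in the
    same direction into one set of 3D points. Each entry pairs a list
    of (x, y) plane coordinates with the depth at that image's focus."""
    cloud = set()
    for points, depth in outlines:
        for x, y in points:
            cloud.add((x, y, depth))
    return sorted(cloud)

# Toy coordinates: the wall outline at depth 3.0 and the object's face
# outline at depth 2.5 combine into one 3D outline along this axis.
wall = [(1, 1), (1, 2), (2, 1)]
face = [(1, 1), (1, 2)]
points_3d = synthesize([(wall, 3.0), (face, 2.5)])
print(len(points_3d))  # 5 distinct 3D points
```

The same plane coordinates at two depths (as for the T-shaped object's front face and the wall behind it) remain distinct points, which is what gives the synthesized image its depth relief.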
FIG. 5 is a flowchart of a method of creating a three-dimensional image implemented by the electronic device 10 of FIG. 1 according to an exemplary embodiment.
In step S501, each image capturing unit 11 captures the object located in its corresponding direction of a three-dimensional scene at different focal lengths, producing a variety of images of the object in that direction.
In step S502, the image processing unit 15 converts each captured image into a binary image, in which each pixel is either black or white based on its original grayscale value.
In step S503, the outline detecting unit 12 identifies the boundary between two regions respectively displayed in white and black as the outline of the object.
In step S504, the coordinate determining unit 13 determines the three-dimensional coordinates of each point of the detected outline.
In step S505, the image synthesizing unit 14 synthesizes the detected outlines of the object from the captured images captured in the same direction together according to the three-dimensional coordinates of the outlines, and creates a three-dimensional image along each direction with the corresponding synthesized outlines which can present a three-dimensional effect of the object.
In step S506, the image synthesizing unit 14 stitches the three-dimensional images of different directions together to obtain a combined image of the three-dimensional scene.
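Steps S501 through S506 can be strung together as a toy end-to-end run on synthetic data (all helper names, thresholds, and depth values are hypothetical; the patent defines functional units, not code):

```python
import numpy as np

def binarize(gray, threshold=128):          # S502: black/white per pixel
    return gray > threshold

def boundary(binary):                       # S503: white/black boundary
    padded = np.pad(binary, 1, mode='edge')
    any_black_neighbour = (~padded[:-2, 1:-1] | ~padded[2:, 1:-1]
                           | ~padded[1:-1, :-2] | ~padded[1:-1, 2:])
    return binary & any_black_neighbour

# S501: two synthetic captures along one direction, one focused on the
# wall (object distance 3.0 m) and one on the object (2.5 m); focusing
# on the object inverts which region appears bright, as in FIG. 3.
gray = np.zeros((5, 5)); gray[1:4, 1:4] = 200
captures = [(gray, 3.0), (255 - gray, 2.5)]

# S504-S505: attach each outline point's depth and merge the outlines
# of the two captures into one 3D outline for this direction.
cloud = set()
for image, depth in captures:
    for y, x in zip(*np.nonzero(boundary(binarize(image)))):
        cloud.add((int(x), int(y), depth))
print(len(cloud))

# S506 would then stitch the clouds obtained along the six directions
# into a combined image of the whole scene.
```

The stitch of step S506 is omitted here because it needs the per-direction camera poses, which the flowchart leaves to the device geometry of FIG. 2.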
It is to be understood that, even though information and advantages of the present embodiments have been set forth in the foregoing description, together with details of the structures and functions of the present embodiments, the disclosure is illustrative only, and changes may be made in detail, especially in matters of shape, size, and arrangement of parts, within the principles of the present embodiments, to the full extent indicated by the broad general meaning of the terms in which the appended claims are expressed.

Claims (9)

What is claimed is:
1. An electronic device with three-dimensional image creating function, the electronic device comprising:
a plurality of image capturing units, each image capturing unit to capture an object located in one corresponding direction of a three-dimensional scene with different focal length and capture a plurality of images of the object in each direction;
an outline detecting unit to detect an outline of the object in each captured image;
a coordinate determining unit to determine three-dimensional coordinates of each point of the detected outline; and
an image synthesizing unit configured to:
synthesize the detected outlines of objects from the captured images captured in the same direction together according to the three-dimensional coordinates of the outlines;
create a three-dimensional image along each direction with the corresponding synthesized outlines, and
stitch the three-dimensional images of different directions together to obtain a combined image of the three-dimensional scene.
2. The electronic device of claim 1, wherein the number of the plurality of image capturing units is three, each image capturing unit comprises a camera lens, the optical axes of the three camera lenses are arranged perpendicular to each other, and the three camera lenses are capable of rotating with respect to each other by 180 degrees.
3. The electronic device of claim 1, wherein the number of the plurality of image capturing units is three, each image capturing unit comprises a camera lens, and the optical axes of the three camera lenses are angled with respect to each other by 120 degrees.
4. The electronic device of claim 1, wherein each captured image comprises a background region and an object region of the object located within the background region, and the outline detecting unit is configured for detecting the outline of the object according to a brightness difference between the object region and the background region.
5. The electronic device of claim 4, further comprising an image processing unit, wherein the image processing unit is configured for converting each captured image into a binary image that has only black and white colors for each pixel based on an original grayscale of the pixel, and the outline detecting unit is further configured for identifying a boundary between two regions respectively displayed in white and black as the outline of the object.
6. The electronic device of claim 1, wherein the three-dimensional coordinates of each point comprise a depth coordinate of the point along the image captured direction, and the depth coordinate of the point is determined according to a distance between the image capturing unit and the object when the object is captured.
7. A method for creating a three-dimensional image, comprising:
capturing an object located in each direction of a three-dimensional scene with different focal length and capturing a plurality of images of the object in each direction;
detecting an outline of the object in each captured image;
determining three-dimensional coordinates of each point of the detected outline;
synthesizing the detected outlines of objects from the captured images captured in the same direction together according to the three-dimensional coordinates of the outlines;
creating a three-dimensional image along each direction with the corresponding synthesized outlines; and
stitching the three-dimensional images of different directions together to obtain a combined image of the three-dimensional scene.
8. The method of claim 7, wherein each captured image comprises a background region and an object region of the object located within the background region, and step “detecting an outline of the object in each captured image” comprises:
detecting the outline of the object according to a brightness difference between the object region and the background region.
9. The method of claim 7, wherein the three-dimensional coordinates of each point comprise a depth coordinate of the point along the image captured direction, and the depth coordinate of the point is determined according to a distance between the image capturing unit and the object when the object is captured.
US13/548,224 2012-04-13 2012-07-13 Electronic device and method for creating three-dimensional image Expired - Fee Related US8908012B2 (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
TW101113221A TW201342303A (en) 2012-04-13 2012-04-13 Three-dimensional image obtaining system and three-dimensional image obtaining method
TW101113221 2012-04-13
TW101113221A 2012-04-13

Publications (2)

Publication Number Publication Date
US20130271572A1 US20130271572A1 (en) 2013-10-17
US8908012B2 true US8908012B2 (en) 2014-12-09

Family

ID=49324709

Family Applications (1)

Application Number Title Priority Date Filing Date
US13/548,224 Expired - Fee Related US8908012B2 (en) 2012-04-13 2012-07-13 Electronic device and method for creating three-dimensional image

Country Status (2)

Country Link
US (1) US8908012B2 (en)
TW (1) TW201342303A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10380750B2 (en) * 2017-07-13 2019-08-13 Hon Hai Precision Industry Co., Ltd. Image depth calculating device and method
CN111862302A (en) * 2019-04-12 2020-10-30 北京城市网邻信息技术有限公司 Image processing method, image processing apparatus, object modeling method, object modeling apparatus, image processing apparatus, object modeling apparatus, and medium

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10313656B2 (en) 2014-09-22 2019-06-04 Samsung Electronics Company Ltd. Image stitching for three-dimensional video
US11205305B2 (en) 2014-09-22 2021-12-21 Samsung Electronics Company, Ltd. Presentation of three-dimensional video
CN111381357B (en) * 2018-12-29 2021-07-20 中国科学院深圳先进技术研究院 Image three-dimensional information extraction method, object imaging method, device and system

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100231711A1 (en) * 2009-03-13 2010-09-16 Omron Corporation Method for registering model data for optical recognition processing and optical sensor
US8254667B2 (en) * 2007-02-16 2012-08-28 Samsung Electronics Co., Ltd. Method, medium, and system implementing 3D model generation based on 2D photographic images
US8577176B2 (en) * 2009-07-28 2013-11-05 Canon Kabushiki Kaisha Position and orientation calibration method and apparatus



Also Published As

Publication number Publication date
TW201342303A (en) 2013-10-16
US20130271572A1 (en) 2013-10-17


Legal Events

Date Code Title Description
AS Assignment

Owner name: HON HAI PRECISION INDUSTRY CO., LTD., TAIWAN

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:HSU, MAO-KUO;REEL/FRAME:028541/0055

Effective date: 20120712

FEPP Fee payment procedure

Free format text: MAINTENANCE FEE REMINDER MAILED (ORIGINAL EVENT CODE: REM.)

LAPS Lapse for failure to pay maintenance fees

Free format text: PATENT EXPIRED FOR FAILURE TO PAY MAINTENANCE FEES (ORIGINAL EVENT CODE: EXP.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

STCH Information on status: patent discontinuation

Free format text: PATENT EXPIRED DUE TO NONPAYMENT OF MAINTENANCE FEES UNDER 37 CFR 1.362

FP Lapsed due to failure to pay maintenance fee

Effective date: 20181209