US9615081B2 - Method and multi-camera portable device for producing stereo images - Google Patents
- Publication number
- US9615081B2 (application US14/523,902)
- Authority
- US
- United States
- Prior art keywords
- image
- image sensor
- initial
- portable device
- picture
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased, expires
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/20—Image signal generators
- H04N13/204—Image signal generators using stereoscopic image cameras
- H04N13/239—Image signal generators using stereoscopic image cameras using two 2D image sensors having a relative position equal to or related to the interocular distance
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/20—Image signal generators
- H04N13/296—Synchronisation thereof; Control thereof
Definitions
- the invention generally relates to the production of stereo images. More particularly, the present invention relates to a method and a multi-camera portable device for producing stereo images using an external mirror.
- the distance between a camera and a spatial point in a scene can be determined or well estimated from the position of the point within two or more associated images showing the same point, wherein the associated images are captured simultaneously.
- the distance calculation is still possible if one or more mirrors are arranged in the scene, and some of the images are captured in the mirror.
- the three dimensional (3D) position of a point can be computed from basic geometric relationships when the spatial relationship between the image recording device and the position and the parameters of the reflecting surfaces (e.g. mirrors) are known.
- the challenge in computing an unknown distance from multiple images using reflecting surfaces is called catadioptric stereo vision.
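Catadioptric stereo ultimately reduces to ordinary triangulation once the mirror geometry is folded into a virtual camera. As an illustration of the underlying principle (not the patent's mirror-specific method), the classic rectified two-camera case recovers depth from disparity; all numbers below are assumed for the example:

```python
# Depth from stereo disparity for an idealized, rectified two-camera pair.
# Illustrative numbers only; the patent's catadioptric setup first folds the
# mirror into a "virtual" second camera, after which the same principle applies.

def depth_from_disparity(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """A point at depth Z appears shifted by d = f * B / Z pixels between
    the two images, hence Z = f * B / d."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px

# 800 px focal length, 6.5 cm baseline, 40 px disparity -> 1.3 m depth.
z = depth_from_disparity(800.0, 0.065, 40.0)
```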
- U.S. Pat. No. 8,189,100 discloses a portable device comprising a first image sensor, a second image sensor configured to change position with respect to the first image sensor, a controller configured to control the position of the second image sensor, and an image processing module configured to process and combine images captured by the first and second image sensors.
- although this device is equipped with two image sensors to produce, for example, a stereo image, both image sensors directly capture an image of the real object, and no external mirror is used in the image generation process.
- a multi-camera portable device comprising at least two image sensors at opposite faces thereof, and further comprises at least a processor unit, a memory unit, a non-transitory data storage medium, a display unit and an input unit, wherein the multi-camera portable device is adapted to substantially simultaneously record, by its first image sensor, a first initial image containing a picture of an object arranged within the field of view of the first image sensor, and by its second image sensor, a second initial image containing (i) a picture of a mirrored view of the object appearing in a mirror and (ii) a picture of at least a portion of a mirrored view of the multi-camera portable device itself appearing in the mirror, thereby producing an initial pair of images, wherein said mirrored view of the second image sensor and said portion of the mirrored view of the multi-camera portable device are arranged within the field of view of the second image sensor.
- non-transitory data storage medium comprising a set of processor-readable instructions, said instructions being adapted, when executed by a processor of a multi-camera portable device, to carry out the steps of:
- FIG. 1 schematically illustrates the optical scheme of the image capturing arrangement including an object, a dual-camera portable device and a mirror.
- FIG. 2 is a flow diagram showing the major steps of the method according to the invention.
- FIG. 3 shows an exemplary pair of the initial images captured by two cameras of a multi-camera portable device.
- FIG. 4 shows a sub-image of the second initial image that is most similar to the first initial image and a search region in the second initial image where the picture of the mirrored view of the second camera is expected to be found.
- FIG. 5 shows a typical edge map of a portable device, the edge map showing predefined edge boxes.
- FIG. 6 shows the regions that are defined on the second initial image as being close to the second camera's center and the most similar image regions in the first initial image.
- FIG. 7 shows the corresponding epipolar lines of the initial images.
- FIG. 8 is a schematic block diagram of the multi-camera portable device according to the present invention.
- image means the product of image capturing performed by an imaging device, such as an image sensor or a camera
- picture means a visual representation of an object or person within a captured image.
- An image may be a still image or a frame of a video sequence (also referred to as video image).
- the term “virtual” is used in optical sense for any object that is apparently located in a mirrored space behind a mirror.
- FIG. 1 schematically illustrates the image capturing arrangement according to the present invention, which includes a multi-camera portable device 100 equipped with at least two cameras 102 , 104 , a mirror 110 and an object 120 for which a stereo image is to be produced.
- the portable device 100 has at least two cameras (or image sensors), at least one being arranged on a first face of the multi-camera portable device 100 and at least one other being arranged on a second face of the portable device 100 , the second face being opposite to the first face. It is particularly preferred that for the multi-camera portable device 100 , a dual-camera portable device equipped with a front camera 102 and a rear camera 104 is used.
- the multi-camera portable device 100 may be any kind of portable communication or computing device equipped with a front camera and a rear camera, such as a mobile phone, a smart phone, a phablet, a tablet PC, a notebook, or the like, or any kind of other multi-camera device with more than two cameras and adapted for producing a stereo image.
- the cameras 102 , 104 of the portable device 100 may capture still image snapshots and/or video sequences.
- An object 120 , for which a stereo image is to be produced, is arranged on one side of the portable device 100 so that a first one of the cameras 102 can directly record a first image showing said object.
- a mirror 110 is arranged on the opposite side of the portable device 100 so that the second camera 104 can record a second image showing a mirrored view of the object that appears in the mirror 110 .
- the portable device 100 , the mirror 110 and the object 120 must be arranged with respect to each other so that the real object 120 falls within the field of view of the first camera 102 , while a virtual counterpart of the object 120 accommodating in the mirrored space, i.e. the virtual object 121 , and a virtual counterpart of the portable device 100 accommodating in the mirrored space, i.e. the virtual device 101 , fall within the field of view of the second camera 104 .
- The major steps of the method of producing stereo images from an initial pair of images captured by the first and second cameras are shown in the flow diagram of FIG. 2 and will be explained with reference to FIG. 1 , which also presents the basic geometric concept applied in the stereo image generation scheme according to the present invention, as well as to FIGS. 3 to 7 illustrating various stages of the image processing.
- a mirror 110 and an object 120 are arranged with respect to the multi-camera portable device 100 having at least two image sensors or cameras as described above and shown in FIG. 1 , wherein the mirror 110 is arranged in front of said second camera 104 of the portable device 100 and the object 120 is arranged in front of the first camera 102 of the portable device 100 , said first and second cameras 102 , 104 being on the opposite faces of the device 100
- step S 204 still images are captured by both of the first and second cameras 102 , 104 simultaneously, or at time points substantially close to each other, in order to record an initial pair of associated images I 1 and I 2 .
- the two cameras 102 , 104 may be synchronized.
- a first initial image I 1 containing at least a picture O 1 of the real object 120 and a second initial image I 2 containing a picture D 2 of the mirrored view of the portable device 100 and a picture O 2 of the mirrored view of the real object 120 are recorded.
- An example of such an initial pair of images I 1 , I 2 can be seen in FIG. 3 .
- the coordinate transformation that maps the coordinate system of the first camera 102 into the coordinate system of the virtual second camera 105 (shown in FIG. 1 ) is determined to allow to match the initial images I 1 and I 2 for stereo image generation.
- This coordinate transformation is described by a so-called fundamental matrix F.
- Construction of the fundamental matrix F is based on the following definitions and equations. A spatial point having coordinates (X, Y, Z) in the coordinate system of the capturing camera is projected onto the image at the coordinates U x = (f/s)·X/Z and U y = (f/s)·Y/Z, where
- f is the focal length of the capturing camera, and
- s is the pixel size on the image sensors of the portable device. This parameter is specified by the manufacturer of the image sensor and its value is typically about 1 micron.
- a relative focal length H is defined as the ratio of the focal length and the pixel size: H = f/s.
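With the typical values given above (a focal length of a few millimetres, a pixel size around 1 micron), the relative focal length is a unitless number in the thousands. A minimal sketch with assumed example values, not data for any particular device:

```python
# Relative focal length H = f / s. The two lengths must be expressed in the
# same unit before dividing; the example values below are assumptions.

def relative_focal_length(f_mm: float, pixel_size_um: float) -> float:
    return (f_mm * 1000.0) / pixel_size_um   # convert f to micrometres

H = relative_focal_length(4.0, 1.0)   # f = 4 mm, s = 1 micron -> H = 4000
```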
- a first matrix transformation that maps the coordinate system of the first camera 102 into the coordinate system of the second camera 104 and a second matrix transformation that maps the coordinate system of the real second camera 104 into the coordinate system of the virtual second camera 105 are to be obtained.
- the first transformation is defined by a device matrix K, which is a constant matrix characteristic to any particular portable device 100
- the second one is defined by a mirror transformation matrix M that should be determined using the captured initial images I 1 , I 2 .
- the mirror transformation matrix M depends only on a vector m that points from the focal point of the second camera 104 to the focal point of the virtual second camera 105 .
- the mirror transformation matrix M has the following well-known form:
- M = ( I − (2/‖m‖²)·m·mᵀ , m ; 0ᵀ , 1 ), i.e. a block matrix whose upper-left 3 by 3 block is the reflection I − (2/‖m‖²)·m·mᵀ, whose upper-right column is m and whose bottom row is (0ᵀ 1), wherein I is a 3 by 3 identity matrix.
- the matrix M, which is a 4 by 4 homogeneous transformation matrix, is mapping in the 3D homogeneous coordinate system.
- the vector m depends on four parameters, i.e. the center point (c x , c y ) of the picture C 2 of the mirrored view of the second camera 104 , the distance ratio of two points of the portable device 100 , and the relative focal length H 2 of the second camera 104 .
- the fundamental matrix F (that is mapping from the first camera 102 to the virtual second camera 105 ) is a matrix product of the mirror transformation matrix M and the device matrix K.
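The reflection structure of M can be sketched directly from the formula above. The snippet below builds M from an assumed mirroring vector m and checks the defining property of a reflection (applying it twice yields the identity); the device matrix K is left as an identity placeholder, since the real K is specific to the particular device:

```python
import numpy as np

def mirror_matrix(m):
    """4x4 homogeneous mirror transformation: upper-left block I - (2/|m|^2) m m^T,
    translation column m, bottom row (0 0 0 1)."""
    m = np.asarray(m, dtype=float)
    M = np.eye(4)
    M[:3, :3] = np.eye(3) - (2.0 / m.dot(m)) * np.outer(m, m)
    M[:3, 3] = m
    return M

# Assumed vector m: mirror squarely facing the camera, virtual camera 1 m away.
M = mirror_matrix([0.0, 0.0, 1.0])
assert np.allclose(M @ M, np.eye(4))   # a reflection is its own inverse

# Per the description, the fundamental matrix is the product of M and the
# device matrix K; the identity K below is a placeholder, not real device data.
K = np.eye(4)
F = M @ K
```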
- step S 206 the center (c x , c y ) of the picture C 2 of the mirrored view of the second camera 104 is found in the second initial image I 2 by the following steps.
- step S 208 the distance ratio of a first distance between two points of the portable device 100 in reality and a second distance of the respective points in the second initial image is determined to obtain the direction and the length of a mirroring vector pointing from the center of the second camera 104 to the center of the virtual second camera 105 (see FIG. 1 ).
- let p 1 and p 2 be arbitrary points shown in the first initial image I 1 . Assuming that both arbitrary points p 1 , p 2 have the same distance d from the second camera 104 in the first camera's coordinate system, their actual coordinates P 1 , P 2 in the coordinate system of the first camera 102 can be readily calculated.
- let q 1 and q 2 be the two (mirrored) points corresponding to p 1 and p 2 , respectively, shown in the second initial image I 2 . It is noted that these points q 1 , q 2 can be approximately identified in the second image I 2 by using the position of the best matching sub-image Is within the second initial image I 2 .
- the actual coordinates Q 1 , Q 2 of the points q 1 , q 2 depend on their z-coordinate (depth) in the coordinate system of the virtual second camera 105 , but since they coincide with P 1 and P 2 and the distance therebetween is known, the coordinates Q 1 , Q 2 in the coordinate system of the virtual second camera 105 can also be readily calculated.
- FIG. 4 shows the sub-image Is of the second initial image I 2 that is most similar to the first initial image I 1 and the search region 150 estimating the expected location of the picture C 2 of the mirrored view of the second camera 104 in the second initial image I 2 .
- let U 1 and U 2 be the image coordinates of the two selected points u 1 , u 2 of the device 100 .
- the coordinates of these points u 1 , u 2 are denoted by R 1 and R 2 , respectively, in the coordinate system of the virtual second camera 105 .
- the depth (z-coordinate) of these points u 1 , u 2 in the coordinate system of the virtual second camera 105 is roughly equal. Accordingly,
- R z = H 2 ·d u1,u2 /d U1,U2 , wherein d u1,u2 is the real distance between the two selected points u 1 , u 2 of the portable device 100 and d U1,U2 is the distance between their image coordinates U 1 , U 2 in the second initial image I 2 .
- R z is equal to the length of m. (If one of the selected points u 1 , u 2 on the portable device 100 was the second camera's center point, then R z would coincide with the end point of vector m.)
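Numerically, the relation above is a one-liner. A sketch with assumed values (a relative focal length, a known distance between two device points, and their measured pixel distance in the mirror image; none are measurements from the patent):

```python
# |m| = R_z = H2 * d_real / d_image, per the distance-ratio relation above.
# All numbers are illustrative assumptions.

def mirroring_vector_length(H2: float, d_real_m: float, d_image_px: float) -> float:
    return H2 * d_real_m / d_image_px

# H2 = 3000, 12 mm between two device points, pictured 72 px apart -> 0.5 m.
length_m = mirroring_vector_length(3000.0, 0.012, 72.0)
```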
- the pictures of the two selected points u 1 , u 2 of the portable device 100 in the second initial image I 2 are to be found.
- one of the possible techniques is to find the pictures of the corners of the portable device 100 or, for example, the distance d between the picture of the flash and the picture of the second camera's lens in the second initial image I 2 , which can be performed by any appropriate edge-detection technique.
- a filtering is carried out for all of the possible positions of the picture C 2 of the mirrored view of the second camera 104 in the search area 150 of the second initial image I 2 (this search region is defined by the output of step S 206 ) to find a limited set of likely positions and distance ratios thereof. This can be done, for example, by averaging edge-strength values in various regions of the search area that are specific to the particular portable device 100 . For speeding up this processing, a pre-stored picture of an edge map of the portable device may be used.
- in FIG. 5 , a typical edge map of a portable device is shown, wherein the edge map contains a picture D 2 of the mirrored view of the portable device with marked side edge boxes 140 , 141 , a frame box 142 of the second camera lens and a box 143 surrounding the flash lamp of the portable device.
- using the specific camera positions (center and size of the picture C 2 ), the previously defined search region 150 in the second initial image I 2 can be located.
- the estimated position of the picture C 2 of the mirrored view of the second camera 104 in the second initial image I 2 is determined by searching for the specific rounded square contour of the frame of the second camera 104 . This may be done using a standard generalized Hough transform, resulting in the recognition of the picture of said specific portion of the portable device 100 in the second initial image I 2 .
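The patent names a generalized Hough transform for this contour search; a simpler stand-in that conveys the idea of localizing the camera's picture inside the search region is normalized cross-correlation template matching. The pure-numpy sketch below uses synthetic data and an assumed template:

```python
import numpy as np

def match_template_ncc(search: np.ndarray, templ: np.ndarray):
    """Slide templ over search and return the top-left offset with the highest
    zero-mean normalized cross-correlation. Pure-numpy sketch; a production
    implementation would use an optimized routine instead of Python loops."""
    th, tw = templ.shape
    t = templ - templ.mean()
    tn = np.linalg.norm(t)
    best, best_pos = -2.0, (0, 0)
    for y in range(search.shape[0] - th + 1):
        for x in range(search.shape[1] - tw + 1):
            w = search[y:y + th, x:x + tw]
            wz = w - w.mean()
            denom = np.linalg.norm(wz) * tn
            score = float((wz * t).sum() / denom) if denom > 0 else 0.0
            if score > best:
                best, best_pos = score, (y, x)
    return best_pos, best

# Synthetic search region with a cross-shaped "camera frame" planted at (5, 7).
region = np.zeros((20, 20))
templ = np.array([[0., 1., 0.], [1., 1., 1.], [0., 1., 0.]])
region[5:8, 7:10] = templ
pos, score = match_template_ncc(region, templ)
```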
- in step S 210 , after having obtained the direction and the length of the vector m, only the focal length (or the corresponding relative focal length H 2 ) of the second camera 104 remains to be determined for the mirror transformation matrix M.
- the actual relative focal length H 2 of the second camera 104 may be obtained from a controller circuit of the second camera 104 , and therefore it may be regarded known. However, when the relative focal length H 2 is not available, a good estimation for the focal length thereof may be given by various estimation techniques.
- the focal length f 2 of the second camera 104 may be estimated in the following way. In general, it is carried out by creating several fundamental matrices F, with different values of the relative focal length H 2 (assuming that the pixel size s of the second camera is known), generating a series of candidate stereo image pairs, in which the first initial image I 1 is mapped into the coordinate system of the virtual second camera 105 with the various fundamental matrices F i , followed by choosing the stereo image pair that corresponds to the best stereo result. The capturing focal length of the second camera 104 is then determined from the relative focal length belonging to the best fundamental matrix F i of the best matching stereo image pair.
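The estimation can be sketched as a one-dimensional search over candidate relative focal lengths, scoring each candidate by how well the resulting stereo pair matches. The scoring callback below is a hypothetical stand-in: in the real method it would build the fundamental matrix F i from the candidate H 2 and correlate the associated sub-windows, whereas here a synthetic score with a known peak is used:

```python
import numpy as np

def estimate_relative_focal_length(candidates, score_fn):
    """Try each candidate H2, score the stereo match it induces, keep the best.
    score_fn is a hypothetical callback standing in for the correlation test
    between sub-windows mapped by the fundamental matrix built from H2."""
    best_H, best_score = None, -np.inf
    for H2 in candidates:
        s = score_fn(H2)
        if s > best_score:
            best_H, best_score = H2, s
    return best_H

# Toy stand-in score peaking at an assumed "true" H2 = 3400.
true_H2 = 3400.0
score = lambda H2: -abs(H2 - true_H2)
H_best = estimate_relative_focal_length(np.arange(2000.0, 5001.0, 100.0), score)
```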
- One possible technique to find the best estimate for the relative focal length H 2 of the second camera 104 uses the fact that the stereo disparity has a very low resolution near the epipole. Since the position of the epipole (i.e. the image of the focal point of the first camera 102 ) in the second initial image I 2 is known once the picture C 2 of the mirrored view of the second camera 104 has been found within the second initial image I 2 , the region of the first initial image I 1 that best matches the picture near the epipole, where the portable device 100 does not occlude the object 120 , can be sought.
- although the picture of the portable device 100 does not appear in the first initial image I 1 , the corresponding region can still be found, for example, by using various fundamental matrices F i and checking the correlation between associated sub-windows of the initial image pair.
- search regions 162 , 163 are shown in FIG. 6 , wherein the search regions 162 are the regions in the second image I 2 for which the best matching search regions 163 in the first image I 1 were previously found. Regardless of the scale (defined by the fundamental matrix), the search regions 163 are expected to match quite exactly, i.e. the color image correlation between the windows 162 and 163 is expected to be high. These regions 163 are searched for in the first image I 1 with fundamental matrices defined by different values of the focal length of the second camera 104 .
- the focal length that refers to the best match, i.e. the highest correlation between the associated sub-windows, via the corresponding fundamental matrix F i , is the estimate for the actual focal length.
- This algorithm results in a stereo calibration matrix F that can be visualized by the epipolar lines shown in FIG. 7 , in which the lines with the same reference sign (e.g. a, a′; or b, b′, etc.) are mutually corresponding epipolar lines, i.e. any point on one of the initial images I 1 , I 2 should belong to a real 3D point that has its picture on the other one of the images I 1 , I 2 along the corresponding epipolar line (assuming that the mentioned point is not occluded).
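The epipolar constraint that these lines visualize states that corresponding points x 1 , x 2 satisfy x 2 ᵀ·F·x 1 = 0, so a point in one image must lie on the line F·x 1 in the other. A sanity check with the well-known fundamental matrix of a rectified, horizontally displaced pair (an illustrative textbook case, not the patent's F):

```python
import numpy as np

# For a rectified, purely horizontal camera pair, the fundamental matrix has
# the classic form below; corresponding points then share the same image row.
F = np.array([[0., 0., 0.],
              [0., 0., -1.],
              [0., 1., 0.]])

x1 = np.array([120., 45., 1.])     # point in the first image (homogeneous)
line2 = F @ x1                     # its epipolar line in the second image

x2_on = np.array([300., 45., 1.])  # same row -> lies on the epipolar line
x2_off = np.array([300., 60., 1.])

assert abs(float(x2_on @ line2)) < 1e-9   # x2^T F x1 = 0 holds
assert abs(float(x2_off @ line2)) > 1.0   # off-row point violates it
```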
- a depth estimation for a captured object may be performed to generate a depth image of the object.
- when the object arranged in front of the portable device, i.e. in front of its first camera, is a human face, several characterizing parameters of the face can be estimated using the depth image thereof, including facial expression, position of the nose, the mouth, the eyes, lips, etc.
- augmented reality: for example, a software application running on the portable device may be used to take synchronized photos or video sequences by means of the front and rear cameras of a multi-camera portable device using a mirror, and additional virtual objects, such as virtual glasses over the face of a user, may be displayed by the software over those images or video sequences.
- the present invention relates to a multi-camera portable device for producing stereo images using an external mirror.
- the multi-camera portable device has at least two cameras 302 , 304 at opposite faces thereof.
- the multi-camera portable device 300 has at least one front camera 302 on its front side and at least one rear camera 304 on its backside.
- the multi-camera portable device 300 is adapted for performing the above described method of producing stereo images when an object to be captured is arranged in front of its front-side camera 302 , and a mirror is arranged in front of its back-side camera 304 .
- the multi-camera portable device 300 further comprises all of those common components that are needed for the normal operation of such a device, including at least a processor unit 306 , a memory unit 308 , a non-transitory data storage medium 309 , a display unit 310 and an input unit 312 , wherein the multi-camera portable device 300 is configured to be capable of carrying out the steps of the methods according to the present invention.
- the device 300 comprises a set of computer readable instructions either in its memory unit 308 or its non-transitory data storage medium 309 , said instructions being adapted, when executed by the processor unit 306 , to carry out the portable device-related steps of the method of the present invention.
- the present invention relates to a non-transitory data storage medium comprising a set of processor-readable instructions, said instructions being adapted, when executed by a processor of a multi-camera portable device, to carry out the steps of:
Abstract
Description
-
- arranging the object in front of the first image sensor so that the object falls within the field of view of the first image sensor,
- arranging an external mirror in front of the second image sensor so that a mirrored view of the object and a mirrored view of the portable device fall within the field of view of the second image sensor,
- substantially simultaneously recording, by the first image sensor, a first initial image containing a picture of the object, and by the second image sensor, a second initial image containing (i) a picture of the mirrored view of the object appearing in the mirror and (ii) a picture of at least a portion of the mirrored view of the portable device appearing in the mirror, thereby producing an initial pair of images,
- finding the center of the picture of the mirrored view of the second image sensor within the second initial image,
- determining the distance ratio of a first distance between two points of the portable device in reality and a second distance of the respective points in the second initial image for obtaining the direction and the length of a mirroring vector, which points from the center of the second image sensor to the center of the virtual second image sensor,
- obtaining a capturing focal length of the second image sensor, and
- by using said mirroring vector and the capturing focal length of the second image sensor, transforming the coordinate system of the first image sensor into the coordinate system of the virtual second image sensor to generate a stereo image pair from the first initial image and the second initial image for the object.
-
- instructing a first image sensor of the device to record a first initial image containing a picture of an object arranged within the field of view of the first image sensor, and a second image sensor of the device to substantially simultaneously record a second initial image containing (i) a picture of a mirrored view of the object appearing in a mirror and (ii) a picture of at least a portion of a mirrored view of the multi-camera portable device itself appearing in the mirror, thereby to produce an initial pair of images, wherein said mirrored view of the second image sensor and said portion of the mirrored view of the device are arranged within the field of view of the second image sensor,
- finding the center of the picture of the mirrored view of the second image sensor within the second initial image,
- determining a distance ratio of a first distance between two points of the device in reality and a second distance of the respective points in the second initial image for obtaining a direction and a length of a mirroring vector, which points from the center of the second image sensor to the center of the virtual second image sensor,
- obtaining a capturing focal length of the second image sensor, and
- by using said mirroring vector and the capturing focal length of the second image sensor, transforming the coordinate system of the first image sensor into the coordinate system of the virtual second image sensor to generate a stereo image pair from the first initial image and the second initial image for the object.
-
- (i) A sub-image Is of the second initial image I2 that is most similar to the first image I1 is found within the second image I2. The sub-image Is may be sought by measuring the quality of the match between the sub-image Is and the selected portion of the second initial image I2. For quality measurement of the matching, a simple colour correlation between several appropriately selected sub-images Is and the first initial image I1 may be calculated. This calculation is preferably performed in a multi-scale approach; first finding the good matches on a rough scale, and then checking and adjusting the highest quality matches on finer scales. This matching step results in at least one sub-image Is, but preferably a plurality of candidate sub-images Is.
- (ii) For each of the candidate sub-images Is obtained in the previous step (i), the center of the picture C2 of the mirrored view of the second camera in the second image I2 is roughly estimated using various selected values of the relative focal length H2 of the second camera 104 , and various selected distance values d between the second camera 104 and the object 120 .
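The coarse-to-fine matching of step (i) can be sketched as follows. This minimal numpy version uses sum-of-squared-differences on average-pooled images instead of colour correlation, and plants a synthetic template at a known position; all data and numbers are assumptions for illustration:

```python
import numpy as np

def downscale(img: np.ndarray, k: int) -> np.ndarray:
    """Average-pool by factor k (dimensions assumed divisible by k)."""
    h, w = img.shape
    return img.reshape(h // k, k, w // k, k).mean(axis=(1, 3))

def best_offset(search: np.ndarray, templ: np.ndarray):
    """Exhaustive sum-of-squared-differences match; returns the top-left offset."""
    th, tw = templ.shape
    best, pos = np.inf, (0, 0)
    for y in range(search.shape[0] - th + 1):
        for x in range(search.shape[1] - tw + 1):
            d = search[y:y + th, x:x + tw] - templ
            s = float((d * d).sum())
            if s < best:
                best, pos = s, (y, x)
    return pos

def coarse_to_fine(search, templ, k=4, radius=4):
    """Locate templ coarsely on k-times downscaled images, then refine the
    estimate on the full-resolution images within a small neighbourhood."""
    cy, cx = best_offset(downscale(search, k), downscale(templ, k))
    cy, cx = cy * k, cx * k
    th, tw = templ.shape
    best, pos = np.inf, (cy, cx)
    for y in range(max(0, cy - radius), min(search.shape[0] - th, cy + radius) + 1):
        for x in range(max(0, cx - radius), min(search.shape[1] - tw, cx + radius) + 1):
            d = search[y:y + th, x:x + tw] - templ
            s = float((d * d).sum())
            if s < best:
                best, pos = s, (y, x)
    return pos

rng = np.random.default_rng(0)
templ = rng.random((8, 8))
search = np.zeros((40, 40))
search[12:20, 16:24] = templ           # plant the template at (12, 16)
pos = coarse_to_fine(search, templ)
```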
- the direction of the vector m is thus known, but its length is still not known; the length of the vector m is determined as follows. A point of the portable device 100 having coordinates R 1 in the coordinate system of the virtual second camera 105 is projected to the image coordinates U 1 according to U 1,x = R 1,x ·H 2 /R z and U 1,y = R 1,y ·H 2 /R z , and the same stands for R 2 and U 2 . Here f/s is the relative focal length H 2 of the second camera 104 . Consequently:

U 1,x − U 2,x = (R 1,x − R 2,x )·H 2 /R z
U 1,y − U 2,y = (R 1,y − R 2,y )·H 2 /R z

Since the depth of the two points is roughly equal, the distance d U1,U2 between the image points U 1 , U 2 is approximately d u1,u2 ·H 2 /R z , where d u1,u2 is the known real distance between the two selected points u 1 , u 2 of the portable device 100 . That means R z = H 2 ·d u1,u2 /d U1,U2 . The above search technique is only an example and may be replaced by other techniques that are obvious for those skilled in the art, and therefore it in no way limits the present invention to such a specific search technique. The length of the vector m can then be calculated using the above equations.
Claims (8)
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US14/523,902 US9615081B2 (en) | 2013-10-28 | 2014-10-26 | Method and multi-camera portable device for producing stereo images |
US16/283,214 USRE47925E1 (en) | 2013-10-28 | 2019-02-22 | Method and multi-camera portable device for producing stereo images |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201361896168P | 2013-10-28 | 2013-10-28 | |
US14/523,902 US9615081B2 (en) | 2013-10-28 | 2014-10-26 | Method and multi-camera portable device for producing stereo images |
Publications (2)
Publication Number | Publication Date |
---|---|
US20150116463A1 US20150116463A1 (en) | 2015-04-30 |
US9615081B2 true US9615081B2 (en) | 2017-04-04 |
Family
ID=52994938
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US14/523,902 Ceased US9615081B2 (en) | 2013-10-28 | 2014-10-26 | Method and multi-camera portable device for producing stereo images |
US16/283,214 Active 2035-06-11 USRE47925E1 (en) | 2013-10-28 | 2019-02-22 | Method and multi-camera portable device for producing stereo images |
Family Applications After (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US16/283,214 Active 2035-06-11 USRE47925E1 (en) | 2013-10-28 | 2019-02-22 | Method and multi-camera portable device for producing stereo images |
Country Status (1)
Country | Link |
---|---|
US (2) | US9615081B2 (en) |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20170324949A1 (en) * | 2016-05-04 | 2017-11-09 | Apple Inc. | Resolving Three Dimensional Spatial Information using Time-shared Structured Lighting that Embeds Digital Communication |
US10373337B2 (en) | 2016-03-07 | 2019-08-06 | Lateral Reality Kft. | Methods and computer program products for calibrating stereo imaging systems by using a planar mirror |
US11036987B1 (en) | 2019-06-27 | 2021-06-15 | Facebook Technologies, Llc | Presenting artificial reality content using a mirror |
US11055920B1 (en) * | 2019-06-27 | 2021-07-06 | Facebook Technologies, Llc | Performing operations using a mirror in an artificial reality environment |
US11145126B1 (en) | 2019-06-27 | 2021-10-12 | Facebook Technologies, Llc | Movement instruction using a mirror in an artificial reality environment |
Families Citing this family (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10146299B2 (en) * | 2013-11-08 | 2018-12-04 | Qualcomm Technologies, Inc. | Face tracking for additional modalities in spatial interaction |
US9086582B1 (en) | 2014-08-20 | 2015-07-21 | David Kind, Inc. | System and method of providing custom-fitted and styled eyewear based on user-provided images and preferences |
US20160349509A1 (en) * | 2015-05-26 | 2016-12-01 | Microsoft Technology Licensing, Llc | Mixed-reality headset |
US20180192031A1 (en) * | 2017-01-03 | 2018-07-05 | Leslie C. Hardison | Virtual Reality Viewing System |
US10839578B2 (en) * | 2018-02-14 | 2020-11-17 | Smarter Reality, LLC | Artificial-intelligence enhanced visualization of non-invasive, minimally-invasive and surgical aesthetic medical procedures |
CN109753930B (en) * | 2019-01-03 | 2021-12-24 | 京东方科技集团股份有限公司 | Face detection method and face detection system |
US11115512B1 (en) | 2020-12-12 | 2021-09-07 | John G. Posa | Smartphone cases with integrated electronic binoculars |
Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20110001793A1 (en) | 2008-07-11 | 2011-01-06 | Takaaki Moriyama | Three-dimensional shape measuring apparatus, integrated circuit, and three-dimensional shape measuring method |
US20110007205A1 (en) * | 2009-07-08 | 2011-01-13 | Dechnia, LLC | Rear to forward facing camera adapter |
US20120026298A1 (en) | 2010-03-24 | 2012-02-02 | Filo Andrew S | Apparatus and method for producing images for stereoscopic viewing |
US20120098938A1 (en) | 2010-10-25 | 2012-04-26 | Jin Elaine W | Stereoscopic imaging systems with convergence control for reducing conflicts between accommodation and convergence |
US8189100B2 (en) | 2006-07-25 | 2012-05-29 | Qualcomm Incorporated | Mobile device with dual digital camera sensors and methods of using the same |
US20120320152A1 (en) * | 2010-03-12 | 2012-12-20 | Sang Won Lee | Stereoscopic image generation apparatus and method |
US20130135445A1 (en) | 2010-12-27 | 2013-05-30 | 3Dmedia Corporation | Primary and auxiliary image capture devices for image processing and related methods |
US20140198976A1 (en) * | 2007-04-24 | 2014-07-17 | 21 Ct, Inc. | Method and system for fast dense stereoscopic ranging |
US20140307101A1 (en) * | 2013-04-16 | 2014-10-16 | Tout, Inc. | Method and apparatus for displaying and recording images using multiple image capturing devices integrated into a single mobile device |
US20150103146A1 (en) * | 2013-10-16 | 2015-04-16 | Qualcomm Incorporated | Conversion of at least one non-stereo camera into a stereo camera |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5886735A (en) * | 1997-01-14 | 1999-03-23 | Bullister; Edward T | Video telephone headset |
- 2014-10-26: US application US14/523,902 granted as patent US9615081B2, not active (ceased)
- 2019-02-22: US application US16/283,214 granted as patent USRE47925E1, active
Non-Patent Citations (2)
Title |
---|
Gluckman et al.: "Catadioptric Stereo Using Planar Mirrors", International Journal on Computer Vision, 2001, vol. 44 (1), pp. 65-79. |
Wu et al.: "Epipolar geometry of catadioptric stereo systems with planar solutions", Image and Vision Computing, 2009, vol. 27, pp. 1047-1061. |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10373337B2 (en) | 2016-03-07 | 2019-08-06 | Lateral Reality Kft. | Methods and computer program products for calibrating stereo imaging systems by using a planar mirror |
US11010925B2 (en) | 2016-03-07 | 2021-05-18 | Lateral Reality Kft. | Methods and computer program products for calibrating stereo imaging systems by using a planar mirror |
US20170324949A1 (en) * | 2016-05-04 | 2017-11-09 | Apple Inc. | Resolving Three Dimensional Spatial Information using Time-shared Structured Lighting that Embeds Digital Communication |
US9930320B2 (en) * | 2016-05-04 | 2018-03-27 | Apple Inc. | Resolving three dimensional spatial information using time-shared structured lighting that embeds digital communication |
US11036987B1 (en) | 2019-06-27 | 2021-06-15 | Facebook Technologies, Llc | Presenting artificial reality content using a mirror |
US11055920B1 (en) * | 2019-06-27 | 2021-07-06 | Facebook Technologies, Llc | Performing operations using a mirror in an artificial reality environment |
US11145126B1 (en) | 2019-06-27 | 2021-10-12 | Facebook Technologies, Llc | Movement instruction using a mirror in an artificial reality environment |
Also Published As
Publication number | Publication date |
---|---|
US20150116463A1 (en) | 2015-04-30 |
USRE47925E1 (en) | 2020-03-31 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
USRE47925E1 (en) | Method and multi-camera portable device for producing stereo images | |
US11830141B2 (en) | Systems and methods for 3D facial modeling | |
JP7002056B2 (en) | 3D model generator and 3D model generation method | |
US9094672B2 (en) | Stereo picture generating device, and stereo picture generating method | |
US8928736B2 (en) | Three-dimensional modeling apparatus, three-dimensional modeling method and computer-readable recording medium storing three-dimensional modeling program | |
US20190311497A1 (en) | Methods and computer program products for calibrating stereo imaging systems by using a planar mirror | |
CN106960454B (en) | Depth of field obstacle avoidance method and equipment and unmanned aerial vehicle | |
CN106981078B (en) | Sight line correction method and device, intelligent conference terminal and storage medium | |
WO2018032841A1 (en) | Method, device and system for drawing three-dimensional image | |
US20210125307A1 (en) | System and method for providing dolly zoom view synthesis | |
GB2588441A (en) | Method and system for estimating the geometry of a scene | |
JP4193342B2 (en) | 3D data generator | |
EP2866446B1 (en) | Method and multi-camera portable device for producing stereo images | |
US10580214B2 (en) | Imaging device and imaging method for augmented reality apparatus | |
WO2023112971A1 (en) | Three-dimensional model generation device, three-dimensional model generation method, and three-dimensional model generation program | |
US11847784B2 (en) | Image processing apparatus, head-mounted display, and method for acquiring space information | |
KR20110025083A (en) | Apparatus and method for displaying 3d image in 3d image system | |
CN110827230A (en) | Method and device for improving RGB image quality by TOF | |
CN111080689B (en) | Method and device for determining face depth map | |
US20240236288A9 (en) | Method And Apparatus For Generating Stereoscopic Display Contents | |
US20240137481A1 (en) | Method And Apparatus For Generating Stereoscopic Display Contents | |
JP2013164789A (en) | Similar image area retrieval device, similar image area retrieval method and similar image area retrieval program | |
WO2021100681A1 (en) | Three-dimensional model generation method and three-dimensional model generation device | |
US10701343B2 (en) | Measurement device and processor configured to execute measurement method | |
GÜVENDİK et al. | FPGA Based Disparity Value Estimation |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: LATERAL REALITY KFT., HUNGARY Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:TORMA, PETER;REEL/FRAME:034034/0869 Effective date: 20141020 |
|
STCF | Information on status: patent grant |
Free format text: PATENTED CASE |
|
RF | Reissue application filed |
Effective date: 20190222 |
|
RF | Reissue application filed |
Effective date: 20200203 |
|
FEPP | Fee payment procedure |
Free format text: MAINTENANCE FEE REMINDER MAILED (ORIGINAL EVENT CODE: REM.); ENTITY STATUS OF PATENT OWNER: SMALL ENTITY |
|
FEPP | Fee payment procedure |
Free format text: SURCHARGE FOR LATE PAYMENT, SMALL ENTITY (ORIGINAL EVENT CODE: M2554); ENTITY STATUS OF PATENT OWNER: SMALL ENTITY |
|
MAFP | Maintenance fee payment |
Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YR, SMALL ENTITY (ORIGINAL EVENT CODE: M2551); ENTITY STATUS OF PATENT OWNER: SMALL ENTITY Year of fee payment: 4 |