CN109361912A - Multi-tier camera apparatus for stereoscopic image capture - Google Patents

Multi-tier camera apparatus for stereoscopic image capture

Info

Publication number
CN109361912A
CN109361912A (application CN201710706614.5A)
Authority
CN
China
Prior art keywords
camera
sensor
multiple images
image
camera apparatus
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201710706614.5A
Other languages
Chinese (zh)
Inventor
Matthew Thomas Valente
Robert Anderson
David Gallup
Christopher Edward Hoover
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Google LLC
Original Assignee
Google LLC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Google LLC
Publication of CN109361912A


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20 Image signal generators
    • H04N13/204 Image signal generators using stereoscopic image cameras
    • H04N13/243 Image signal generators using stereoscopic image cameras using three or more 2D image sensors
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/50 Constructional details
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/30 Image reproducers
    • H04N13/332 Displays for viewing with the aid of special glasses or head-mounted displays [HMD]
    • H04N13/344 Displays for viewing with the aid of special glasses or head-mounted displays [HMD] with head-mounted left-right displays
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/45 Cameras or camera modules comprising electronic image sensors; Control thereof for generating image signals from two or more image sensors being of different type or operating in different modes, e.g. with a CMOS sensor for moving images in combination with a charge-coupled device [CCD] for still images
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/698 Control of cameras or camera modules for achieving an enlarged field of view, e.g. panoramic image capture
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N2213/00 Details of stereoscopic systems
    • H04N2213/001 Constructional or mechanical details

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Human Computer Interaction (AREA)
  • Studio Devices (AREA)
  • Stereoscopic And Panoramic Photography (AREA)
  • Testing, Inspecting, Measuring Of Stereoscopic Televisions And Televisions (AREA)

Abstract

This application relates to a multi-tier camera apparatus for stereoscopic image capture. In a general aspect, a camera apparatus can include a first tier of image sensors that includes a first plurality of image sensors, where the first plurality of image sensors are arranged in a circular shape and oriented such that the field of view of each of the first plurality of image sensors has an axis perpendicular to a tangent of the circular shape. The camera apparatus can include a second tier of image sensors that includes a second plurality of image sensors, where the second plurality of image sensors are oriented such that the field of view of each of the second plurality of image sensors has an axis that is not parallel to the field-of-view axes of the first plurality of image sensors.

Description

Multi-tier camera apparatus for stereoscopic image capture
Cross reference to related applications
This application claims priority to, and the benefit of, U.S. Provisional Application No. 62/376,140, entitled "Multi-Tier Camera Rig for Stereoscopic Image Capture," filed on August 17, 2016, which is incorporated herein by reference in its entirety.
Technical field
This description relates generally to camera apparatuses. In particular, the description relates to generating stereoscopic panoramas from captured images for display in a virtual reality (VR) and/or augmented reality (AR) environment.
Background
Techniques for panoramic photography can be used on images and video to provide a wide view of a scene. Conventionally, panoramic photography techniques and imaging techniques can be used to obtain panoramic images from a number of adjacent photographs taken with a conventional camera. The photographs can be aligned and mounted together to obtain a panoramic image.
Summary of the invention
A system of one or more computers, a camera apparatus, and image capture devices housed on the camera apparatus can be configured to perform particular operations or actions by virtue of having software, firmware, hardware, or a combination thereof installed on the system that in operation causes the system to perform the actions.
In one general aspect, a camera apparatus includes a first tier of image sensors that includes a first plurality of image sensors. The first plurality of image sensors can be arranged in a circular shape and oriented such that their field-of-view axes are perpendicular to tangents of the circular shape along which they are arranged. The camera apparatus also includes a second tier that includes a second plurality of image sensors. The second plurality of image sensors can be oriented such that their field-of-view axes are not parallel to the field-of-view axes of the first plurality of image sensors. The second tier can be located above the first tier of the camera apparatus.
Implementations can include one or more of the following features, alone or in combination with one or more other features. For example, in any or all of the above implementations, a radius of a circular camera housing that houses the first plurality of image sensors is defined such that a first field of view of a first image sensor of the first plurality of image sensors intersects a second field of view of a second image sensor of the first plurality of image sensors and a third field of view of a third image sensor of the first plurality of image sensors. In any or all of the above implementations, the first image sensor, the second image sensor, and the third image sensor of the first plurality of image sensors are disposed in a plane.
In another aspect, a camera apparatus includes a camera housing. The camera housing includes a lower circumference and an upper multi-faceted cap, with the lower circumference located below the multi-faceted cap. The camera apparatus can also include a first plurality of cameras arranged in a circular shape along the lower circumference of the camera housing such that each of the first plurality of cameras has an outward projection orthogonal to the lower circumference. The camera apparatus can also include a second plurality of cameras disposed on respective faces of the multi-faceted cap such that each of the second plurality of cameras has an outward projection that is not parallel to a normal of the lower circumference.
In another aspect, a method includes defining a first set of images for a first tier of a multi-tier camera apparatus, the first set of images being obtained from a first plurality of cameras arranged in a circular shape such that each of the first plurality of cameras has an outward projection orthogonal to the circular shape. The method can also include computing a first optical flow within the first set of images and stitching the first set of images together based on the first optical flow to create a first stitched image. The method also includes defining a second set of images for a second tier of the multi-tier camera apparatus. The second set of images can be obtained from a second plurality of cameras arranged such that each of the second plurality of cameras has an outward projection that is not parallel to a normal of the circular shape of the first plurality of cameras. The method can also include computing a second optical flow within the second set of images and stitching the second set of images together based on the second optical flow to create a second stitched image. The method generates an omnidirectional stereoscopic panoramic image by stitching the first stitched image and the second stitched image together.
Other embodiments of these aspects include corresponding computer systems, apparatus, and computer programs recorded on one or more computer storage devices, each configured to perform the actions of the method.
The details of one or more implementations are set forth in the accompanying drawings and the description below. Other features will be apparent from the description and drawings, and from the claims.
Brief description of the drawings
Fig. 1 is a block diagram of an example system for capturing and rendering stereoscopic panoramas in a 3D virtual reality (VR) environment.
Fig. 2 is a diagram depicting an example camera apparatus configured to capture images of a scene for generating stereoscopic panoramas.
Fig. 3 is a diagram depicting another example camera apparatus configured to capture images of a scene for generating stereoscopic panoramas.
Figs. 4A to 4D are diagrams depicting examples of a multi-tier camera apparatus and associated components.
Fig. 5 is a diagram showing the field-of-view axes of the cameras in the lower circumference of the camera housing of the multi-tier camera apparatus.
Fig. 6 is a diagram showing the field-of-view axes of the cameras in the upper multi-faceted cap of the camera housing of the multi-tier camera apparatus.
Fig. 7 is a diagram illustrating an example VR device.
Fig. 8 is a graph illustrating an example number of cameras as a function of camera field of view and number of neighbors.
Fig. 9 is a graph illustrating an example interpolated field of view as a function of camera field of view.
Fig. 10 is a graph illustrating an example selection of a configuration for the camera apparatus.
Fig. 11 is a graph illustrating an example relationship that can be used to determine a minimum number of cameras according to a predefined apparatus diameter.
Figs. 12A-B are line-drawing examples of distortion that can occur during image capture.
Figs. 13A-B depict examples of rays captured during the collection of a panoramic image.
Figs. 14A-B illustrate the use of an approximately planar perspective projection, as described in Figs. 13A-B.
Figs. 15A-C illustrate examples of approximately planar perspective projections applied to a plane of an image.
Figs. 16A-B illustrate examples of introducing vertical parallax.
Figs. 17A-B depict example points of a coordinate system that can be used to illustrate points in a 3D panorama.
Fig. 18 represents a projected view of the points depicted in Figs. 17A-17B.
Fig. 19 illustrates rays captured in an omnidirectional stereo image using the panoramic imaging techniques described in this disclosure.
Fig. 20 is a graph illustrating the maximum vertical parallax caused by points in 3D space.
Fig. 21 is a flow chart illustrating one embodiment of a process for generating a stereoscopic panoramic image.
Fig. 22 is a flow chart illustrating one embodiment of a process for capturing stereoscopic panoramic images.
Fig. 23 is a flow chart illustrating one embodiment of a process for rendering panoramic images in a head-mounted display.
Fig. 24 is a flow chart illustrating one embodiment of a process for determining image boundaries.
Fig. 25 is a flow chart illustrating one embodiment of a process for generating video content.
Fig. 26 shows an example of a computer device and a mobile computer device that can be used to implement the techniques described herein.
Like reference symbols in the various drawings indicate like elements.
Detailed description
Creating a panoramic image generally includes, for example, capturing images or video of a surrounding three-dimensional (3D) scene using a single camera or a number of cameras in a camera apparatus. When using a camera apparatus that houses several cameras, each camera can be synchronized and configured to capture images at a particular point in time. For example, the first frame captured by each camera can be captured at approximately the same time at which the second, third, and fourth cameras capture corresponding first frames. This image capture can continue in a simultaneous fashion until some or all of the scene is captured. Although many of the implementations are described in terms of cameras, the implementations can instead be described in terms of image sensors or in terms of camera housings (which can include image sensors).
Camera apparatuses that house multiple cameras can be configured to capture particular angles of a scene. For example, the cameras housed on the camera apparatus can be oriented at particular angles, and all (or at least part) of the content captured from those angles can be processed to generate a full panorama of a particular scene.
In some implementations, each of the cameras can be oriented at a different angle to capture different angles of the scene. In the event that only a portion of the scene is captured, or that some or all of the scene includes distortion, a number of processes can be performed to interpolate or configure any missing, corrupted, or distorted content from the panorama.
The following disclosure describes a number of apparatuses and methods for capturing, processing, correcting, and rendering 3D panoramic content for the purpose of displaying such content on a head-mounted display (HMD) device in a 3D virtual reality (VR) environment. References to virtual reality can also include, or can be, augmented reality. In some implementations, the camera apparatus can include multiple tiers of cameras to reduce or eliminate missing portions of a scene and to reduce interpolation. For example, in some implementations, the camera apparatus can include 16 cameras in a lower tier and 6 cameras in an upper tier. In some implementations, the ratio of lower-level (or lower-tier) cameras to upper-level (or upper-tier) cameras is greater than 2:1 but less than 3:1 (e.g., 2.67:1). The cameras can be directed at different angles so that each camera captures different content that can be processed to generate a panorama of a particular scene. It is also important that the ratio of cameras be suited to capturing 360° video with appropriate depth, focus, and so on, while reducing or minimizing the number of cameras and the amount of image processing.
Fig. 1 is a block diagram of an example system 100 for capturing and rendering stereoscopic panoramas in a 3D virtual reality (VR) environment. In the example system 100, a camera apparatus 102 can capture images, store them locally (e.g., in permanent or removable storage), and/or provide them over a network 104; alternatively, the images can be provided directly to an image processing system 106 for analysis and processing. In some implementations of system 100, a mobile device 108 can function as the camera apparatus 102 to provide images throughout the network 104. Once the images are captured, for example, the image processing system 106 can perform a number of calculations and processes on the images and provide the processed images to a head-mounted display (HMD) device 110 over the network 104 for rendering. In some implementations, the image processing system 106 can be included in the camera apparatus 102 and/or the HMD device 110. In some implementations, the image processing system 106 can also provide the processed images to a mobile device 108 and/or to a computing device 112 for rendering, storage, or further processing.
The HMD device 110 can represent a virtual reality headset, glasses, an eyepiece, or another wearable device capable of displaying virtual reality content. In operation, the HMD device 110 can execute a VR application (not shown) that can play back received and/or processed images to a user. In some implementations, the VR application can be hosted by one or more of the devices 106, 108, or 112 shown in Fig. 1. In one example, the HMD device 110 can provide video playback of a scene captured by the camera apparatus 102. In another example, the HMD device 110 can provide playback of still images stitched into a single panoramic scene.
The camera apparatus 102 can be configured for use as a camera (which can also be referred to as a capture device) and/or a processing device to gather image data for rendering content in a VR environment. Although the camera apparatus 102 is shown here as a block diagram described with particular functionality, it can take the form of any of the implementations shown in Figs. 2 to 6 and can additionally have the functionality described for camera apparatuses throughout this disclosure. For example, for simplicity in describing the functionality of system 100, Fig. 1 shows the camera apparatus 102 without the cameras that would be disposed around it to capture images. Other implementations of the camera apparatus 102 can include any number of cameras arranged in multiple tiers around the circumference of a circular camera apparatus such as apparatus 102.
As shown in Fig. 1, the camera apparatus 102 includes a number of cameras 139 and a communication system 132. The cameras 139 can include a single still camera or a single video camera. In some implementations, the cameras 139 can include multiple still cameras or multiple video cameras disposed (e.g., seated) side by side along a peripheral portion (e.g., a ring) of the apparatus 102, in one or more tiers in accordance with some embodiments. The cameras 139 can be video cameras, image sensors, stereoscopic cameras, infrared cameras, and/or mobile devices. The communication system 132 can be used to upload and download images, instructions, and/or other camera-related content. The communication can be wired or wireless, and can interface over a private or public network.
The camera apparatus 102 can be configured to function as a stationary apparatus or a rotating apparatus. Each camera on the apparatus can be disposed (e.g., placed) offset from a center of rotation of the apparatus. The camera apparatus 102 can be configured to rotate through 360 degrees to sweep and capture, for example, all or a portion of a 360-degree view of a scene. In some implementations, the apparatus 102 can be configured to operate in a stationary position, and in such a configuration, additional camera apparatuses can be added to the apparatus to capture additional outward views of the scene.
In some implementations, the camera apparatus 102 includes multiple digital video cameras disposed in a side-to-side or back-to-back fashion (e.g., as shown in Fig. 3) such that their lenses each point in a radially outward direction to view different portions of the surrounding scene or environment. In some implementations, the multiple digital video cameras are disposed in a tangential configuration with viewing directions tangent to the circular camera apparatus 102. For example, the camera apparatus 102 can include multiple digital video cameras disposed such that their lenses each point in a radially outward direction while being arranged tangentially to a base of the apparatus. The digital video cameras can be pointed to capture content in different directions to view different angled portions of the surrounding scene.
In some implementations, the camera apparatus 102 can include multiple tiers of digital video cameras. For example, the camera apparatus can include a lower tier in which digital video cameras are disposed in a side-to-side or back-to-back fashion, and an upper tier with additional cameras disposed above the lower-tier cameras. In some implementations, the upper-tier cameras face outward from the camera apparatus 102 in a plane different from that of the lower-tier cameras. For example, the upper-tier cameras can be disposed in a plane perpendicular or nearly perpendicular to that of the lower-tier cameras, and each camera can face outward from the center of the lower tier. In some implementations, the number of cameras in the upper tier can differ from the number of cameras in the lower tier.
In some implementations, the images from the cameras in the lower tier can be processed in adjacent pairs on the camera apparatus 102. In such a configuration, each first camera in each set of adjacent cameras is disposed (e.g., placed) tangentially to the circular path of the camera apparatus base and aligned (e.g., with the camera lens pointed) in a leftward direction. Each second camera in each set of adjacent cameras is disposed (e.g., placed) tangentially to the circular path of the camera apparatus base and aligned (e.g., with the camera lens pointed) in a rightward direction. The upper-tier cameras can be arranged similarly relative to one another. In some implementations, adjacent cameras are (e.g., consecutive) neighbors on the same level or tier.
Example settings for the cameras used on the camera apparatus 102 can include a progressive scan mode at about 60 frames per second (i.e., a mode in which each raster line is sampled to produce every frame of the video, rather than the interlaced mode that is the standard recording mode of most video cameras). In addition, each of the cameras can be configured with identical (or similar) settings. Configuring each camera to identical (or similar) settings can provide the advantage that, after capture, the images can be stitched together in a desirable fashion. Example settings can include setting one or more of the cameras to the same zoom, focus, exposure, and shutter speed, as well as setting the cameras to be white balanced, with stabilization features either correlated or turned off.
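The uniform-settings idea above lends itself to a short sketch. This is a minimal illustration, assuming a hypothetical per-camera `configure` call; the settings fields stand in for whatever control API a particular rig actually exposes.

```python
# Minimal sketch: push one shared settings object to every camera on the rig
# so captured frames can be stitched without per-camera drift. The Camera
# interface (the configure call) is a hypothetical stand-in, not a real API.
from dataclasses import dataclass

@dataclass(frozen=True)
class CameraSettings:
    fps: int = 60                # progressive scan at ~60 frames per second
    zoom: float = 1.0
    focus_m: float = 2.0         # focus distance in meters
    exposure_ev: float = 0.0
    shutter_s: float = 1 / 120
    white_balance_k: int = 5500
    stabilization: bool = False  # off (or correlated) so crops stay identical

def apply_uniform_settings(cameras, settings: CameraSettings) -> None:
    for camera in cameras:
        camera.configure(settings)  # hypothetical per-camera control call
```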
In some implementations, the camera apparatus 102 can be calibrated prior to being used to capture one or more images or videos. For example, each camera on the camera apparatus 102 can be calibrated and/or configured to take panoramic video. The settings can include, for example, configuring the apparatus to operate with a wide field of view, at a particular rotational speed, around a 360-degree sweep in a clockwise or counterclockwise direction. In some implementations, the cameras on the apparatus 102 can be configured to capture, for example, one frame per degree of a 360-degree sweep of a capture path around a scene. In some implementations, the cameras on the apparatus 102 can be configured to capture, for example, multiple frames per degree of a 360-degree (or smaller) sweep of a capture path around a scene. In some implementations, the cameras on the apparatus 102 can be configured to capture, for example, multiple frames around a sweep of a capture path around a scene, without having to capture a particular number of frames per degree.
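The frame-per-degree calibration implies a simple capture schedule, worked out in the sketch below with the 60-frames-per-second setting mentioned earlier. The function and its defaults are illustrative only.

```python
# Capture-schedule arithmetic for a rotating sweep: at one frame per degree,
# a 60 fps camera must rotate at 60 degrees per second, so a full 360-degree
# sweep takes 6 seconds.
def sweep_plan(fps: float = 60.0, frames_per_degree: float = 1.0):
    total_frames = int(360 * frames_per_degree)
    rotation_speed_dps = fps / frames_per_degree   # degrees per second
    sweep_seconds = 360.0 / rotation_speed_dps
    return total_frames, rotation_speed_dps, sweep_seconds

print(sweep_plan())                      # (360, 60.0, 6.0)
print(sweep_plan(frames_per_degree=4))   # (1440, 15.0, 24.0): slower sweep
```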
In some implementations, the cameras can be configured (e.g., set up) to function synchronously to capture video from the cameras on the camera apparatus at a specific point in time. In some implementations, the cameras can be configured to function synchronously to capture particular portions of video from one or more of the cameras over a period of time. Another example of calibrating the camera apparatus can include configuring how incoming images are stored. For example, the incoming images can be stored as individual frames or as video (e.g., .avi files, .mpg files), and the images stored in this way can be uploaded to the Internet, to another server or device, or stored locally with each camera on the camera apparatus 102. In some implementations, the incoming images can be stored as encoded video.
The image processing system 106 includes an interpolation module 114, a capture correction module 116, and a stitching module 118. The interpolation module 114 represents, for example, algorithms that can be used to sample portions of digital images and video and to determine a number of interpolated images that would be likely to occur between adjacent images captured from the camera apparatus 102. In some implementations, the interpolation module 114 can be configured to determine interpolated image fragments, image portions, and/or horizontal or vertical image strips between adjacent images. In some implementations, the interpolation module 114 can be configured to determine flow fields (and/or flow vectors) between related pixels in adjacent images. Flow fields can be used both to compensate for transformations that the images have undergone and to process images that have undergone transformations. For example, flow fields can be used to compensate for a transformation of a particular pixel grid of an obtained image. In some implementations, the interpolation module 114 can generate, by interpolation of surrounding images, one or more images that are not part of the captured images, and can interleave the generated images into the captured images to generate additional virtual reality content for a scene.
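As a rough illustration of how such a flow field can be used, the sketch below estimates a dense flow between two adjacent images and resamples one of them along that flow. OpenCV's Farneback estimator is an implementation choice not specified in the text, and using the forward flow as an approximate backward map is a common simplification.

```python
# Sketch: estimate a dense flow field between adjacent images, then use it to
# resample one image partway toward its neighbor (a building block for the
# interpolated frames discussed above).
import cv2
import numpy as np

def flow_between(img_a: np.ndarray, img_b: np.ndarray) -> np.ndarray:
    gray_a = cv2.cvtColor(img_a, cv2.COLOR_BGR2GRAY)
    gray_b = cv2.cvtColor(img_b, cv2.COLOR_BGR2GRAY)
    return cv2.calcOpticalFlowFarneback(gray_a, gray_b, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)

def warp_by_flow(img_a: np.ndarray, flow: np.ndarray, t: float) -> np.ndarray:
    """Resample img_a a fraction t of the way along the flow field.

    Uses the forward flow as an approximate backward map, a common
    simplification in sketches like this one.
    """
    h, w = flow.shape[:2]
    grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
    map_x = (grid_x + t * flow[..., 0]).astype(np.float32)
    map_y = (grid_y + t * flow[..., 1]).astype(np.float32)
    return cv2.remap(img_a, map_x, map_y, cv2.INTER_LINEAR)
```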
The capture correction module 116 can be configured to correct captured images by compensating for a non-ideal capture setup. Example capture setups can include, as non-limiting examples, a circular camera trajectory, parallel principal (camera) axes, viewing directions perpendicular to the camera trajectory, viewing directions tangent to the camera apparatus trajectory, and/or other capture conditions. In some implementations, the capture correction module 116 can be configured to compensate for one or both of a non-circular camera trajectory during image capture and/or non-parallel principal axes during image capture.
The capture correction module 116 can be configured to adjust a particular set of images to compensate for content captured using multiple cameras with a camera separation greater than about 30 degrees. For example, if the distance between cameras is 40 degrees, the capture correction module 116 can account for any missing content in a particular scene caused by too little camera coverage, by collecting content from additional cameras or by interpolating the missing content.
In some implementations, the capture correction module 116 can also be configured to adjust the set of images to compensate for camera misalignment caused by camera pose errors and the like. For example, if camera pose errors (e.g., errors caused by the orientation and position of a camera) occur during image capture, the module 116 can blend two or more columns of pixels from several image frames to remove artifacts, including artifacts caused by poor exposure (or exposure changes from image frame to image frame) and/or caused by misalignment of one or more cameras. The stitching module 118 can be configured to generate 3D stereoscopic images based on defined, obtained, and/or interpolated images. The stitching module 118 can be configured to blend/stitch pixels and/or image strips from multiple image portions. Stitching can be based on the flow fields determined by the interpolation module 114, for example. For example, the stitching module 118 can receive (from the interpolation module 114) interpolated image frames that are not part of the set of images, and can interleave the image frames into the set of images. The interleaving can include the module 118 stitching the image frames and the set of images together based, at least in part, on the optical flow generated by the interpolation module 114.
The stitched combination can be used to generate an omnidirectional stereo (e.g., omnistereo) panorama for display in a VR head-mounted display. The image frames can be based on captured video streams collected from a number of adjacent pairs of cameras disposed on a particular apparatus. Such an apparatus can include about 12 to about 16 cameras in a first tier or level of the apparatus and 4 to 8 cameras in a second tier or level of the apparatus, where the second tier is located above the first tier. In some implementations, an odd number of cameras can be included in each tier of the apparatus. In some implementations, the apparatus includes more than one or two sets of adjacent cameras. In some implementations, the apparatus can include multiple sets of adjacent cameras that can be seated side by side on the apparatus. In some implementations, the stitching module 118 can use pose information associated with at least one adjacent pair to pre-stitch a portion of the set of images before performing the interleaving. Adjacent pairs on a camera apparatus are shown and described more clearly below in connection with, for example, Fig. 3.
In some implementations, using optical flow techniques to stitch images together can include stitching together captured video content. Such optical flow techniques can be used to generate intermediate video content between particular pieces of video content previously captured using camera pairs and/or single cameras. This technique can be used as a way to simulate, on a circular stationary camera apparatus, a series of cameras capturing images. The simulated cameras can capture content similar to the method of sweeping a single camera around in a circular shape (e.g., a circle, an approximate circle, a circular pattern) to capture a 360-degree image, but in the above technique fewer cameras are actually placed on the apparatus, and the apparatus can be stationary. The ability to simulate a series of cameras also provides the advantage of being able to capture content for every frame in the video (e.g., capturing 360 images at a capture interval of one image per degree).
By using a dense set of images (e.g., 360 images at one image per degree), the generated intermediate video content can be stitched to the actually captured video content using optical flow, when in fact the camera apparatus captured fewer than 360 images. For example, if the circular camera apparatus includes 8 pairs of cameras (i.e., 16 cameras) or 16 unpaired cameras, the number of captured images can be as low as 16 images. Optical flow techniques can be used to simulate content between the 16 images to provide 360 degrees of video content.
In some implementations, using optical flow techniques can improve interpolation efficiency. For example, instead of interpolating 360 images, optical flow can be computed between each pair of consecutive cameras (e.g., [1-2], [2-3], [3-4]). Given the 16 captured images and the optical flow, the interpolation module 114 and/or the capture correction module 116 can compute any pixel in any intermediate view, without having to interpolate an entire intermediate image between the 16 captured images.
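A sketch of that per-pixel idea follows, under stated assumptions: `flows[i]` holds a precomputed flow from camera i to camera i+1, the 16 cameras are evenly spaced, and pushing a pixel a fraction t along the flow approximates the intermediate view (flow inversion is ignored). All names are illustrative.

```python
# Sketch: compute one pixel of a virtual intermediate view directly from the
# captured image of the nearest camera and the pairwise flow to its neighbor,
# without synthesizing the whole intermediate frame.
import numpy as np

def bilinear_sample(img, x, y):
    x = float(np.clip(x, 0, img.shape[1] - 1))
    y = float(np.clip(y, 0, img.shape[0] - 1))
    x0, y0 = int(x), int(y)
    x1 = min(x0 + 1, img.shape[1] - 1)
    y1 = min(y0 + 1, img.shape[0] - 1)
    fx, fy = x - x0, y - y0
    top = (1 - fx) * img[y0, x0] + fx * img[y0, x1]
    bottom = (1 - fx) * img[y1, x0] + fx * img[y1, x1]
    return (1 - fy) * top + fy * bottom

def pixel_in_intermediate_view(images, flows, view_angle_deg, x, y,
                               num_cameras=16):
    spacing = 360.0 / num_cameras
    i = int(view_angle_deg // spacing) % num_cameras  # camera before the view
    t = (view_angle_deg % spacing) / spacing          # fraction toward i + 1
    dx, dy = flows[i][y, x]                           # flow from cam i to i+1
    # Move the pixel a fraction t along the flow from camera i's image.
    return bilinear_sample(images[i], x + t * dx, y + t * dy)
```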
In some implementations, the stitching module 118 can be configured to stitch images obtained from an apparatus with multiple tiers of cameras, where the tiers are located above or below one another. The stitching module 118 can process the video content captured from the cameras of each tier to create a stitched image for each tier, and can then stitch the stitched images associated with the tiers together to generate a 360-degree image. For example, the camera apparatus can include 16 cameras in a lower tier and 6 cameras in an upper tier, where the upper tier is located above the lower tier on the apparatus. In such an example, the stitching module 118 can stitch the images from the 16 cameras of the lower tier together to generate a stitched image associated with the lower tier (e.g., a lower-tier stitched image). The stitching module 118 can also stitch the images from the 6 cameras of the upper tier together to generate a stitched image associated with the upper tier (e.g., an upper-tier stitched image). To generate the 360-degree image, the stitching module can then stitch the lower-tier stitched image and the upper-tier stitched image together. In some implementations, adjacent cameras are (e.g., consecutive) neighbors on the same level or tier.
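Structurally, the two-tier stitch described above might look like the sketch below. The naive horizontal concatenation is only a stand-in for the flow-based stitching that the stitching module 118 performs; the point is the per-tier-then-cross-tier control flow.

```python
# Sketch of the two-tier pipeline: stitch each tier's ring of images into a
# per-tier panorama, then join the two panoramas along their shared seam.
import cv2
import numpy as np

def stitch_ring(images):
    """Stand-in for flow-based stitching: naive horizontal concatenation.

    Assumes all frames within a tier share the same height.
    """
    return np.hstack(images)

def build_360_image(lower_images, upper_images):
    lower_pano = stitch_ring(lower_images)   # e.g., 16 lower-tier cameras
    upper_pano = stitch_ring(upper_images)   # e.g., 6 upper-tier cameras
    # Match widths, then stitch the upper-tier panorama above the lower-tier
    # panorama to form the combined 360-degree image.
    upper_pano = cv2.resize(upper_pano,
                            (lower_pano.shape[1], upper_pano.shape[0]))
    return np.vstack([upper_pano, lower_pano])
```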
The image processing system 106 also includes a projection module 120 and an image correction module 122. The projection module 120 can be configured to generate 3D stereoscopic images by projecting images onto a planar perspective plane. For example, the projection module 120 can obtain a projection of a particular set of images, and can configure a re-projection of a portion of the set of images by converting some of the images from a planar perspective projection to a spherical (i.e., equirectangular) perspective projection. The conversions include projection modeling techniques.
Projection modeling can include defining a projection center and a projection plane. In the examples described in this disclosure, the projection center can represent an optical center at an origin (0, 0, 0) of a predefined xyz coordinate system. The projection plane can be placed in front of the projection center, with the camera facing along the z-axis of the xyz coordinate system to capture images. In general, a projection can be computed using the intersection, with the planar perspective plane, of a particular image ray from a coordinate (x, y, z) to the projection center. Conversions of the projection can be carried out, for example, by manipulating the coordinate systems using matrix calculations.
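In code, the ray-intersection computation described above reduces to the familiar pinhole division, and the same operation can be phrased as a matrix calculation in homogeneous coordinates. The choice of projection plane z = f with f = 1 below is an arbitrary assumption for illustration.

```python
# Sketch: project a scene point onto a planar perspective plane by
# intersecting the ray from the point to the projection center (the origin)
# with the plane z = f.
import numpy as np

def plane_perspective_project(point_xyz, f=1.0):
    x, y, z = point_xyz
    if z <= 0:
        raise ValueError("point must be in front of the projection center")
    return np.array([f * x / z, f * y / z])

# The same operation as a matrix calculation in homogeneous coordinates;
# swapping in real camera intrinsics for K changes the projection plane.
K = np.eye(3)                      # f = 1, principal point at the origin
p = K @ np.array([2.0, 1.0, 4.0])
print(p[:2] / p[2])                                 # [0.5  0.25]
print(plane_perspective_project([2.0, 1.0, 4.0]))   # the same result
```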
Projection modeling for stereoscopic panoramas can include using multi-perspective images that do not have a single projection center. The multi-perspective is typically shown as a circular shape (e.g., spherical) (see Fig. 13B). When rendering content, the systems described herein can use a sphere as an approximation when converting from one coordinate system to another.
In general, a spherical (i.e., equirectangular) projection provides a plane that is sphere-shaped, with the center of the sphere equally surrounding the projection center. A perspective projection provides a view of images of 3D objects on a planar (e.g., 2D surface) perspective plane to approximate a user's actual visual perception. In general, images can be rendered on flat image planes (e.g., a computer monitor, a mobile device LCD screen), so the projection is shown in planar perspective in order to provide an undistorted view. However, planar projection may not allow a 360-degree field of view, so captured images (e.g., video) can be stored in equirectangular (i.e., spherical) perspective and re-projected to planar perspective at render time.
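A sketch of that render-time re-projection: an equirectangular pixel is mapped to a viewing direction on the sphere, and the direction is then intersected with a planar perspective plane. The longitude/latitude conventions below are assumptions, since the text does not fix them.

```python
# Sketch: equirectangular (spherical) storage to planar perspective at render
# time. Pixel -> unit direction -> intersection with the plane z = f.
import numpy as np

def equirect_to_direction(u, v, width, height):
    lon = (u / width - 0.5) * 2.0 * np.pi    # -pi .. pi across the image
    lat = (0.5 - v / height) * np.pi         # +pi/2 at the top row
    return np.array([np.cos(lat) * np.sin(lon),
                     np.sin(lat),
                     np.cos(lat) * np.cos(lon)])

def direction_to_perspective(d, f=1.0):
    if d[2] <= 0:
        return None                          # behind the projection plane
    return (f * d[0] / d[2], f * d[1] / d[2])

# A forward-facing equirectangular pixel lands at the center of the view.
d = equirect_to_direction(u=512, v=256, width=1024, height=512)
print(direction_to_perspective(d))           # (0.0, 0.0)
```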
After particular re-projections are completed, the projection module 120 can transmit the re-projected portions of images for rendering in the HMD. For example, the projection module 120 can provide a re-projected portion to a left-eye display in the HMD 110 and a re-projected portion to a right-eye display in the HMD 110. In some implementations, the projection module 120 can be configured to calculate and reduce vertical parallax by performing the above re-projections.
The image correction module 122 can be configured to generate 3D stereoscopic images by compensating for distortion, including, but not limited to, perspective distortion. In some implementations, the image correction module 122 can determine the particular distance at which optical flow is maintained for 3D stereo, and can segment the images to show only the portions of a scene in which such flow is maintained. For example, the image correction module 122 can determine that the optical flow of 3D stereo images is maintained between about one radial meter from an outer edge of the circular camera apparatus 102 and, for example, about five radial meters from the outer edge of the camera apparatus. Accordingly, the image correction module 122 can ensure that the sampling between one meter and five meters is selected for rendering in the HMD 110 in a projection free of distortion, while also providing appropriate 3D stereo effects with appropriate parallax for a user of the HMD 110.
In some implementations, the image correction module 122 can estimate optical flow by adjusting particular images. The adjustments can include, for example, rectifying a portion of an image, determining an estimated camera pose associated with the portion of the image, and determining a flow between images in the portion. In a non-limiting example, the image correction module 122 can compensate for a rotation difference between two particular images for which flow is being computed. This correction can function to remove the flow component caused by the rotation difference (i.e., rotation flow). Such correction results in flow caused by translation (e.g., parallax flow), which can reduce the complexity of the flow estimation calculations while making the resulting images accurate and robust. In some implementations, processes other than image correction can be performed on the images before rendering. For example, stitching, blending, or additional corrective processes can be performed on the images before rendering is carried out.
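One way to realize the rotation compensation described above is to warp one image by the pure-rotation homography K R^T K^-1 before estimating flow, so that the remaining flow is (approximately) parallax only. The sketch below assumes the intrinsics K and an estimated rotation R are available from the pose estimate, and again uses Farneback flow as a stand-in estimator.

```python
# Sketch: remove the rotation component of the flow between two views by
# derotating image B with the pure-rotation homography K @ R.T @ inv(K),
# then estimating flow. What remains is (approximately) parallax flow.
import cv2
import numpy as np

def derotate(img_b, K, R):
    """Warp img_b as if it had been captured with camera A's orientation.

    R is assumed to rotate camera A's frame into camera B's frame.
    """
    H = K @ R.T @ np.linalg.inv(K)
    h, w = img_b.shape[:2]
    return cv2.warpPerspective(img_b, H, (w, h))

def parallax_flow(img_a, img_b, K, R):
    derotated_b = derotate(img_b, K, R)
    gray_a = cv2.cvtColor(img_a, cv2.COLOR_BGR2GRAY)
    gray_b = cv2.cvtColor(derotated_b, cv2.COLOR_BGR2GRAY)
    # Flow left after derotation is driven by translation (parallax), which
    # simplifies the estimation problem as described above.
    return cv2.calcOpticalFlowFarneback(gray_a, gray_b, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
```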
In some implementations, the image correction module 122 can correct projection distortion caused by image content captured with camera geometries that are not based on planar perspective projections. For example, corrections can be applied to the images by interpolating images from a number of different viewing angles and by conditioning the viewing rays associated with the images as originating from a common origin. The interpolated images can be interleaved into the captured images to produce virtual content that appears accurate to the human eye, with a level of rotational parallax that is comfortable for the human eye.
In the example system 100, the devices 106, 108, and 112 can be a laptop computer, a desktop computer, a mobile computing device, or a gaming console. In some implementations, the devices 106, 108, and 112 can be a mobile computing device that can be disposed (e.g., placed/located) within the HMD device 110. The mobile computing device can include a display device that can be used as the screen of the HMD device 110, for example. The devices 106, 108, and 112 can include hardware and/or software for executing a VR application. In addition, the devices 106, 108, and 112 can include hardware and/or software that can recognize, monitor, and track 3D movement of the HMD device 110 when these devices are placed in front of the HMD device 110 or held within a range of positions relative to it. In some implementations, the devices 106, 108, and 112 can provide additional content to the HMD device 110 over the network 104. In some implementations, the devices 102, 106, 108, 110, and 112 can be paired with or connected to/interfaced with one or more of each other through the network 104. The connection can be wired or wireless. The network 104 can be a public communications network or a private communications network.
The system 100 can include electronic storage. The electronic storage can be included in any of the devices (e.g., the camera apparatus 102, the image processing system 106, the HMD device 110, and/or the like). The electronic storage can include non-transitory storage media that electronically store information. The electronic storage can be configured to store captured images, obtained images, pre-processed images, post-processed images, and so on. Images captured with any of the disclosed camera apparatuses can be processed and stored as one or more streams of video, or stored as individual frames. In some implementations, storage can occur during capture, and rendering can occur directly after portions of the capture, to enable faster access to panoramic stereo content earlier than if capture and processing were not concurrent.
Fig. 2 is a diagram depicting an example camera apparatus 200 configured to capture images of a scene for generating stereoscopic panoramas. The camera apparatus 200 includes a first camera 202A and a second camera 202B attached to a ring-shaped support base (not shown). As shown, the cameras 202A and 202B are disposed in an annular position facing directly outward (toward the images/scene to be captured) and parallel to a center of rotation or axis (A1) of the apparatus 200. In some implementations, the diagram of Fig. 2 can correspond to one tier of a multi-tier camera apparatus.
In the depicted example, the cameras 202A and 202B are disposed (e.g., placed) a distance (B1) apart on a mounting plate 208. In some implementations, the distance (B1) between each camera on the camera apparatus 200 can represent an average human interpupillary distance (IPD). Placing the cameras an IPD distance apart can approximate how human eyes would view images as they rotate (left or right, as shown by arrow 204) to scan a scene around the capture path indicated by arrow 204. Example average human IPD measurements can be about 5 centimeters to about 6.5 centimeters. In some implementations, each camera disposed a standard IPD distance apart can be part of a stereo pair of cameras.
In some implementations, the camera apparatus 200 can be configured to approximate the diameter of a standard human head. For example, the camera apparatus 200 can be designed with a diameter 206 of about 8 centimeters to about 10 centimeters. This diameter 206 can be selected for the apparatus 200 to approximate how a human head would rotate relative to the center of rotation A1 and view scene images with human eyes. Other measurements are possible, and the apparatus 200 or the system 100 can adjust the capture techniques and resulting images if, for example, a larger diameter were to be used.
In a non-limiting example, the camera apparatus 200 can have a diameter 206 of about 8 centimeters to about 10 centimeters, and can house cameras placed an IPD distance of about 6 centimeters apart. A number of apparatus arrangements are described below. Each arrangement described in this disclosure can be configured with the above-mentioned or other diameters and distances between cameras.
As shown in Fig. 2, the two cameras 202A, 202B can be configured with a wide field of view. For example, the cameras can capture a field of view of about 150 degrees to about 180 degrees. The cameras 202A, 202B may have fish-eye lenses to capture the wider fields of view. In some implementations, the cameras 202A, 202B function as a stereo pair.
In operation, the apparatus 200 can rotate 360 degrees around the center of rotation A1 to capture a panoramic scene. Alternatively, the apparatus can remain stationary, and additional camera apparatuses can be added to the camera apparatus 200 to capture additional portions of the 360-degree scene (e.g., as shown in Figs. 3 and 4).
Fig. 3 is a diagram depicting another example camera apparatus 300 configured to capture images of a scene for generating stereoscopic panoramas. The camera apparatus 300 includes a number of cameras 302A-302H attached to a ring-shaped support base (not shown). The first camera 302A is shown with a solid line, and the additional cameras 302B-302H are shown with broken lines to indicate that they are optional. In contrast to the parallel-mounted cameras shown in the camera apparatus 200 (see cameras 202A and 202B), the cameras 302A-302H are disposed tangentially to the outer circumference of the circular camera apparatus 300. As shown in Fig. 3, the camera 302A has an adjacent camera 302B and an adjacent camera 302H.
In the depicted example, similar to the cameras in the apparatus 200, the cameras 302A and 302B are disposed a specific distance (B1) apart. In this example, the cameras 302A and 302B can function as an adjacent pair to capture angles off a center camera lens toward leftward and rightward directions, respectively, as described in detail below.
In one example, the camera apparatus 300 is a circular apparatus that includes a rotatable or fixed base (not shown) and a mounting plate 306 (which can also be referred to as a support), and the adjacent pair of cameras includes: a first camera 302A placed on the mounting plate 306 and configured to point in a viewing direction tangent to an edge of the mounting plate 306 and arranged to point in a leftward direction; and a second camera 302B placed on the mounting plate 306 in a side-by-side fashion to the first camera, placed at an interpupillary distance (or a different distance (e.g., less than an IPD distance)) from the first camera 302A, and arranged to point in a viewing direction tangent to the edge of the mounting plate 306 and arranged to point in a rightward direction. Similarly, an adjacent pair can be made from cameras 302C and 302D, another pair from cameras 302E and 302F, and another pair from cameras 302G and 302H. In some implementations, each camera (e.g., 302A) can be paired with a camera that is not adjacent to itself but is adjacent to its neighbor, such that each camera on the apparatus is paired with another camera on the apparatus. In some implementations, each camera can be paired with its direct neighbor (on either side).
In some implementations, one or more stereo images can be generated by the interpolation module 114. For example, in addition to the stereo cameras shown on the camera apparatus 300, additional stereo cameras can be generated as synthetic stereo image cameras. In particular, analyzing rays from captured images (e.g., ray tracing) can produce simulated frames of a 3D scene. The analysis can include tracing rays backward from a viewpoint through a particular image or image frame and into the scene. If a particular ray strikes an object in the scene, each image pixel through which it passes can be painted with a color to match the object. If the ray does not strike an object, the image pixel can be painted with a color matching the background or another feature in the scene. Using the viewpoint and ray tracing, the interpolation module 114 can generate additional scene content that appears to come from a simulated stereo camera. The additional content can include image effects, missing image content, background content, and content outside the field of view.
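The backward ray tracing described above reduces to a small worked example: trace a ray from the viewpoint, test it against a scene object, and paint the pixel with the object's color on a hit or the background color on a miss. The single-sphere scene and colors below are purely illustrative.

```python
# Sketch: backward ray tracing for one pixel against a one-sphere "scene".
# A hit paints the pixel with the object color; a miss paints background.
import numpy as np

def trace_pixel(ray_dir, sphere_center, sphere_radius,
                sphere_color=(200, 30, 30), background=(40, 40, 90)):
    d = ray_dir / np.linalg.norm(ray_dir)
    # Solve |t*d - c|^2 = r^2 for the nearest positive t (quadratic in t).
    b = -2.0 * d.dot(sphere_center)
    c = sphere_center.dot(sphere_center) - sphere_radius ** 2
    disc = b * b - 4.0 * c
    if disc < 0:
        return background                   # the ray misses every object
    t = (-b - np.sqrt(disc)) / 2.0
    return sphere_color if t > 0 else background

center = np.array([0.0, 0.0, 5.0])
print(trace_pixel(np.array([0.0, 0.0, 1.0]), center, 1.0))  # hit: sphere color
print(trace_pixel(np.array([1.0, 0.0, 0.0]), center, 1.0))  # miss: background
```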
As shown in Fig. 3, the cameras 302A-302H are disposed (e.g., placed) tangentially to the outer circumference of the camera apparatus 300 and, as such, can capture up to a 180-degree view of a scene. That is, because the cameras are placed in a tangential fashion, a fully unobstructed 180-degree field of view can be captured in each camera on the apparatus.
In some implementations, the camera apparatus 300 includes adjacent cameras. For example, the apparatus 300 can include adjacent cameras 302A and 302B. The camera 302A can be configured with an associated lens directed in a viewing direction tangent to an edge of the mounting plate 304 and arranged to point in a leftward direction. Similarly, the camera 302B can be disposed on the mounting plate 304 in a side-by-side fashion to the camera 302A, placed at approximately a human interpupillary distance from the camera 302A, and arranged to point in a viewing direction tangent to the edge of the mounting plate 304 and arranged to point in a rightward direction.
In some implementations, particular sensors on the cameras 302A-H (or on the camera apparatus 300) can be disposed tangentially to the outer circumference of the cameras 302A-H (or of the apparatus 300), rather than having the actual cameras 302A-H disposed tangentially. In this manner, the cameras 302A-H can be placed according to a user preference, and the sensors can detect which camera or cameras 302A-H can capture images based on the location of the apparatus 300, based on sweep speed, or based on camera configurations and settings.
In some implementations, neighbors can include a camera 302A and a camera 302E arranged in a back-to-back or side-by-side configuration. This arrangement can also be used to gather viewing angles to the left and right of an azimuth 308 formed by the respective camera lenses and the mounting plate 304. In some implementations, the cameras are arranged at tilted angles to the left and right of the azimuth 308 formed by the camera lenses and the mounting plate 304, respectively.
In some implementations, cameras placed on the camera apparatus 300 can be paired with any other adjacent camera during image interpolation and simply aligned around the circular apparatus in an outward-facing direction. In some implementations, the apparatus 300 includes a single camera (e.g., camera 302A). In the event that only the camera 302A is mounted to the apparatus 300, stereoscopic panoramic images can be captured by rotating the camera apparatus 300 a complete 360 degrees clockwise.
In some implementations, the diagram of Fig. 3 can correspond to one tier of a multi-tier camera apparatus. For example, in such implementations, one tier of the multi-tier camera apparatus can include the cameras 302A-302H attached to a circumferential support structure of the multi-tier camera apparatus.
Figs. 4A to 4D are diagrams illustrating various views (a perspective view, a side view, a top view, and a bottom view, respectively) of a camera apparatus 400 (which can also be referred to as a multi-tier camera apparatus), according to an implementation. As shown, the camera apparatus 400 includes a camera housing 420 having a lower circumference 430 and an upper multi-faceted cap 440. The lower circumference 430 can include cameras 405A through 405C and 405M. Although this implementation of the lower circumference 430 includes more than four cameras, only four cameras are labeled for simplicity. In this implementation, these cameras (which can also be referred to as capture devices or image sensors) can be referred to collectively as the cameras 405. The upper multi-faceted cap 440 can include cameras 415A through 415B and 415M. Although this implementation of the upper multi-faceted cap includes more than three cameras, only three cameras are labeled for simplicity. In this implementation, these cameras (which can also be referred to as capture devices or image sensors) can be referred to collectively as the cameras 415.
The cameras 405 (e.g., 405A, and so forth) are included in a first tier of cameras (or image sensors), and the cameras 415 (e.g., 415A, and so forth) are included in a second tier of cameras (or image sensors). The first tier of cameras can be referred to as the primary tier of cameras. As shown in Fig. 4B, the field of view (or its center) of each image sensor of the first tier of cameras is disposed within, or intersects, a plane PQ1, and the field of view (or its center) of each image sensor of the second tier of cameras is disposed within, or intersects, a plane PQ2. The plane PQ1 is parallel to the plane PQ2.
In this implementation, the camera apparatus 400 includes a first tier of 16 cameras 405 and a second tier of six cameras 415. In some implementations, the ratio of lower-level (or lower-tier) cameras to upper-level (or upper-tier) cameras is greater than 2:1 but less than 3:1 (e.g., 2.67:1).
As shown, in this implementation, the camera apparatus 400 includes only two tiers of cameras. The camera apparatus 400 does not include a third tier of cameras, and accordingly has cameras in only two planes. In this implementation, there is no corresponding tier of cameras, similar to the second tier of cameras, below the first tier of cameras in the camera apparatus 400. Cameras at such a lower level (or tier) can be excluded to reduce image processing, weight, expense, and so on, without sacrificing image utility.
Although not shown, in some implementations, the camera apparatus can include a third tier of cameras. In such implementations, the third tier of cameras can have the same number of cameras as the second tier of cameras (e.g., six cameras) or a different number (e.g., fewer, more). The first tier of cameras (e.g., 16 cameras) can be disposed between the second and third tiers of cameras.
Similar to other implementations described herein, the cameras 405 of the lower circumference 430 of the camera apparatus 400 face outward (e.g., away from the center of the camera apparatus 400). In this implementation, each camera 405 is oriented such that the axis about which the field of view of the lens system of the camera 405 is centered is perpendicular to a tangent of a circular shape (e.g., a circle, an approximate circle) that is defined by the lower circumference 430 of the camera housing 420, and that is in turn defined by the cameras 405. Such an example is illustrated at least in Fig. 5, with an axis 510 and a tangent 520 associated with a camera 405.
In this implementation, each camera is configured such that the axis 510 (shown in Fig. 5) can extend through a lens system (e.g., a center of a camera lens or capture sensor) on one side of the camera apparatus 400, through a center 530 of the camera apparatus 400, and through another lens system on the other side of the camera apparatus 400. The cameras 405 (or image sensors) are arranged in a circular shape around the lower circumference 430 of the camera housing 420 such that each camera 405 has an outward projection (or projection center) that can be orthogonal to the lower circumference 430, and in turn orthogonal to the circular shape defined by the camera apparatus 400. In other words, the cameras 405 can have projections facing away from an interior portion of the camera apparatus 400.
In some implementations, the lens system of each camera 405 is offset from the center of the body of each camera 405. This causes each camera to be disposed in the camera housing 420 at an angular offset relative to the other cameras 405, so that the field of view of each camera can be oriented perpendicularly relative to the camera apparatus 400 (e.g., perpendicular to a tangent of the circle defined by the lower circumference 430).
Although not shown, in some implementations, an odd number of cameras can be included in the camera housing as part of the lower circumference 430. In such implementations, the lens system of a camera can have a field of view centered about an axis perpendicular to a tangent of the camera apparatus (or of the circle defined by the camera apparatus), without that axis passing through the lens systems of multiple cameras and the center of the camera apparatus.
In some implementations, a minimum or maximum geometry of the lower circumference 430 can be defined based on one or more optical properties (optics) (e.g., field of view, pixel resolution) of the cameras 405. For example, a minimum diameter and/or a maximum diameter of the lower circumference 430 can be defined based on the field of view of at least one of the cameras 405. In some implementations, a relatively large (or wide) field of view can result in a relatively small lower circumference 430. As shown in Figs. 4A to 4D, for example, each of the cameras 405 is disposed in a portrait mode (e.g., a 4:3 aspect-ratio mode) such that the horizontal dimension of an image captured by the camera 405 is less than the vertical dimension of the image captured by the camera 405. In some implementations, each of the cameras 405 can be disposed in any aspect-ratio orientation (e.g., a 16:9 or 9:16 aspect ratio, a 3:4 aspect ratio).
In some implementations, the diameter (or radius (RA) (shown in Fig. 4C)) of the lower circumference 430 is defined such that the fields of view of at least three consecutive cameras 405 overlap (e.g., intersect, intersect in at least a point, an area, and/or a volume in space). The sensors in the cameras 405 are disposed in a plane (which is substantially parallel to a plane through the lower circumference 430). In some implementations, the entire field of view (e.g., or substantially the entire field of view) of at least two adjacent cameras 405 can overlap with the field of view of a third camera of the cameras 405 (one adjacent to at least one of the two adjacent cameras). In some implementations, the fields of view of any set of three consecutive cameras 405 can overlap such that any point around the lower circumference 430 (e.g., any point in a plane through the sensors of the cameras 405) can be captured by at least three of the cameras 405, as checked in the sketch below. The overlap of three consecutive cameras 405 can be important for being able to capture 360° video with appropriate depth, focus, and the like.
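That three-camera condition can be sanity-checked in the far field, where a camera sees a direction if the direction lies within half the camera's horizontal field of view of its axis. The sketch below ignores the rig radius (nearby points behave differently), so it is an approximation rather than the patent's exact geometry; under it, the worst-case direction lies midway between two camera axes, so 16 cameras spaced 22.5 degrees apart need a field of view of at least 3 × 22.5 = 67.5 degrees for triple coverage.

```python
# Far-field approximation: count how many of the N evenly spaced ring cameras
# cover a given direction, i.e., have that direction within fov/2 of their
# optical axis. Ignores the rig radius, so it is only a sanity check.
def cameras_covering_direction(direction_deg, num_cameras, fov_deg):
    spacing = 360.0 / num_cameras
    count = 0
    for i in range(num_cameras):
        offset = abs((direction_deg - i * spacing + 180.0) % 360.0 - 180.0)
        if offset <= fov_deg / 2.0:
            count += 1
    return count

# Worst case for 16 cameras is a direction midway between two axes (11.25
# degrees): triple coverage requires fov >= 3 * 22.5 = 67.5 degrees.
print(cameras_covering_direction(11.25, num_cameras=16, fov_deg=66.0))  # 2
print(cameras_covering_direction(11.25, num_cameras=16, fov_deg=68.0))  # 4
print(cameras_covering_direction(0.0, num_cameras=16, fov_deg=68.0))    # 3
```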
In some embodiments, the cameras 415 of the upper multi-panel lid 440 of the camera apparatus 400 face outward (for example, away from the center of the camera apparatus 400). According to some embodiments, each of the cameras 415 is oriented so that the axis along the field of view of its lens system is not parallel to the axes of the fields of view of the lens systems of the cameras 405. For example, as shown in Fig. 6, for those cameras 415 arranged on the camera housing 420 directly above a camera 405 (for example, camera 415A and camera 405A), the axis of the field of view 610 of the camera 415 forms an acute angle with the axis of the field of view 510 of the camera 405. Furthermore, for those cameras 415 arranged above a camera 405 on the opposite side of the camera housing 420 (for example, camera 415A and camera 405B), the axis of the field of view 610 of the camera 415 forms an obtuse angle with the axis of the field of view 510 of the camera 405.
In some embodiments, each camera is configured such that an axis 610 (shown in Fig. 6) can extend through a lens system (for example, the center of a camera lens or capture sensor) on one side of the camera apparatus 400 and through the center 630 of the lower circumference. The cameras 415 (or image sensors) are laid out in a circular shape around the upper multi-panel lid 440 of the camera housing 420, so that each camera 415 has an outward projection (or projection center) that is not parallel to the normal of the lower circumference 430.
In some embodiments, the cameras 415 are arranged on respective faces 445 of the multi-panel lid 440. For example, as shown in Figs. 4A-4D, camera 415A is arranged on face 445A, camera 415B is arranged on face 445B, and camera 415M is arranged on face 445M. The faces 445 of the upper multi-panel lid 440 can be oriented in planes angled differently from the plane of the lower circumference 430. In some embodiments, while the cameras 405 of the lower circumference 430 can be directed outward from the center of the camera housing 420, the faces 445 can direct the cameras 415 upward and outward from the center of the camera housing 420, as shown in Figs. 4A-4D. In other words, the cameras 415 can have projections facing away from the interior of the camera apparatus 400 and upward. Although not shown, in some embodiments an odd number of cameras 415 can be included in the camera housing 420 as part of the upper multi-panel lid 440.
In some embodiments, a minimum or maximum geometry of the upper multi-panel lid 440 can be defined based on one or more optical properties (for example, field of view, pixel resolution) of the cameras 415. For example, a minimum diameter and/or maximum diameter of the upper multi-panel lid 440 can be defined based on the field of view of at least one camera 415. In some embodiments, a relatively large (or wide) field of view of at least one of the cameras 415 (for example, of the sensor of at least one camera 415) can result in a relatively small upper multi-panel lid 440. As shown in Figs. 4A to 4D, for example, each of the cameras 415 is arranged in a landscape mode (for example, a 3:4 aspect-ratio mode), so that the horizontal dimension of an image captured by a camera 415 is greater than the vertical dimension of that image. In some embodiments, each of the cameras 415 can be laid out with any aspect ratio or orientation (for example, a 16:9 or 9:16 aspect ratio, a 4:3 aspect ratio).
In some embodiments, the diameter (or radius) of the multi-panel lid 440 is defined so that the fields of view of at least three adjacent cameras 415 overlap. In some embodiments, the entire field of view (for example, or substantially the entire field of view) of at least two adjacent cameras 415 can overlap with the field of view of a third camera 415 (one adjacent to at least one of the two adjacent cameras). In some embodiments, the fields of view of any set of three adjacent cameras 415 can overlap, so that any point around the upper multi-panel lid 440 (for example, any point in the plane through the sensors of the cameras 415) can be captured by at least three cameras 415.
According to some embodiments, the faces 445 can be angled so that the cameras 415 capture images outside the fields of view of the cameras 405. For example, because the cameras 405 can be arranged along the lower circumference 430 of the camera housing 420 so that each of the first plurality of cameras has an outward projection orthogonal to the lower circumference 430, the cameras 405 may be unable to capture images directly above the camera housing 420. Accordingly, the faces 445 can be angled so that the cameras 415 capture images directly above the camera housing 420.
According to some embodiments, the camera apparatus 400 may include a rod housing 450. The rod housing 450 may include one or more airflow chambers configured to carry heat away from the cameras 405 and 415 toward the bottom of the camera apparatus 400. According to some embodiments, a fan 460 can be located at the bottom of the rod housing to promote air flow through the camera apparatus 400 and remove heat from the camera housing 420.
In some embodiments, the camera apparatus 400 may include a microphone (not shown) for recording audio associated with the images (and video) captured with the cameras 405 and 415. In some embodiments, the camera apparatus 400 may include a microphone mount to which an external microphone can be attached and connected to the camera apparatus 400.
In some embodiments, the camera apparatus 400 may include a mechanism for mounting to another device such as a tripod, and the mounting mechanism can be attached to the rod housing 450. In some embodiments, one or more openings can be arranged (for example, on the bottom side of the camera apparatus 400) to allow the camera apparatus 400 to be mounted to a tripod. In some embodiments, the coupling mechanism for mounting the camera apparatus 400 to another device such as a tripod can be arranged on the side opposite the position of the microphone mount. In some embodiments, the coupling mechanism for mounting the camera apparatus 400 to another device can be on the same side as the position of the microphone mount.
In some embodiments, the camera apparatus 400 can be removably coupled to another device such as a carrier (for example, an aerial carrier such as a quadcopter). In some embodiments, the camera apparatus 400 can be made of a material light enough that a carrier such as a quadcopter can be used to move the camera apparatus 400 and the associated cameras 405, 415.
In some embodiments, the camera apparatus described in this disclosure may include any number of cameras mounted on a circular housing. In some embodiments, cameras can be mounted equidistantly, with an adjacent camera on each of four directions outward from the center of the circular apparatus. In this example, the cameras configured as stereoscopic neighbors can, for instance, be aimed outward along the circumference and arranged at zero degrees, 90 degrees, 180 degrees, and 270 degrees, so that each stereoscopic pair of neighbors captures a separate quadrant of the 360-degree field of view. In general, the selectable field of view of the cameras determines the amount of overlap between the camera views of stereoscopic neighbors, as well as the size of any blind spots between cameras and between adjacent quadrants. One example camera apparatus can employ one or more stereoscopic camera neighbors configured to capture fields of about 120 degrees up to about 180 degrees.
In some embodiments, the camera housing of the multilayer camera apparatus described in this disclosure can be configured with a diameter of about 5 centimeters to about 8 centimeters (for example, diameter 206 in Fig. 2) to simulate the human interpupillary distance and thereby capture, for example, the content a user would see if she rotated her head or body through a quarter circle, a half circle, a full circle, or another portion of a circle. In some embodiments, the diameter may refer to the distance across the apparatus or camera housing from camera lens to camera lens. In some embodiments, the diameter may refer to the distance across the apparatus from one camera sensor to another camera sensor.
In some embodiments, the camera apparatus is scaled up from about 8 centimeters to about 25 centimeters, for example to accommodate additional camera fixtures. In some embodiments, fewer cameras can be used on an apparatus of smaller diameter. In such an example, the systems described in this disclosure can ascertain or deduce views between the cameras on the apparatus and interleave such views with the actually captured views.
In some embodiments, the camera apparatus described in this disclosure can be used to capture panoramic images, for example by using a camera with a rotating lens, or a rotating camera, that captures an entire panorama in a single exposure. The cameras and camera apparatus described above can be used together with the methods described in this disclosure. In particular, a method described with respect to one camera apparatus can be performed using any of the other camera apparatus described herein. In some embodiments, the camera apparatus and the content subsequently captured can be combined with other content, such as virtual content, rendered computer graphics (CG) content, and/or other acquired or generated images.
In general, images captured with at least three cameras (for example, 405A, 405B, 405C) on the camera apparatus 400 can be used to calculate depth measures for a particular scene. The depth measures can be used to convert portions of the scene (or images from the scene) into 3D stereoscopic content. For example, the interpolation module 114 can use the depth measures to generate 3D stereo content that can be stitched into 360-degree stereoscopic video images.
In some embodiments, the camera apparatus 400 can capture all of the rays needed for the omni-directional stereo (ODS) projection shown in, for example, Fig. 19, while maximizing image quality and minimizing image distortion.
The cameras (of each layer) lie along a circular shape of radius R that is greater than the radius r (not shown) of the ODS viewing circle. An ODS ray passing through a camera travels at an angle of sin⁻¹(r/R) to the normal of the circle on which that camera lies. Two different camera arrangements are possible: a tangential layout (not shown), and a radial layout, as shown, for example, in each layer of the camera apparatus 400 of Figs. 4A to 4D.
The tangential layout dedicates half of the cameras to capturing rays for the left image and the other half to capturing rays for the right image, and aligns each camera so that the ODS rays passing through that camera travel along the camera's optical axis. The radial layout of the camera apparatus 400, on the other hand, uses all of the cameras to collect rays for both the left image and the right image, and each camera therefore points directly outward.
In some embodiments, an advantage of the radial design of the camera apparatus 400 is that image interpolation occurs between adjacent cameras, whereas in the tangential design image interpolation must occur between every other camera, which doubles the baseline of the view interpolation problem and makes it more challenging. In some embodiments, each camera of the radially designed camera apparatus 400 must capture rays for both the left image and the right image, which increases the horizontal field of view required of each camera by 2sin⁻¹(r/R). In practice, this means that the radial design of the camera apparatus 400 is better for larger apparatus radii and the tangential design is better for smaller radii.
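The two quantities above follow directly from the geometry of the two circles. The following minimal sketch (in Python; the numeric radii are illustrative assumptions, not values given in this disclosure) computes the ODS ray angle and the extra horizontal field of view the radial layout requires:

    import math

    def ods_ray_angle(r: float, R: float) -> float:
        """Angle (radians) between an ODS ray and the normal of the camera
        circle, for an ODS viewing-circle radius r and a camera-circle
        radius R, with r < R."""
        return math.asin(r / R)

    def radial_fov_increase(r: float, R: float) -> float:
        """Extra horizontal field of view (radians) each camera in the
        radial layout needs, since it must capture both left- and
        right-eye rays: 2 * asin(r / R)."""
        return 2.0 * ods_ray_angle(r, R)

    # Illustrative values: a 3.25 cm viewing-circle radius (about half a
    # typical IPD) on a 15 cm camera-circle radius costs each camera
    # roughly 25 degrees of additional horizontal field of view.
    print(math.degrees(radial_fov_increase(0.0325, 0.15)))  # ~25.0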
For example, the cameras included in the camera apparatus 400 may be more than 3 cm wide and may therefore limit how small the camera apparatus 400 can be made. Hence, in some embodiments, the radial design can be more suitable, and the further discussion is based on that layout. In some embodiments, the geometry of the camera apparatus 400 can be described with three parameters (see Fig. 4C): the radius R of the apparatus, the horizontal field of view γ of the cameras, and the number n of cameras. The camera apparatus 400 described herein addresses at least the following considerations:
Minimize the apparatus radius R, thereby reducing vertical distortion.
Minimize the distance between adjacent cameras, thereby reducing the baseline of the view interpolation.
Provide each camera with enough horizontal field of view that content at least a certain distance d from the apparatus can be stitched.
Maximize the vertical field of view of each camera, which yields a large vertical field of view for the output video.
Maximize overall image quality, which generally requires using large cameras.
Between adjacent cameras in a ring (or layer) of the camera apparatus 400, views can be synthesized along the straight line between the two cameras, and these synthesized views can only include points observed by both cameras. Fig. 4C shows the amount that must be observed by a camera in order to allow all points at a distance of at least d from the center of the camera apparatus 400 to be stitched.
Given a ring of radius R containing n cameras, the minimum horizontal field of view required of each camera can be derived as follows:

b² = d² + R² − 2dR cos β  (2)
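Equation (1) is not reproduced in this text, but equation (2) is the law of cosines for the triangle formed by the ring center, a camera, and a scene point at distance d. The sketch below turns it into a field-of-view bound; the worst-case angle β = π/n (a point angularly midway between two adjacent cameras) and the law-of-sines step are assumptions consistent with the geometry of Fig. 4C, not formulas quoted from this text:

    import math

    def min_horizontal_fov_deg(n: int, R: float, d: float) -> float:
        """Sketch: minimum horizontal field of view (degrees) per camera so
        that a point at distance d from the center of an n-camera ring of
        radius R, lying angularly midway between two adjacent cameras, is
        still seen by both of them."""
        beta = math.pi / n  # worst-case center angle (assumed)
        b = math.sqrt(d * d + R * R - 2 * d * R * math.cos(beta))  # equation (2)
        # Angle at the camera between its outward radial axis and the
        # direction to the point (law of sines in the same triangle).
        half_fov = beta + math.asin(min(1.0, R * math.sin(beta) / b))
        return math.degrees(2.0 * half_fov)

    # Illustrative values: 16 cameras on a 15 cm-radius ring, stitching
    # content from 0.5 m outward.
    print(min_horizontal_fov_deg(16, 0.15, 0.5))  # ~32 degrees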
In some embodiments, the panoramic images generated by each layer of a camera apparatus (for example, camera apparatus 400) can have at least 10 degrees of overlap at the desired minimum stitching distance (for example, about 0.5 m). More details are disclosed in: Anderson et al., "Jump: Virtual Reality Video", ACM Transactions on Graphics (TOG), Proceedings of ACM SIGGRAPH Asia 2016, Vol. 35, Issue 6, Art. No. 198, November 2016, the entire contents of which are incorporated herein by reference.
Fig. 7 is a diagram illustrating an example VR device (VR headset) 702. A user can put on the VR headset 702 by placing the headset 702 over her eyes, similar to placing goggles, sunglasses, and the like. In some embodiments, referring to Fig. 1, the VR headset 702 can dock with/connect to multiple monitors of the computing devices 106, 108, or 112 using one or more high-speed wired and/or wireless communication protocols (for example, Wi-Fi, Bluetooth, Bluetooth LE, USB, etc.) or by using an HDMI interface. The connection can supply virtual content to the VR headset 702 for display to the user on a screen (not shown) included in the VR headset 702. In some embodiments, the VR headset 702 can be a projection-enabled device. In these embodiments, the user can choose to provide or project content to the VR headset 702.
In addition, the VR headset 702 can dock with/connect to the computing device 104 using one or more high-speed wired and/or wireless communication interfaces and protocols (for example, Wi-Fi, Bluetooth, Bluetooth LE, Universal Serial Bus (USB), etc.). The computing device (Fig. 1) can recognize the interface to the VR headset 702 and, in response, can execute a VR application that places the user and the computing device in a computer-generated 3D environment (a VR space) that includes virtual content.
In some embodiments, the VR headset 702 may include a removable computing device that can execute a VR application. The removable computing device can be similar to computing device 108 or 112. The removable computing device can be housed within a housing or frame of a VR headset (for example, the VR headset 702), which can then be worn by a user of the VR headset 702. In these embodiments, the removable computing device can provide the display or screen that the user views when interacting with the computer-generated 3D environment (VR space). As described above, the mobile computing device 104 can connect to the VR headset 702 using a wired or wireless interface protocol. The mobile computing device 104 can be a controller in the VR space, can appear as an object in the VR space, can provide input to the VR space, and can receive feedback/output from the VR space.
In some embodiments, the mobile computing device 108 can execute a VR application and can provide data to the VR headset 702 to create the VR space. In some embodiments, the content of the VR space shown to the user on the screen included in the VR headset 702 can also be displayed on a display device included in the mobile computing device 108. This allows other people to see what the user may be interacting with in the VR space.
The VR headset 702 can provide information and data indicating the position and orientation of the mobile computing device 108. The VR application can receive and use the position and orientation data as an indication of the user's interaction within the VR space.
Fig. 8 is an example graph 800 illustrating the number of cameras (and neighbors) as a function of the camera field of view for one layer of a multilayer camera apparatus. The graph 800 represents an example plot that can be used to determine, for a predefined field of view, the number of cameras that can be arranged on one layer of a multilayer camera apparatus to generate a stereoscopic panorama. The graph 800 can be used to calculate camera settings and camera placements that ensure a particular stereoscopic panorama result. One example setting can include selecting a number of cameras to attach to a particular camera apparatus. Another setting can include determining the algorithms that will be used during the capture, pre-processing, or post-processing steps. For example, for optical flow interpolation techniques, stitching a full 360-degree panorama can dictate that every optical ray direction should be seen by at least two cameras. This can limit the minimum number of cameras to be used to cover the full 360 degrees, as a function of the camera field of view theta [θ]. Optical flow interpolation techniques can be performed and configured per camera neighbor (or pair) or per individual camera.
As shown in Fig. 8, a curve illustrating a function 802 is depicted. The function 802 represents the number of cameras [n] 804 as a function of the camera field of view [θ] 806. In this example, a camera field of view of about 95 degrees is shown by line 808. The intersection 810 of line 808 and function 802 shows that using sixteen (16) cameras, each with a 95-degree field of view, would provide the desired panoramic result. In such an example, the camera apparatus can be configured by interleaving the adjacent cameras of each set of adjacent cameras, so as to use any space that may arise when placing adjacent cameras on the apparatus.
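The exact function 802 is not given in this text, but a simple coverage argument yields a closed form that reproduces the plotted intersection: each camera covers θ/360 of the circle, and requiring each ray direction to be covered twice for each of the left-eye and right-eye ray sets gives n ≥ 4·360/θ. The sketch below (the factor of 4 is an assumption inferred from the 16-camera reading at 95 degrees) computes this:

    import math

    def min_cameras(theta_deg: float, k: int = 4) -> int:
        """Minimum number of cameras n so that every viewing direction
        around the ring is covered k times by fields of view of theta_deg
        degrees: n >= k * 360 / theta. k = 4 (two-fold coverage for each of
        the left- and right-eye ray sets) is an assumption that matches
        intersection 810 (16 cameras at 95 degrees)."""
        return math.ceil(k * 360.0 / theta_deg)

    print(min_cameras(95.0))   # 16, matching intersection 810
    print(min_cameras(120.0))  # 12, consistent with the 12-16 camera range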
In addition to interleaving adjacent cameras, the optical flow requirement can dictate that the system 100 compute optical flow between cameras of the same type. That is, optical flow can be computed for a first camera and then for a second camera, rather than computing both simultaneously. In general, the flow at a pixel can be computed as an orientation (for example, direction and angle) and a magnitude (for example, speed).
Fig. 9 is an example graph 900 illustrating the interpolated field of view [θ1] 902 as a function of the camera field of view [θ] 904. The graph 900 can be used to determine which portion of a camera's field of view is shared with its left or right neighbor. Here, at a camera field of view of about 95 degrees (shown by line 906), the interpolated field of view is shown to be about 48 degrees, as indicated by intersection 908.
Given that two consecutive cameras do not typically capture images of exactly the same field of view, the field of view of an interpolated camera will be represented by the intersection of the fields of view of the camera neighbors. The interpolated field of view [θ1] can be a function of the camera field of view [θ] and of the angle between camera neighbors. If the minimum number of cameras has been selected for a given camera field of view (using the method shown in Fig. 8), [θ1] can be calculated as a function of [θ], as shown in Fig. 9.
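One plausible closed form, treating the interpolated view as the intersection of the fields of view of two same-type cameras separated by two ring positions (consistent with the same-type flow computation described above), is sketched below; it yields 50 degrees at [θ] = 95 and n = 16, close to the roughly 48 degrees read off graph 900, so the exact function used for the figure may differ slightly:

    def interpolated_fov(theta_deg: float, n: int, spacing: int = 2) -> float:
        """Sketch: field of view theta_1 shared by two cameras `spacing`
        positions apart on an n-camera ring, i.e. the intersection of two
        theta_deg fields of view whose axes differ by spacing * 360 / n."""
        return theta_deg - spacing * (360.0 / n)

    print(interpolated_fov(95.0, 16))  # 50.0, vs. ~48 at intersection 908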
Fig. 10 is an example graph 1000 illustrating selection of a configuration for a camera apparatus. Specifically, the graph 1000 can be used to determine how large a particular camera apparatus can be designed. The graph 1000 depicts the stitching ratio [d/D] 1002 as a function of the apparatus diameter [D, in centimeters]. To produce a comfortable virtual reality panorama viewing experience, the omnidirectional stereo stitching diameter [d] is chosen in the examples of this disclosure to be about 5 centimeters to about 6.5 centimeters, which is a typical human IPD. In some embodiments, omnidirectional stereo stitching can be performed using a capture diameter [D] roughly the same as the stitching diameter [d]. That is, for example, maintaining a stitching ratio of about "1" can provide easier stitching in the post-processing of omnidirectional stereo images. This particular configuration can minimize distortion, because the optical rays used for stitching are identical to the rays the actual cameras captured. Obtaining a stitching ratio of "1" can be difficult when the number of cameras selected is high (for example, 12-18 cameras per apparatus).
To alleviate the problem of too many cameras on an apparatus, the apparatus can be designed with a larger size to accommodate additional cameras while allowing the stitching ratio to remain the same (or substantially the same). To ensure that the stitching algorithm samples content in the images captured near the center of the apparatus lenses during capture, the stitching ratio can be fixed in order to determine the angle [α] of the cameras relative to the apparatus. For example, Fig. 10 shows that sampling near the optical center improves image quality and minimizes geometric distortion. Specifically, a smaller angle [α] can help avoid occlusion by the apparatus (for example, the cameras imaging components of the apparatus itself).
As shown in Fig. 10, at 1006, a stitching ratio [d/D] of 0.75 corresponds to an apparatus diameter of about 6.5 centimeters (i.e., the typical human IPD). Reducing the stitching ratio [d/D] to about 0.45 allows the apparatus diameter to be increased to about 15 centimeters (shown at 1008), which can allow additional cameras to be added to the apparatus. The angle of the cameras relative to the camera apparatus can be adjusted based on the selected stitching ratio. For example, adjusting the camera angle to about 30 degrees indicates that the apparatus diameter can be as large as about 12.5 centimeters. Similarly, adjusting the camera angle to about 25 degrees indicates that the apparatus diameter can be as large as 15 centimeters, while still maintaining appropriate parallax and visual effects when rendered for a user.
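These pairings follow from the definition of the stitching ratio, D = d / (d/D). A quick check (assuming d at either end of the 5 to 6.5 centimeter IPD range given above) approximately reproduces points 1006 and 1008:

    def apparatus_diameter_cm(stitch_diameter_cm: float, stitch_ratio: float) -> float:
        """Apparatus (capture) diameter D implied by a stitching diameter d
        and a stitching ratio d/D."""
        return stitch_diameter_cm / stitch_ratio

    print(apparatus_diameter_cm(5.0, 0.75))   # ~6.7 cm, cf. point 1006
    print(apparatus_diameter_cm(6.5, 0.45))   # ~14.4 cm, cf. point 1008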
In general, given an apparatus diameter [D], the optimal camera angle [α] can be calculated. From [α], the maximum field of view [Θu] can be calculated. The maximum field of view [Θu] generally corresponds to the field of view for which the apparatus does not partially occlude the cameras. The maximum field of view can limit how many cameras the camera apparatus can hold while still providing an unoccluded view.
Fig. 11 is a graph illustrating an example relationship 1100 that can be used to determine the minimum number of cameras in one layer of a multilayer camera apparatus according to a predefined apparatus diameter. Here, the minimum number of cameras [nmin] 1102 in a layer is shown for a given apparatus diameter [D] 1104. The apparatus diameter [D] 1104 limits the maximum unoccluded field of view, which in turn limits the minimum number of cameras. As shown at 1106, for an apparatus diameter of about 10 centimeters, a minimum of sixteen (16) cameras can be used in one layer of the camera apparatus to provide an unoccluded view. Modifying the apparatus diameter can allow the number of cameras placed on the apparatus to be increased or decreased. In one example, an apparatus of about 8 to about 25 centimeters can accommodate about 12 to about 16 cameras.
Since other methods can be used to adjust the field of view and the image capture settings, these calculations can be combined with those other methods to further refine the camera apparatus size. For example, optical flow algorithms can be used to change (for example, reduce) the number of cameras commonly used to stitch an omnidirectional stereo panorama. In some embodiments, for example, the graphs described in this disclosure, or generated by the systems and methods described in this disclosure, can be used in combination to generate virtual content for rendering in an HMD device.
Figs. 12A-B represent line drawing examples of distortion that can occur during image capture. Specifically, the distortion shown here corresponds to effects that occur when capturing a stereoscopic panorama. In general, distortion can be more severe when the scene being captured is closer to the capturing camera. Fig. 12A represents a two-meter-by-two-meter plane in a scene arranged one meter outward from the camera center. Fig. 12B is the same plane as in Fig. 12A, but in this figure the plane is 25 centimeters from the camera. Both figures use a 6.5-centimeter capture diameter. Fig. 12A shows slight stretching near the center at 1202, while Fig. 12B shows a more expanded center 1204. Many techniques can be used to correct this distortion. The following paragraphs describe approximation methods and systems (for example, camera apparatus/capture apparatus) that analyze projections (for example, spherical and planar projections) of captured image content in order to correct the distortion.
Figs. 13A-B depict examples of rays collected during capture by cameras located on one layer of a multilayer camera apparatus. Fig. 13A shows that, given a set of captured images, perspective images can be generated for both the left eye and the right eye from anywhere on the capture path 1302. Here, a ray for the left eye is shown as ray 1304a, and a ray for the right eye is shown at 1306a. In some embodiments, each depicted ray may not be captured, due to camera settings, failures, or simply an apparatus setup insufficient for the scene. Because of this, some of the rays 1304a and 1306a can be approximated (for example, interpolated based on other rays). For example, if the scene is infinitely far away, one measurable feature of the scene includes the ray direction from the origin to the destination.
In some embodiments, the ray origin may not be collectable. The systems of this disclosure can therefore approximate the left eye and/or the right eye to determine the origin position of a ray. Fig. 13B shows approximated ray directions 1306b to 1306f for the right eye. In this example, rather than originating from the same point, each ray originates from a different point on the circle 1302. The rays 1306b to 1306f are shown angled tangentially to the capture circle 1302 and arranged at particular regions around the circumference of the capture circle 1302. In addition, the positions of two different image sensors associated with the camera apparatus, image sensor 13-1 and image sensor 13-2 (associated with, or included in, cameras), are shown on the camera apparatus circle 1303. As shown in Fig. 13B, the camera apparatus circle 1303 is larger than the capture circle 1302.
Multiple rays (and the colors and intensities of the images associated with each ray) can be approximated in this way in different directions outward from the circle. In this manner, an entire 360-degree panoramic view including many images can be provided for both the left-eye and right-eye views. This technique can resolve distortion in distant objects, but in some instances can still deform when imaging nearby objects. For simplicity, the approximated left-eye ray directions are not depicted. In this example embodiment, only a small number of rays 1306b to 1306f are illustrated. However, thousands of such rays (and images associated with those rays) can be defined. Accordingly, many new images associated with each ray can be defined (for example, interpolated).
As shown in Fig. 13B, ray 1306b is projected between image sensor 13-1 and image sensor 13-2, which can be arranged on one layer of the multilayer camera apparatus. Image sensor 13-1 and image sensor 13-2 are adjacent. The ray can be a distance G1 from image sensor 13-1 (for example, from the projection center of image sensor 13-1) and a distance G2 from image sensor 13-2 (for example, from the projection center of image sensor 13-2). The distances G1 and G2 can be based on the position at which ray 1306b intersects the camera apparatus circle 1303. The distance G1 can be different from (for example, greater than or less than) the distance G2.
To define an image associated with ray 1306b (for example, an interpolated image, a new image), a first image (not shown) captured by image sensor 13-1 is combined (for example, stitched together) with a second image captured by image sensor 13-2. In some embodiments, optical flow techniques can be used to combine the first image and the second image. For example, pixels from the first image corresponding to pixels from the second image can be identified.
To define an image associated with, for example, ray 1306b, the corresponding pixels are offset based on the distances G1 and G2. It can be assumed that the resolution, aspect ratio, elevation, and so forth of image sensors 13-1 and 13-2 are the same for purposes of defining the image (for example, the new image) for ray 1306b. In some embodiments, the resolution, aspect ratio, elevation, and so forth can be different. In such embodiments, however, the interpolation would need to be modified to accommodate those differences.
As a specific example, a first pixel associated with an object in the first image can be identified as corresponding to a second pixel associated with the object in the second image. Because the first image is captured from the perspective of image sensor 13-1 (located at a first position around the camera apparatus circle 1303) and the second image is captured from the perspective of image sensor 13-2 (located at a second position around the camera apparatus circle 1303), the object will be shifted in position (for example, X-Y coordinate position) in the first image compared with its position (X-Y coordinate position) in the second image. Likewise, the first pixel associated with the object will be shifted in position (for example, X-Y coordinate position) relative to the second pixel, which is also associated with the object. To generate the new image associated with ray 1306b, a new pixel corresponding to the first pixel and the second pixel (and the object) can be defined based on the ratio of the distances G1 and G2. Specifically, the new pixel can be defined at a position offset from the first pixel by a factor based on distance G1 (scaled by the distance between the positions of the first pixel and the second pixel), and offset from the second pixel by a factor based on distance G2 (scaled by the same distance).
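A minimal sketch of this ratio-based offset follows, realized as a linear blend of the two corresponding pixel positions weighted by the ray's distances G1 and G2 from the two sensors (the function name and the example coordinates are illustrative, not from this disclosure):

    import numpy as np

    def interpolate_pixel_position(p1, p2, g1: float, g2: float) -> np.ndarray:
        """Position of the new pixel for an interpolated ray passing at
        distance g1 from the projection center of image sensor 13-1 and g2
        from that of image sensor 13-2. p1 and p2 are the (x, y) positions
        of the corresponding pixels in the first and second images; the
        closer the ray is to a sensor, the more heavily that sensor's
        pixel position is weighted."""
        p1, p2 = np.asarray(p1, dtype=float), np.asarray(p2, dtype=float)
        w1 = g2 / (g1 + g2)  # small g1 (near sensor 13-1) -> large weight on p1
        return w1 * p1 + (1.0 - w1) * p2

    # An object seen at x=100 by sensor 13-1 and at x=112 by sensor 13-2,
    # for a ray one third of the way from sensor 13-1 to sensor 13-2:
    print(interpolate_pixel_position((100, 40), (112, 40), g1=1.0, g2=2.0))  # [104. 40.]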
Consistent with the embodiment described above, parallax can be defined for the new image associated with ray 1306b that is consistent with the first image and the second image. Specifically, objects relatively close to the camera apparatus can move by a larger amount than objects relatively far from the camera apparatus. This parallax can be maintained between the pixel offsets (for example, from the first pixel and the second pixel) based on the distances G1 and G2 of ray 1306b.
This process can be repeated for all of the rays around the capture circle 1302 (for example, rays 1306b to 1306f). The new images associated with each ray around the capture circle 1302 can be defined based on the distance between each ray and the image sensors (for example, adjacent image sensors, image sensors 13-1 and 13-2) around the camera apparatus circle 1303.
As shown in Fig. 13B, the diameter of the camera apparatus circle 1303 is larger than the diameter of the capture circle 1302. In some embodiments, the diameter of the camera apparatus circle 1303 can be between 1.5 and 8 times larger than the diameter of the capture circle 1302. As a specific example, the diameter of the capture circle can be 6 centimeters, and the diameter of the camera apparatus circle 1303 (for example, of the camera mounting ring 412 shown in Fig. 4A) can be 30 centimeters.
Figs. 14A-B illustrate the use of an approximated planar perspective projection, as described in connection with Figs. 13A-B. Fig. 14A shows a panoramic scene with distorted lines, before the approximated planar perspective rays and projection are applied. As shown, a curtain rod 1402a, a window frame 1404a, and a door 1406a are depicted as objects with curved features, when in fact they are objects with linear features. Objects with linear features include objects without curved surfaces (such as a flat index card, a rectangular box, a rectangular frame, and so forth). In this example, objects 1402a, 1404a, and 1406a are shown curved because they are distorted in the image. Fig. 14B shows the corrected image with an approximated planar perspective projection applied at a 90-degree horizontal field of view. Here, the curtain rod 1402a, the window frame 1404a, and the door 1406a are shown as corrected straight objects 1402b, 1404b, and 1406b, respectively.
Figs. 15A-C illustrate examples of an approximated planar perspective projection applied to the image plane. Fig. 15A shows a planar perspective projection from a panoramic capture using the techniques described in this disclosure. The depicted plan view 1500 can represent an overlay of the plane shown in the image of Fig. 14B. Specifically, Fig. 15A represents Fig. 14A corrected, with the curves projected into straight lines. Here, the plane 1500 of the panorama is shown at a distance of one meter (with a 90-degree horizontal field of view). The lines 1502, 1504, 1506, and 1508 are straight, whereas before (corresponding to Fig. 14A) the same center lines were curved and distorted.
Other distortions can occur depending on the selected projection scheme. For example, Figs. 15B and 15C represent planes (1510 and 1520) generated using planar perspective projections from a panoramic capture using the techniques of this disclosure. The panorama was captured at a distance of 25 centimeters (with a 90-degree horizontal field of view). Fig. 15B shows a left-eye capture 1510, and Fig. 15C shows a right-eye capture 1520. Here, the bottoms of the planes (1512, 1522) do not project to straight lines, and vertical parallax is introduced. This variation can occur when projecting using a planar perspective.
Figs. 16A-B illustrate examples of introducing vertical parallax. Fig. 16A depicts a straight line 1602a captured according to a typical omnidirectional stereo panorama technique. In the depicted example, each ray 1604a-1618a originates from a different point on the circle 1622.
Fig. 16B depicts the same straight line when viewed using a perspective approximation technique. As shown, the straight line 1602a is shown deformed as line 1602b. The rays 1604b-1618b originate from a single point on the circle 1622. The deformation can have the effect of pushing the left half of line 1602b toward the viewer and pushing the right half of the line away from the viewer. For the left eye, the opposite can occur: the left half of the line appears farther away and the right half of the line appears closer. The deformed line curves between two asymptotes whose separation equals the diameter 1624 of the panorama rendering circle 1622. Since the deformation is shown at the same size as the panorama capture radius, it may only be noticeable for nearby objects. This form of deformation can lead to vertical parallax for a user viewing the images, which can make fusion difficult when stitching is performed on the distorted images.
Figs. 17A-B depict example points that can be used to illustrate a coordinate system for points in a 3D panorama. Figs. 17A-B depict a point (O, Y, Z) 1702 imaged by the panoramic techniques described in this disclosure. The projections of this point in the left and right panoramas can be represented by (−θ, φ) and (θ, φ), as shown respectively by equations (1) and (2), where r 1704 is the radius of the panorama capture.
Fig. 17A depicts a top view of the panoramic imaging of the point (O, Y, Z) 1702. Fig. 17B depicts a side view of the panoramic imaging of the point (O, Y, Z) 1702. The point shown projects to (−θ, φ) in the left panorama and to (θ, φ) in the right panorama. These particular views are as captured and are not projected into another plane.
Fig. 18 represents a perspective-projection view of the point depicted in Figs. 17A-17B. Here, the perspective view of point 1702 is oriented with the eye looking horizontally, rotated by an angle [α] about the y-axis, as shown at 1802 in Fig. 18. Since this perspective projection only considers ray directions, the rays along which point 1702 projects can be found by converting the rays that view point 1702 in the panoramic projection 1802 into the reference frame of the perspective camera. For example, point 1702 projects along the rays shown in Table 1 below:

Table 1
Performing the perspective division, the projection of the point can be determined, as shown by the equations in Table 2 below:

Table 2
It can be seen that when the original 3D point 1702 is at infinity, the point 1702 will typically project to the same y-coordinate in both perspective images, and there will therefore be no vertical parallax. However, as θ moves away from its value at infinity (as the point moves closer to the camera), the projected y-coordinates will differ for the left eye and the right eye (except in the case of α = 0, which corresponds to a perspective view looking toward the point 1702).
In some embodiments, distortion can be avoided by capturing images and scenes in a particular manner. For example, capturing a scene in the near field of the camera (that is, at a distance of less than one meter) can cause distortion elements to appear. Capturing scenes or images from one meter outward is therefore one way to minimize distortion.
In some embodiments, depth information can be used to correct the distortion. For example, given accurate depth information for a scene, it may be possible to correct the distortion. That is, since the distortion can depend on the current viewing direction, it may not be possible to apply a single distortion to the panoramic image before rendering. Instead, the depth information can be transmitted together with the panorama and used at render time.
Fig. 19 shows rays captured in an omni-directional stereo image using the panoramic imaging techniques described in this disclosure. In this example, rays 1902, 1904, and 1906, pointing clockwise around the circle 1900, correspond to rays for the left eye. Similarly, rays 1908, 1910, and 1912, pointing counterclockwise around the circle 1900, correspond to rays for the right eye. Each counterclockwise ray can have a corresponding clockwise ray on the opposite side of the circle that looks in the same direction. This can provide a left/right viewing ray for each of the directions represented in a single image.
Capturing the set of rays for the panoramas described in this disclosure can include moving a camera (not shown) around the circle 1900 while keeping the camera aligned tangentially to the circle 1900 (for example, with the camera lens facing outward at the scene and pointing tangentially to the circle 1900). For the left eye, the camera can point to the right (for example, ray 1904 is captured to the right of center line 1914a). Similarly, for the right eye, the camera can point to the left (for example, ray 1910 is captured to the left of center line 1914a). For cameras on the other side of the circle 1900 and below center line 1914b, the center line 1914b can be used to define similar left and right regions. Generating omni-directional stereo images works for real camera captures or for previously rendered computer graphics (CG) content. View interpolation can be used together with the captured camera content and the rendered CG content, to simulate capturing points between real cameras on, for example, the circle 1900.
Stitching the set of images can include using a spherical/equirectangular projection to store the panoramic images. In general, there are two images in this method, one for each eye. Each pixel in an equirectangular image corresponds to a direction on a sphere. For example, the x-coordinate can correspond to longitude and the y-coordinate to latitude. For a mono omnidirectional image, the origins of the viewing rays of the pixels can all be the same point. For a stereo image, however, each viewing ray can originate from a different point on the circle 1900. A panoramic image can then be stitched from the captured images by analyzing each pixel in the captured images, generating an ideal viewing ray from the projection model, and sampling the pixel from the captured or interpolated image whose viewing ray most closely matches the ideal ray. Next, the ray values can be blended together to produce the panoramic pixel value.
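The projection model described above can be sketched as follows: map each output pixel to a longitude/latitude direction, then place the viewing-ray origin on the circle 1900 at the tangent point for the appropriate eye. The 90-degree tangent offset and the function shapes are assumptions used for illustration; a stitcher would then sample the captured or interpolated image whose ray best matches this origin and direction, and blend:

    import math

    def pixel_to_direction(x: int, y: int, width: int, height: int):
        """Map an equirectangular pixel to (longitude, latitude) in radians:
        x spans [-pi, pi) of longitude, y spans [pi/2, -pi/2] of latitude."""
        lon = (x + 0.5) / width * 2.0 * math.pi - math.pi
        lat = math.pi / 2.0 - (y + 0.5) / height * math.pi
        return lon, lat

    def ods_ray_origin(lon: float, radius: float, eye: str):
        """Sketch: origin of the ideal viewing ray on circle 1900 for the
        given eye. The ray is tangent to the circle, so its origin sits 90
        degrees from the ray's longitude; clockwise-pointing rays serve the
        left eye and counterclockwise rays the right eye, as in Fig. 19."""
        offset = math.pi / 2.0 if eye == "left" else -math.pi / 2.0
        a = lon + offset
        return radius * math.cos(a), radius * math.sin(a)

    lon, lat = pixel_to_direction(x=1024, y=512, width=2048, height=1024)
    print(ods_ray_origin(lon, radius=0.0325, eye="left"))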
In some embodiments, optical-flow-based view interpolation can be used to generate at least one image per degree around the circle 1900. In some embodiments, an entire column of the panoramic image can be filled at once, because it can be determined that, if one pixel in that column would be sampled from a given image, the other pixels in that column will be sampled from the same image.
The panoramic format used with the capture and rendering aspects of this disclosure ensures that the image coordinates of an object as watched by the left eye and the right eye differ only by a horizontal shift. This horizontal shift is known as parallax. This applies to the equirectangular projection, in which objects can appear quite distorted.
The magnitude of this distortion can depend on the distance to the camera and on the viewing direction. The distortion can include line-bending distortion, differing left-eye and right-eye distortion, and, in some embodiments, parallax that no longer appears horizontal. In general, 1-2 degrees of vertical parallax (on the spherical image plane) can be comfortably tolerated by a human user. In addition, distortion can be ignored for objects in the peripheral eye line. This corresponds to about 30 degrees away from the central viewing direction. Based on these findings, limits can be constructed that define regions near the camera that objects should not enter, in order to avoid uncomfortable deformation.
Fig. 20 is a graph 2000 illustrating the maximum vertical parallax caused by points in 3D space. Specifically, the graph 2000 depicts the maximum vertical parallax, in degrees, caused by a given point in 3D space, assuming the point projects within 30 degrees of the center of the image. The graph 2000 plots the vertical position relative to the camera center (in meters) against the horizontal distance from the camera (in meters). In this figure, the camera is located at the origin [0, 0]. As the graph moves away from the origin, the severity of the distortion becomes smaller. For example, on the graph, from about zero to one 2002 and from zero to negative one 2004 (vertically), the distortion is worst. This corresponds to imagery directly above and directly below the camera (placed at the origin). As the scene moves outward, the distortion decreases; when the camera images points 2006 and 2008 in the scene, it encounters only half a degree of vertical parallax.
If distortion beyond the 30-degree periphery can be ignored, then all pixels whose viewing directions lie within 30 degrees of a pole can be removed. If the peripheral threshold is allowed to be 15 degrees, then 15 degrees of pixels can be removed. The removed pixels can, for example, be set to a color block (for example, black, white, magenta, and so forth) or to a static image (for example, a logo, a known boundary, a textured layer, and so forth), and the new representation of the removed pixels can be inserted into the panorama in place of the removed pixels. In some embodiments, the removed pixels can be blurred, and a blurred representation of the removed pixels can be inserted into the panorama in place of the removed pixels.
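For an equirectangular panorama, whose rows map linearly to latitude, removing everything within a given angle of the poles is a band replacement over the top and bottom rows. A minimal sketch (solid-color fill shown; a logo tile or a blurred copy of the band would implement the other replacement strategies):

    import numpy as np

    def mask_poles(equirect: np.ndarray, polar_degrees: float = 30.0,
                   fill=(0, 0, 0)) -> np.ndarray:
        """Replace all pixels whose viewing directions lie within
        polar_degrees of either pole of an equirectangular panorama with a
        solid color block. Rows map linearly to latitude, so the band is
        height * polar_degrees / 180 rows at the top and at the bottom."""
        out = equirect.copy()
        band = int(round(out.shape[0] * polar_degrees / 180.0))
        out[:band] = fill    # within polar_degrees of the top (north) pole
        out[-band:] = fill   # within polar_degrees of the bottom (south) pole
        return out

    pano = np.random.randint(0, 255, (1024, 2048, 3), dtype=np.uint8)
    masked = mask_poles(pano, polar_degrees=15.0)  # the 15-degree threshold variant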
Fig. 21 is a flowchart illustrating one embodiment of a process 2100 for generating a stereoscopic panoramic image. As shown in Fig. 21, at block 2102, the system 100 can define a set of images based on captured images. The images may include pre-processed images, post-processed images, virtual content, video, image frames, portions of image frames, pixels, and so forth.
The defined images can be accessed by a user, for example using a head-mounted display (HMD) to access content (for example, VR content). The system 100 can determine particular actions performed by the user. For example, at some point, at block 2104, the system 100 can receive a viewing direction associated with a user of the VR HMD. Similarly, if the user changes her viewing direction, the system can receive an indication of the change in the user's viewing direction at block 2106.
In response to receiving an indication of such a change in viewing direction, the system 100 can configure a re-projection of a portion of the set of images, as shown at block 2108. The re-projection can be based at least in part on the changed viewing direction and on the field of view associated with the captured images. The field of view can be from one to 180 degrees, and anything from slivers of images of the scene up to full panoramic images of the scene can be taken into account. The configured re-projection can be used to convert a portion of the set of images from a spherical perspective projection to a planar projection. In some embodiments, the re-projection can include recasting a portion of the viewing rays associated with the set of images, from multiple viewpoints laid out around a curved path, from the spherical perspective projection to a planar perspective projection.
The re-projection can include any or all of the steps of mapping a portion of the surface of the spherical scene onto a planar scene. These steps can include: correcting distorted scene content, blending (for example, stitching) scene content at or near seams, tone mapping, and/or scaling.
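A bare-bones version of the spherical-to-planar step (the core of the re-projection at block 2108, without the seam blending, tone mapping, or scaling listed above) can be sketched as a rectilinear render from an equirectangular panorama; nearest-neighbor sampling and the axis conventions here are simplifying assumptions:

    import math
    import numpy as np

    def perspective_from_equirect(pano: np.ndarray, yaw: float, pitch: float,
                                  fov: float, out_w: int, out_h: int) -> np.ndarray:
        """Render a planar (rectilinear) view of a spherical panorama for a
        given viewing direction (yaw, pitch in radians; fov is the
        horizontal field of view in radians)."""
        h, w = pano.shape[:2]
        f = (out_w / 2.0) / math.tan(fov / 2.0)  # focal length in pixels
        ys, xs = np.mgrid[0:out_h, 0:out_w]
        dirs = np.stack([xs - out_w / 2.0,       # camera-space rays, z forward
                         ys - out_h / 2.0,
                         np.full(xs.shape, f)], axis=-1).astype(float)
        dirs /= np.linalg.norm(dirs, axis=-1, keepdims=True)
        cp, sp, cy, sy = math.cos(pitch), math.sin(pitch), math.cos(yaw), math.sin(yaw)
        rot = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]]) @ \
              np.array([[1, 0, 0], [0, cp, -sp], [0, sp, cp]])
        d = dirs @ rot.T
        lon = np.arctan2(d[..., 0], d[..., 2])
        lat = np.arcsin(np.clip(-d[..., 1], -1.0, 1.0))  # image y points down
        px = ((lon + math.pi) / (2 * math.pi) * w).astype(int) % w
        py = np.clip(((math.pi / 2 - lat) / math.pi * h).astype(int), 0, h - 1)
        return pano[py, px]

    pano = np.zeros((1024, 2048, 3), dtype=np.uint8)
    view = perspective_from_equirect(pano, yaw=0.3, pitch=0.0,
                                     fov=math.radians(90), out_w=640, out_h=480)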
Upon completing the re-projection, the system 100 can render an updated view based on the re-projection, as shown at block 2110. The updated view can be configured to correct the distortion and provide stereoscopic parallax to the user. At block 2112, the system 100 can provide the updated view, which includes a stereoscopic panoramic scene corresponding to the changed viewing direction. For example, the system 100 can provide the updated view to correct distortion in the original view (before the re-projection), and can provide a stereoscopic parallax effect in the display of the VR head-mounted display.
Fig. 22 is a flowchart illustrating one embodiment of a process 2200 for capturing an omnidirectional stereo panoramic image using a multilayer camera apparatus. At block 2202, the system 100 can define a set of images for the first layer of the multi-camera apparatus based on captured video streams collected from at least one set of adjacent cameras in the first layer. In some embodiments, the first layer can be the lower layer of the multilayer camera apparatus, and the cameras in the first layer can be arranged and laid out in a circular shape so that each of the first plurality of cameras has an outward projection orthogonal to the circular shape. For example, the system 100 can use one set of adjacent cameras (for example, as shown in Fig. 2) or multiple sets of adjacent cameras (for example, as shown in Figs. 3 and 5). In some embodiments, the system 100 can define the set of images using captured video streams collected from about 12 to about 16 cameras laid out in a circular shape so that they have outward projections orthogonal to the circular shape. In some embodiments, the system 100 can define the set of images using partially or fully rendered computer graphics (CG) content.
At block 2204, the system 100 can calculate a first optical flow for the first set of images. For example, calculating the optical flow in the first set of images can include: analyzing the image intensity field of a portion of the pixel columns associated with the set of images, and performing optical flow techniques on that portion of the pixel columns, as described in detail above.
In some embodiments, the first optical flow can be used to interpolate image frames that are not part of the set of images, as described in detail above. The system 100 can then stitch the image frames and the first set of images together based at least in part on the first optical flow (at block 2206).
At block 2208, the system 100 can define a set of images for the second layer of the multi-camera apparatus based on captured video streams collected from at least one set of adjacent cameras in the second layer. In some embodiments, the second layer can be the upper layer of the multilayer camera apparatus, and the cameras in the second layer can be laid out so that each of the plurality of cameras has an outward projection that is not parallel to the normal of the circular shape of the first plurality of cameras. For example, the system 100 can use adjacent cameras of the upper multi-panel lid of the second layer of the multilayer camera apparatus (for example, as shown in Figs. 4A and 6) or multiple sets of adjacent cameras. In some embodiments, the system 100 can define the set of images using captured video streams collected from about 4 to about 8 cameras. In some embodiments, the system 100 can define the set of images using partially or fully rendered computer graphics (CG) content.
At block 2210, the system 100 can calculate a second optical flow for the second set of images. For example, calculating the optical flow in the second set of images can include: analyzing the image intensity field of a portion of the pixel columns associated with the set of images, and performing optical flow techniques on that portion of the pixel columns, as described in detail above.
In some embodiments, the second optical flow can be used to interpolate image frames that are not part of the set of images, as described in detail above. The system 100 can stitch the image frames and the second set of images together based at least in part on the second optical flow (at block 2212).
At block 2214, the system 100 can generate an omnidirectional stereo panorama by stitching together the first stitched image, associated with the first layer of the multilayer camera apparatus, and the second stitched image, associated with the second layer of the multilayer camera apparatus. In some embodiments, the omnidirectional stereo panorama is for display in a VR head-mounted display. In some embodiments, the system 100 can perform the image stitching using pose information associated with at least one stereo neighbor, for example to pre-stitch a portion of the set of images before performing the interleaving.
Fig. 23 is a flowchart illustrating one embodiment of a process 2300 for rendering a panoramic image in a head-mounted display. As shown in Fig. 23, at block 2302, the system 100 can receive a set of images. The images can depict captured content from a multilayer camera apparatus. At block 2304, the system 100 can select portions of the image frames in the images. The image frames can include content captured with the multilayer camera apparatus. The system 100 can use any portion of the captured content. For example, the system 100 can select a portion of the image frames that includes content captured by the apparatus from about one radial meter from the outer edge of the base of the camera apparatus out to about five radial meters from the outer edge of the base of the camera apparatus. In some embodiments, this selection can be based on how far away a user can perceive 3D content. Here, the distance from about one meter from the camera to about five meters from the camera can represent the "zone" in which the user can view 3D content. Any closer, and the 3D view may be distorted; any farther, and the user may be unable to make out 3D shapes. That is, from far away, the scene content can simply look 2D.
At block 2306, the selected portions of the image frames can be stitched together to generate a stereoscopic panoramic view. In this example, the stitching can be based at least in part on matching the selected portions to at least one other image frame within the selected portions. At block 2308, the panoramic view can be provided in a display such as an HMD device. In some embodiments, the stitching can be performed using a stitching ratio selected based at least in part on the diameter of the camera apparatus. In some embodiments, the stitching includes the steps of matching a first pixel column in a first image frame to a second pixel column in a second image frame, and matching the second pixel column to a third pixel column in a third image frame, to form a cohesive scene portion. In some embodiments, many pixel columns can be matched and combined in this way to form a frame, and those frames can be combined to form an image. Furthermore, those images can be combined to form a scene. According to some embodiments, the system 100 can perform blocks 2306 and 2308 for each layer of the multi-camera apparatus to create a stitched image associated with each layer of the camera apparatus, and the system 100 can stitch the stitched images together to generate the panoramic view.
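A minimal sketch of the column-matching step follows (sum-of-squared-differences matching; the helper names are illustrative). Chaining it over frame pairs, frame 1 to frame 2 and then the result to frame 3, builds the cohesive scene portion described at block 2306:

    import numpy as np

    def match_column(col: np.ndarray, frame: np.ndarray) -> int:
        """Index of the pixel column in `frame` most similar to `col`
        (sum of squared differences over the column)."""
        diffs = ((frame.astype(float) - col.astype(float)[:, None, :]) ** 2).sum(axis=(0, 2))
        return int(np.argmin(diffs))

    def stitch_pair(f1: np.ndarray, f2: np.ndarray, x1: int) -> np.ndarray:
        """Join two frames at matched pixel columns: keep frame 1 up to
        column x1, then continue with frame 2 from its matched column."""
        x2 = match_column(f1[:, x1], f2)
        return np.concatenate([f1[:, :x1], f2[:, x2:]], axis=1)

    f1 = np.random.randint(0, 255, (480, 640, 3), dtype=np.uint8)
    f2 = np.random.randint(0, 255, (480, 640, 3), dtype=np.uint8)
    panorama_part = stitch_pair(f1, f2, x1=600)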
In some embodiments, the method 2300 may include an interpolation step, in which the system 100 interpolates additional image frames that are not part of the portions of the image frames. For example, such interpolation can be performed to ensure flow between images captured by cameras that are far apart. Once the interpolation of the additional image content has been performed, the system 100 can interleave the additional image frames into the portions of the image frames to generate virtual content for the view. This virtual content can be stitched together as the portions of the image frames interleaved with the additional image frames. The result can be provided to the HMD as, for example, an updated view. The updated view can be based at least in part on the portions of the image frames and on the additional image frames.
Fig. 24 is a flowchart illustrating one embodiment of a process 2400 for determining the image boundary of one layer of a multilayer camera apparatus. At block 2402, the system 100 can define a set of images based on captured video streams collected from at least one set of adjacent cameras in one layer of the multilayer camera apparatus. For example, the system 100 can use one set of adjacent cameras (for example, as shown in Fig. 2) or multiple sets of adjacent cameras (for example, as shown in Figs. 3 and 4). In some embodiments, the system 100 can define the set of images using captured video streams collected from about 12 to about 16 cameras. In some embodiments, the system 100 can define the set of images using partially or fully rendered computer graphics (CG) content. In some embodiments, the video streams corresponding to the set of images include encoded video content. In some embodiments, the video streams corresponding to the set of images can include content acquired with at least one set of adjacent cameras configured with a 180-degree field of view.
At block 2404, system 100 can project a portion of the set of images associated with one layer of the multilayer camera apparatus from a perspective image plane onto a spherical image plane by recasting viewing rays, associated with the portion of the set of images and laid out for multiple viewpoints along a portion of a circular path, to a single viewpoint. For example, the set of images can be captured by one layer of a multi-camera apparatus in which the image sensors are laid out in a ring on a circular camera housing that can host multiple cameras (for example, as shown in Figure 4A). Each camera can be associated with a viewpoint, and these viewpoints look outward from the camera apparatus toward the scene. In particular, rather than originating from a single point, the viewing rays originate at each camera on the apparatus. System 100 can recast the rays from each viewpoint on the path into a single viewpoint. For example, system 100 can analyze each viewpoint of the scene captured by the cameras, and can compute similarities and differences in order to determine a scene (or set of scenes) representing the scene from a single interpolated viewpoint.
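A sketch of the recasting idea: assign each direction on the spherical image plane to the ring camera whose outward axis best matches it. The evenly spaced ring and the nearest-by-yaw rule are assumptions for illustration; the interpolation between viewpoints described above is richer than this.

    import numpy as np

    def sphere_directions(height, width):
        """Unit viewing directions for an equirectangular grid; row 0 is
        +90 degrees of latitude, longitude spans [-pi, pi)."""
        lon = (np.arange(width) / width) * 2 * np.pi - np.pi
        lat = np.pi / 2 - (np.arange(height) / (height - 1)) * np.pi
        lon, lat = np.meshgrid(lon, lat)
        return np.stack([np.cos(lat) * np.sin(lon),   # x
                         np.sin(lat),                 # y
                         np.cos(lat) * np.cos(lon)],  # z
                        axis=-1)

    def nearest_camera_by_yaw(directions, num_cameras):
        """For each direction, pick the ring camera whose outward axis is
        closest in yaw; that camera's recast ray serves the pixel."""
        yaw = np.arctan2(directions[..., 0], directions[..., 2])
        cam_yaws = np.linspace(-np.pi, np.pi, num_cameras, endpoint=False)
        diff = np.abs((yaw[..., None] - cam_yaws + np.pi) % (2 * np.pi) - np.pi)
        return diff.argmin(axis=-1)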
At block 2406, system 100 can determine a peripheral boundary corresponding to the single viewpoint, and can generate an updated image by removing the pixels outside the peripheral boundary. The peripheral boundary can delineate clear, concise image content from distorted image content. For example, the peripheral boundary can delineate pixels without distortion from pixels with distortion. In some implementations, the peripheral boundary can relate to the field of view outside the user's typical peripheral viewing area. Removing such pixels can ensure that distorted image content is not unnecessarily presented to the user. Removing the pixels may include replacing them with a colored block, a static image, or a blurred pixel representation, as described in detail above. In some implementations, the peripheral boundary is defined as a field of view of about 150 degrees for one or more cameras associated with the captured images. In some implementations, the peripheral boundary is defined as a field of view of about 120 degrees for one or more cameras associated with the captured images. In some implementations, the peripheral boundary is a portion of a sphere corresponding to about 30 degrees above the viewing plane of a camera associated with the captured images, and removing the pixels includes blacking out or removing the top of the spherical scene. In some implementations, the peripheral boundary is a portion of a sphere corresponding to about 30 degrees below the viewing plane of a camera associated with the captured images, and removing the pixels includes blacking out or removing the bottom of the spherical scene. At block 2408, system 100 can provide the updated image, within the bounds of the peripheral boundary, for display.
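For the 30-degree variants, the peripheral boundary on an equirectangular image reduces to a latitude cutoff. A minimal sketch, assuming rows map linearly from +90 degrees of latitude at the top to -90 degrees at the bottom:

    import numpy as np

    def black_out_peripheral(pano, cutoff_deg=30.0, top=True):
        """Black out the spherical scene above (or below) `cutoff_deg`
        from the camera viewing plane on an equirectangular panorama."""
        h = pano.shape[0]
        out = pano.copy()
        boundary_row = int(round(h * (90.0 - cutoff_deg) / 180.0))
        if top:
            out[:boundary_row] = 0        # rows above +cutoff latitude
        else:
            out[h - boundary_row:] = 0    # rows below -cutoff latitude
        return out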
In some implementations, method 2400 can also include stitching together at least two frames in the set of images from one layer of the multilayer camera apparatus. The stitching may include the steps of sampling columns of pixels from the frames and interpolating, between at least two sampled columns, additional columns of pixels not captured in the frames. The stitching may also include the step of blending the sampled columns and the additional columns together to generate a pixel value. In some implementations, the blending can be performed using a stitching ratio selected based at least in part on the diameter of the circular camera apparatus used to capture the images. The stitching can further include the step of generating a three-dimensional stereoscopic panorama by composing the pixel values into a left scene and a right scene, for example for display in an HMD.
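A sketch of the blending step follows. The linear mix and the particular mapping from rig diameter to stitching ratio are assumptions; the text says only that the ratio is based at least in part on the diameter.

    import numpy as np

    def stitching_ratio(rig_diameter_m, reference_diameter_m=0.30):
        """Map rig diameter to a blend weight in [0, 1]; a hypothetical
        monotone rule for illustration, not the disclosure's formula."""
        return float(np.clip(rig_diameter_m / (2.0 * reference_diameter_m), 0.0, 1.0))

    def blend_columns(sampled, interpolated, ratio):
        """Mix a captured pixel column with an interpolated neighbor to
        produce the output column used in the stitched panorama."""
        mixed = (ratio * sampled.astype(np.float32)
                 + (1.0 - ratio) * interpolated.astype(np.float32))
        return mixed.astype(sampled.dtype)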
Figure 25 is a flow chart illustrating one embodiment of a process 2500 for generating video content. At block 2502, system 100 can define a set of images based on captured video streams collected from at least one set of adjacent cameras. For example, system 100 can use a stereo pair (as shown in Figure 2) or multiple sets of adjacent cameras (for example, as shown in Figures 3 and 4). In some implementations, system 100 can define the set of images using captured video streams collected from about 12 to about 16 cameras in one layer of the multilayer camera apparatus and from 4 to 8 cameras in a second layer of the multilayer camera apparatus. In some implementations, system 100 can use partially or fully rendered computer graphics (CG) content to define the set of images.
At block 2504, system 100 can stitch the set of images into an equirectangular video stream. For example, the stitching can include combining images associated with leftward camera capture angles with images associated with rightward camera capture angles.
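Once per-eye panoramas exist, one frame of the equirectangular stream can pack them together. Over/under packing, shown below, is one common convention and is assumed here rather than taken from the disclosure:

    import numpy as np

    def pack_stereo_equirect(left_pano, right_pano):
        """Stack left-eye and right-eye equirectangular panoramas
        top/bottom into one frame of the output video stream."""
        if left_pano.shape != right_pano.shape:
            raise ValueError("per-eye panoramas must share a resolution")
        return np.vstack([left_pano, right_pano])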
At block 2506, the system can render the video stream for playback by projecting it from its equirectangular form into perspective form for a first view and a second view. The first view can correspond to the left-eye view of a head-mounted display, and the second view can correspond to the right-eye view of the head-mounted display.
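The equirectangular-to-perspective projection of block 2506 can be sketched as an inverse mapping: each output pixel's viewing ray is converted to longitude and latitude and sampled from the panorama. The pinhole model and the parameter values below are illustrative assumptions:

    import cv2
    import numpy as np

    def equirect_to_perspective(pano, yaw_rad, out_w=960, out_h=1080,
                                hfov_rad=np.pi / 2):
        """Render a perspective view (e.g. one eye of an HMD) centered on
        `yaw_rad` by sampling the equirectangular panorama."""
        f = (out_w / 2) / np.tan(hfov_rad / 2)        # pinhole focal length
        xs, ys = np.meshgrid(np.arange(out_w) - out_w / 2,
                             np.arange(out_h) - out_h / 2)
        lon = np.arctan2(xs, f) + yaw_rad             # ray -> longitude
        lat = np.arctan2(-ys, np.hypot(xs, f))        # ray -> latitude
        ph, pw = pano.shape[:2]
        map_x = ((lon % (2 * np.pi)) / (2 * np.pi) * pw).astype(np.float32)
        map_y = ((np.pi / 2 - lat) / np.pi * (ph - 1)).astype(np.float32)
        return cv2.remap(pano, map_x, map_y, cv2.INTER_LINEAR)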
At block 2508, the system can determine a boundary where distortion is above a predefined threshold. The predefined threshold can provide a level of parallax, a level of mismatch, and/or a level of allowable error within a particular set of images. For example, the distortion can be based at least in part on the projection configuration when projecting the video stream from one plane or view to another plane or view.
At block 2510, as discussed in detail above, the system can generate an updated video stream by removing image content at or beyond the boundary in the set of images. Upon updating the video stream, the updated stream can be provided, for example, to the user of the HMD for display. In general, the systems and methods described throughout this disclosure can be used to capture images, remove distortion from the captured images, and render the images to provide a 3D stereoscopic view to the user of an HMD device.
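Blocks 2508 and 2510 together amount to thresholding a per-pixel distortion estimate and discarding what exceeds it. In the sketch below, the distortion map is assumed to be already computed from the projection configuration; the disclosure leaves the exact metric open.

    import numpy as np

    def remove_distorted(frame, distortion, threshold):
        """Zero out content wherever the estimated distortion (parallax
        error or mismatch level) exceeds the predefined threshold."""
        out = frame.copy()
        out[distortion > threshold] = 0
        return out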
Figure 26 shows an example of a general-purpose computing device 2600 and a general-purpose mobile computing device 2650 that can be used with the techniques described herein. Computing device 2600 is intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. Computing device 2650 is intended to represent various forms of mobile devices, such as personal digital assistants, cellular telephones, smartphones, and other similar computing devices. The components shown here, their connections and relationships, and their functions are meant to be exemplary only, and are not meant to limit implementations of the inventions described and/or claimed in this document.
Computing device 2600 includes a processor 2602, memory 2604, a storage device 2606, a high-speed interface 2608 connecting to memory 2604 and high-speed expansion ports 2610, and a low-speed interface 2612 connecting to low-speed bus 2614 and storage device 2606. Each of the components 2602, 2604, 2606, 2608, 2610, and 2612 is interconnected using various buses, and may be mounted on a common motherboard or in other manners as appropriate. The processor 2602 can process instructions for execution within the computing device 2600, including instructions stored in the memory 2604 or on the storage device 2606, to display graphical information for a GUI on an external input/output device, such as display 2616 coupled to high-speed interface 2608. In other implementations, multiple processors and/or multiple buses may be used, as appropriate, along with multiple memories and types of memory. Also, multiple computing devices 2600 may be connected, with each device providing portions of the necessary operations (e.g., as a server bank, a group of blade servers, or a multi-processor system).
The memory 2604 stores information within the computing device 2600. In one implementation, the memory 2604 is a volatile memory unit or units. In another implementation, the memory 2604 is a non-volatile memory unit or units. The memory 2604 may also be another form of computer-readable medium, such as a magnetic or optical disk.
The storage device 2606 is capable of providing mass storage for the computing device 2600. In one implementation, the storage device 2606 may be or contain a computer-readable medium, such as a floppy disk device, a hard disk device, an optical disk device, a tape device, a flash memory or other similar solid state memory device, or an array of devices, including devices in a storage area network or other configurations. A computer program product can be tangibly embodied in an information carrier. The computer program product may also contain instructions that, when executed, perform one or more methods, such as those described above. The information carrier is a computer- or machine-readable medium, such as the memory 2604, the storage device 2606, or memory on processor 2602.
The high-speed controller 2608 manages bandwidth-intensive operations for the computing device 2600, while the low-speed controller 2612 manages lower-bandwidth-intensive operations. Such allocation of functions is exemplary only. In one implementation, the high-speed controller 2608 is coupled to memory 2604, to display 2616 (e.g., through a graphics processor or accelerator), and to high-speed expansion ports 2610, which may accept various expansion cards (not shown). In the implementation, the low-speed controller 2612 is coupled to storage device 2606 and low-speed expansion port 2614. The low-speed expansion port, which may include various communication ports (e.g., USB, Bluetooth, Ethernet, wireless Ethernet), may be coupled to one or more input/output devices, such as a keyboard, a pointing device, a scanner, or a networking device such as a switch or router, e.g., through a network adapter.
The computing device 2600 may be implemented in a number of different forms, as shown in the figure. For example, it may be implemented as a standard server 2620, or multiple times in a group of such servers. It may also be implemented as part of a rack server system 2624. In addition, it may be implemented in a personal computer such as a laptop computer 2622. Alternatively, components from computing device 2600 may be combined with other components in a mobile device (not shown), such as device 2650. Each of such devices may contain one or more of computing devices 2600, 2650, and an entire system may be made up of multiple computing devices 2600, 2650 communicating with each other.
Computing device 2650 includes a processor 2652, memory 2664, an input/output device such as a display 2654, a communication interface 2666, and a transceiver 2668, among other components. The device 2650 may also be provided with a storage device, such as a microdrive or other device, to provide additional storage. Each of the components 2650, 2652, 2664, 2654, 2666, and 2668 is interconnected using various buses, and several of the components may be mounted on a common motherboard or in other manners as appropriate.
The processor 2652 can execute instructions within the computing device 2650, including instructions stored in the memory 2664. The processor may be implemented as a chipset of chips that include separate and multiple analog and digital processors. The processor may provide, for example, for coordination of the other components of the device 2650, such as control of user interfaces, applications run by device 2650, and wireless communication by device 2650.
Processor 2652 may communicate with a user through control interface 2658 and display interface 2656 coupled to a display 2654. The display 2654 may be, for example, a TFT LCD (Thin-Film-Transistor Liquid Crystal Display) or an OLED (Organic Light Emitting Diode) display, or other appropriate display technology. The display interface 2656 may comprise appropriate circuitry for driving the display 2654 to present graphical and other information to a user. The control interface 2658 may receive commands from a user and convert them for submission to the processor 2652. In addition, an external interface 2662 may be provided in communication with processor 2652, so as to enable near-area communication of device 2650 with other devices. External interface 2662 may provide, for example, for wired communication in some implementations, or for wireless communication in other implementations, and multiple interfaces may also be used.
The memory 2664 stores information within the computing device 2650. The memory 2664 can be implemented as one or more of a computer-readable medium or media, a volatile memory unit or units, or a non-volatile memory unit or units. Expansion memory 2674 may also be provided and connected to device 2650 through expansion interface 2672, which may include, for example, a SIMM (Single In-Line Memory Module) card interface. Such expansion memory 2674 may provide extra storage space for device 2650, or may also store applications or other information for device 2650. Specifically, expansion memory 2674 may include instructions to carry out or supplement the processes described above, and may include secure information also. Thus, for example, expansion memory 2674 may be provided as a security module for device 2650, and may be programmed with instructions that permit secure use of device 2650. In addition, secure applications may be provided via the SIMM cards, along with additional information, such as placing identifying information on the SIMM card in a non-hackable manner.
The memory may include, for example, flash memory and/or NVRAM memory, as discussed below. In one implementation, a computer program product is tangibly embodied in an information carrier. The computer program product contains instructions that, when executed, perform one or more methods, such as those described above. The information carrier is a computer- or machine-readable medium, such as the memory 2664, expansion memory 2674, or memory on processor 2652, that may be received, for example, over transceiver 2668 or external interface 2662.
Device 2650 may communicate wirelessly through communication interface 2666, which may include digital signal processing circuitry where necessary. Communication interface 2666 may provide for communications under various modes or protocols, such as GSM voice calls, SMS, EMS, or MMS messaging, CDMA, TDMA, PDC, WCDMA, CDMA2000, or GPRS, among others. Such communication may occur, for example, through radio-frequency transceiver 2668. In addition, short-range communication may occur, such as using a Bluetooth, Wi-Fi, or other such transceiver (not shown). In addition, GPS (Global Positioning System) receiver module 2670 may provide additional navigation- and location-related wireless data to device 2650, which may be used as appropriate by applications running on device 2650.
Device 2650 may also communicate audibly using audio codec 2660, which may receive spoken information from a user and convert it to usable digital information. Audio codec 2660 may likewise generate audible sound for a user, such as through a speaker, e.g., in a handset of device 2650. Such sound may include sound from voice telephone calls, may include recorded sound (e.g., voice messages, music files, etc.), and may also include sound generated by applications operating on device 2650.
The computing device 2650 may be implemented in a number of different forms, as shown in the figure. For example, it may be implemented as a cellular telephone 2680. It may also be implemented as part of a smartphone 2682, personal digital assistant, or other similar mobile device.
Various implementations of the systems and techniques described here can be realized in digital electronic circuitry, integrated circuitry, specially designed ASICs (application-specific integrated circuits), computer hardware, firmware, software, and/or combinations thereof. These various implementations can include implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, coupled to receive data and instructions from, and to transmit data and instructions to, a storage system, at least one input device, and at least one output device.
These computer programs (also known as programs, software, software applications, or code) include machine instructions for a programmable processor, and can be implemented in a high-level procedural and/or object-oriented programming language, and/or in assembly/machine language. As used herein, the terms "machine-readable medium" and "computer-readable medium" refer to any computer program product, apparatus, and/or device (e.g., magnetic discs, optical disks, memory, Programmable Logic Devices (PLDs)) used to provide machine instructions and/or data to a programmable processor, including a machine-readable medium that receives machine instructions as a machine-readable signal. The term "machine-readable signal" refers to any signal used to provide machine instructions and/or data to a programmable processor.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to the user, and a keyboard and a pointing device (e.g., a mouse or a trackball) by which the user can provide input to the computer. Other kinds of devices can be used to provide for interaction with a user as well; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback), and input from the user can be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a client computer having a graphical user interface or a Web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), and the Internet.
The computing system can include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
A number of embodiments have been described. Nevertheless, it will be understood that various modifications may be made without departing from the spirit and scope of the specification. For example, each of the appended claims and the examples of such claims above may be combined in any combination to produce additional example embodiments.
Further embodiments are described in the following examples.
Example 1: A camera apparatus, comprising: a first layer of image sensors, the first layer of image sensors including a first plurality of image sensors, the first plurality of image sensors being laid out in a circular shape and oriented such that the field of view of each of the first plurality of image sensors has an axis perpendicular to a tangent of the circular shape; and a second layer of image sensors, the second layer of image sensors including a second plurality of image sensors, the second plurality of image sensors being oriented such that the field of view of each of the second plurality of image sensors has an axis that is not parallel to the field of view of each of the first plurality of image sensors.
Example 2: The camera apparatus of example 1, wherein the field of view of each of the first plurality of image sensors is disposed in a first plane, and the field of view of each of the second plurality of image sensors is disposed in a second plane.
Example 3: The camera apparatus of example 1 or 2, wherein the first plurality of image sensors is disposed in a first plane, and the second plurality of image sensors is disposed in a second plane parallel to the first plane.
Example 4: The camera apparatus of any of examples 1 to 3, wherein the first plurality of image sensors is included in the first layer such that a first field of view of a first image sensor in the first plurality of image sensors intersects a second field of view of a second image sensor in the first plurality of image sensors and a third field of view of a third image sensor in the first plurality of image sensors.
Example 5: The camera apparatus of any of examples 1 to 4, wherein the camera apparatus has a housing, the housing defining a radius of the circular camera apparatus housing such that the fields of view of each of at least three adjoining image sensors in the first plurality of image sensors overlap (see the geometry sketch following these examples).
Example 6: The camera apparatus of example 5, wherein the three adjoining image sensors intersect a plane.
Example 7: The camera apparatus of any of examples 1 to 6, further comprising a rod housing, the first layer of image sensors being arranged between the second layer of image sensors and the rod housing.
Example 8: The camera apparatus of any of examples 1 to 7, wherein the second layer of image sensors includes six image sensors, and the first layer of image sensors includes sixteen image sensors.
Example 9: The camera apparatus of any of examples 1 to 8, wherein the field of view of each of the first plurality of image sensors is orthogonal to the field of view of each of the second plurality of image sensors.
Example 10: The camera apparatus of any of examples 1 to 9, wherein the aspect ratio of the field of view of each of the first plurality of image sensors is in a portrait mode, and the aspect ratio of the field of view of each of the second plurality of image sensors is in a landscape mode.
Example 11: A camera apparatus, comprising: a first layer of image sensors, the first layer of image sensors including a first plurality of image sensors disposed in a first plane, the first plurality of image sensors being configured such that the fields of view of each of at least three adjoining image sensors in the first plurality of image sensors overlap; and a second layer of image sensors, the second layer of image sensors including a second plurality of image sensors disposed in a second plane, the second plurality of image sensors each having an aspect ratio orientation different from the aspect ratio orientation of each of the first plurality of image sensors.
Example 12: The camera apparatus of example 11, wherein the first plane is parallel to the second plane.
Example 13: The camera apparatus of example 11 or 12, further comprising a rod housing, the first layer of image sensors being arranged between the second layer of image sensors and the rod housing.
Example 14: The camera apparatus of any of examples 1 to 10 or 11 to 13, wherein the ratio of image sensors in the first layer of image sensors to image sensors in the second layer of image sensors is between 2:1 and 3:1.
Example 15: The camera apparatus of any of examples 1 to 10 or 11 to 14, wherein images captured using the first layer of image sensors and the second layer of image sensors are stitched using optical flow interpolation.
Example 16: A camera apparatus, comprising: a camera housing including a lower circumference and an upper multi-faceted cover, the lower circumference being arranged below the multi-faceted cover; a first plurality of image sensors laid out in a circular shape along the lower circumference of the camera housing, such that each of the first plurality of image sensors has an outward projection orthogonal to the lower circumference; and a second plurality of image sensors arranged on the faces of the upper multi-faceted cover, such that each of the second plurality of image sensors has an outward projection that is not parallel to a normal of the lower circumference.
Example 17: The camera apparatus of example 16, wherein the lower circumference defines a radius such that the fields of view of at least three adjoining image sensors in the first plurality of image sensors intersect.
Example 18: The camera apparatus of example 16 or 17, wherein the ratio of image sensors in the first plurality of image sensors to image sensors in the second plurality of image sensors is between 2:1 and 3:1.
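The radius conditions in examples 5 and 17 can be sanity-checked with a rough planar model: scan outward along one camera's axis until the point falls inside both neighbors' fields of view. The even camera spacing, the planar geometry, the scan range, and the sample values below are all illustrative assumptions, not parameters from the disclosure.

    import numpy as np

    def min_overlap_distance(radius_m, num_cams, hfov_deg):
        """Smallest distance along one camera's axis at which that camera
        and both neighbors (three adjoining sensors) see the same point.
        Planar approximation; returns inf if the fields never converge."""
        half_fov = np.radians(hfov_deg) / 2.0
        step = 2.0 * np.pi / num_cams
        for d in np.linspace(0.01, 20.0, 4000):       # scan outward, meters
            q = np.array([radius_m + d, 0.0])         # point on camera 0's axis
            visible = True
            for k in (-1, 1):                         # the two neighbors
                pos = radius_m * np.array([np.cos(k * step), np.sin(k * step)])
                axis = pos / np.linalg.norm(pos)      # outward-facing axis
                v = q - pos
                cos_a = np.clip(np.dot(v, axis) / np.linalg.norm(v), -1.0, 1.0)
                if np.arccos(cos_a) > half_fov:
                    visible = False
                    break
            if visible:
                return float(d)
        return float("inf")

    # e.g. a hypothetical 16-camera ring of 10 cm radius with 90-degree lenses:
    # print(min_overlap_distance(0.10, 16, 90.0))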
In addition, the logic flows depicted in the figures do not require the particular order shown, or sequential order, to achieve desirable results. Furthermore, other steps may be provided, or steps may be eliminated, from the described flows, and other components may be added to, or removed from, the described systems. Accordingly, other embodiments are within the scope of the following claims.

Claims (17)

1. A camera apparatus, comprising:
a first layer of image sensors, the first layer of image sensors including a first plurality of image sensors, the first plurality of image sensors being laid out in a circular shape and oriented such that the field of view of each of the first plurality of image sensors has an axis perpendicular to a tangent of the circular shape; and
a second layer of image sensors, the second layer of image sensors including a second plurality of image sensors, the second plurality of image sensors being oriented such that the field of view of each of the second plurality of image sensors has an axis that is not parallel to the field of view of each of the first plurality of image sensors,
the circular shape having a radius such that the fields of view of each of at least three adjoining image sensors in the first plurality of image sensors overlap.
2. The camera apparatus according to claim 1, wherein the field of view of each of the first plurality of image sensors is disposed in a first plane, and the field of view of each of the second plurality of image sensors is disposed in a second plane.
3. The camera apparatus according to claim 1, wherein the first plurality of image sensors is disposed in a first plane, and the second plurality of image sensors is disposed in a second plane parallel to the first plane.
4. The camera apparatus according to claim 1, wherein the first plurality of image sensors is included in the first layer such that a first field of view of a first image sensor in the first plurality of image sensors intersects a second field of view of a second image sensor in the first plurality of image sensors and a third field of view of a third image sensor in the first plurality of image sensors.
5. The camera apparatus according to claim 1, wherein the three adjoining image sensors intersect a plane.
6. The camera apparatus according to claim 1, further comprising a rod housing, wherein the first layer of image sensors is arranged between the second layer of image sensors and the rod housing.
7. The camera apparatus according to claim 1, wherein the second layer of image sensors includes six image sensors, and the first layer of image sensors includes sixteen image sensors.
8. The camera apparatus according to claim 1, wherein the field of view of each of the first plurality of image sensors is orthogonal to the field of view of each of the second plurality of image sensors.
9. The camera apparatus according to claim 1, wherein the aspect ratio of the field of view of each of the first plurality of image sensors is in a portrait mode, and the aspect ratio of the field of view of each of the second plurality of image sensors is in a landscape mode.
10. A camera apparatus, comprising:
a first layer of image sensors, the first layer of image sensors including a first plurality of image sensors disposed in a first plane, the first plurality of image sensors being configured such that the fields of view of each of at least three adjoining image sensors in the first plurality of image sensors overlap; and
a second layer of image sensors, the second layer of image sensors including a second plurality of image sensors disposed in a second plane, the second plurality of image sensors each having an aspect ratio orientation different from the aspect ratio orientation of each of the first plurality of image sensors,
the first plurality of image sensors defining a circular shape, the circular shape having a radius such that the fields of view of each of at least three adjoining image sensors in the first plurality of image sensors overlap.
11. The camera apparatus according to claim 10, wherein the first plane is parallel to the second plane.
12. The camera apparatus according to claim 10, further comprising a rod housing, the first layer of image sensors being arranged between the second layer of image sensors and the rod housing.
13. The camera apparatus according to claim 10, wherein the ratio of image sensors in the first layer of image sensors to image sensors in the second layer of image sensors is between 2:1 and 3:1.
14. The camera apparatus according to claim 10, wherein images captured using the first layer of image sensors and the second layer of image sensors are stitched using optical flow interpolation.
15. A camera apparatus, comprising:
a camera housing, comprising:
a lower circumference, and
an upper multi-faceted cover, the lower circumference being arranged below the multi-faceted cover;
a first plurality of image sensors laid out in a circular shape along the lower circumference of the camera housing, such that each of the first plurality of image sensors has an outward projection orthogonal to the lower circumference; and
a second plurality of image sensors arranged on the faces of the upper multi-faceted cover, such that each of the second plurality of image sensors has an outward projection that is not parallel to a normal of the lower circumference,
the circular shape of the first plurality of image sensors having a radius such that the fields of view of each of at least three adjoining image sensors in the first plurality of image sensors overlap.
16. The camera apparatus according to claim 15, wherein the second plurality of image sensors defines a second particular radius such that the fields of view of at least three adjoining image sensors in the second plurality of image sensors intersect.
17. The camera apparatus according to claim 15, wherein the ratio of image sensors in the first plurality of image sensors to image sensors in the second plurality of image sensors is between 2:1 and 3:1.
CN201710706614.5A 2016-08-17 2017-08-17 Multilayer camera apparatus for 3 D visual image capture Pending CN109361912A (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201662376140P 2016-08-17 2016-08-17
US62/376,140 2016-08-17

Publications (1)

Publication Number Publication Date
CN109361912A true CN109361912A (en) 2019-02-19

Family

ID=59738478

Family Applications (2)

Application Number Title Priority Date Filing Date
CN201721035347.5U Expired - Fee Related CN207369210U (en) 2016-08-17 2017-08-17 Multilayer camera apparatus for 3 D visual image capture
CN201710706614.5A Pending CN109361912A (en) 2016-08-17 2017-08-17 Multilayer camera apparatus for 3 D visual image capture

Family Applications Before (1)

Application Number Title Priority Date Filing Date
CN201721035347.5U Expired - Fee Related CN207369210U (en) 2016-08-17 2017-08-17 Multilayer camera apparatus for 3 D visual image capture

Country Status (4)

Country Link
CN (2) CN207369210U (en)
DE (2) DE202017104934U1 (en)
GB (1) GB2555908A (en)
WO (1) WO2018035347A1 (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE202017104934U1 (en) * 2016-08-17 2017-11-20 Google Inc. Multi-level camera carrier system for stereoscopic image acquisition
EP3752877A4 (en) 2018-02-17 2021-11-03 Dreamvu, Inc. System and method for capturing omni-stereo videos using multi-sensors
USD931355S1 (en) 2018-02-27 2021-09-21 Dreamvu, Inc. 360 degree stereo single sensor camera
USD943017S1 (en) 2018-02-27 2022-02-08 Dreamvu, Inc. 360 degree stereo optics mount for a camera
EP3690822A1 (en) 2019-01-30 2020-08-05 Koninklijke Philips N.V. Image representation of a scene
EP4094110A4 (en) * 2019-12-23 2024-05-15 Circle Optics Inc Mounting systems for multi-camera imagers
CN111540017B (en) * 2020-04-27 2023-05-05 深圳市瑞立视多媒体科技有限公司 Method, device, equipment and storage medium for optimizing camera position variable
CN114189697B (en) * 2021-12-03 2022-10-14 腾讯科技(深圳)有限公司 Video data processing method and device and readable storage medium

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6947059B2 (en) * 2001-08-10 2005-09-20 Micoy Corporation Stereoscopic panoramic image capture device
US20050025313A1 (en) * 2003-06-19 2005-02-03 Wachtel Robert A. Digital imaging system for creating a wide-angle image from multiple narrow angle images
EP2490659B1 (en) * 2009-10-22 2017-03-01 Henkel AG & Co. KGaA Composition for the temporary shaping of keratinic fibres comprising a nonionic propyleneoxide-modified starch and a chitosan
US9007432B2 (en) * 2010-12-16 2015-04-14 The Massachusetts Institute Of Technology Imaging systems and methods for immersive surveillance
WO2012082127A1 (en) * 2010-12-16 2012-06-21 Massachusetts Institute Of Technology Imaging system for immersive surveillance
US9036001B2 (en) * 2010-12-16 2015-05-19 Massachusetts Institute Of Technology Imaging system for immersive surveillance
US20160165211A1 (en) * 2014-12-08 2016-06-09 Board Of Trustees Of The University Of Alabama Automotive imaging system
US20170363949A1 (en) * 2015-05-27 2017-12-21 Google Inc Multi-tier camera rig for stereoscopic image capture

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6141034A (en) * 1995-12-15 2000-10-31 Immersive Media Co. Immersive imaging method and apparatus
US20080316301A1 (en) * 2000-11-29 2008-12-25 Micoy Corporation System and method for spherical stereoscopic photographing
CN1531826A (en) * 2001-02-09 2004-09-22 Method and apparatus for omni-directional image and 3-dimensional data acquisition with data annotation and dynamic range extension method
WO2006110584A2 (en) * 2005-04-07 2006-10-19 Axis Engineering Technologies, Inc. Stereoscopic wide field of view imaging system
WO2014152855A2 (en) * 2013-03-14 2014-09-25 Geerds Joergen Camera system
US20150348580A1 (en) * 2014-05-29 2015-12-03 Jaunt Inc. Camera array including camera modules
WO2016055688A1 (en) * 2014-10-07 2016-04-14 Nokia Technologies Oy Camera devices with a large field of view for stereo imaging
CN105739231A (en) * 2016-05-06 2016-07-06 中国科学技术大学 Multi-camera panorama stereo imaging device of planar distribution
CN207369210U (en) * 2016-08-17 2018-05-15 谷歌有限责任公司 Multilayer camera apparatus for 3 D visual image capture

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113544733A (en) * 2019-03-10 2021-10-22 谷歌有限责任公司 360-degree wide-angle camera using butt-joint method

Also Published As

Publication number Publication date
DE202017104934U1 (en) 2017-11-20
GB2555908A (en) 2018-05-16
DE102017118714A1 (en) 2018-02-22
GB201713180D0 (en) 2017-10-04
WO2018035347A1 (en) 2018-02-22
CN207369210U (en) 2018-05-15
WO2018035347A8 (en) 2018-03-29

Similar Documents

Publication Publication Date Title
CN207369210U (en) Multilayer camera apparatus for 3 D visual image capture
CN107431796B (en) The omnibearing stereo formula of panoramic virtual reality content captures and rendering
US20170363949A1 (en) Multi-tier camera rig for stereoscopic image capture
US10375381B2 (en) Omnistereo capture and render of panoramic virtual reality content
US10244226B2 (en) Camera rig and stereoscopic image capture
US20190019299A1 (en) Adaptive stitching of frames in the process of creating a panoramic frame
CN106797460B (en) The reconstruction of 3 D video
US10038887B2 (en) Capture and render of panoramic virtual reality content
CN107925722A (en) Stabilisation based on accelerometer data
CN107810633A (en) Three-dimensional rendering system
US10453244B2 (en) Multi-layer UV map based texture rendering for free-running FVV applications
US11812009B2 (en) Generating virtual reality content via light fields
Chapdelaine-Couture et al. The omnipolar camera: A new approach to stereo immersive capture
CN113838116A (en) Method and device for determining target view, electronic equipment and storage medium
US11758101B2 (en) Restoration of the FOV of images for stereoscopic rendering
Ra et al. Decoupled Hybrid 360° Panoramic Stereo Video
US20220232201A1 (en) Image generation system and method

Legal Events

Code Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication (application publication date: 2019-02-19)