US20220078345A1 - Immersive capture and review - Google Patents

Immersive capture and review

Info

Publication number
US20220078345A1
Authority
US
United States
Prior art keywords
target space
immersion
immersive
camera module
location
Prior art date
Legal status
Abandoned
Application number
US17/531,040
Inventor
Bryan COLIN
Current Assignee
Individual
Original Assignee
Individual
Priority date
Filing date
Publication date
Application filed by Individual
Priority to US17/531,040
Publication of US20220078345A1
Status: Abandoned

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 5/00 Details of television systems
    • H04N 5/76 Television signal recording
    • H04N 5/765 Interface circuits between an apparatus for recording and another apparatus
    • H04N 5/77 Interface circuits between an apparatus for recording and another apparatus between a recording apparatus and a television camera
    • H04N 5/23238
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/50 Constructional details
    • H04N 23/60 Control of cameras or camera modules
    • H04N 23/68 Control of cameras or camera modules for stable pick-up of the scene, e.g. compensating for camera body vibrations
    • H04N 23/682 Vibration or motion blur correction
    • H04N 23/684 Vibration or motion blur correction performed by controlling the image sensor readout, e.g. by controlling the integration time
    • H04N 23/6845 Vibration or motion blur correction performed by controlling the image sensor readout by combination of a plurality of images sequentially taken
    • H04N 23/685 Vibration or motion blur correction performed by mechanical compensation
    • H04N 23/698 Control of cameras or camera modules for achieving an enlarged field of view, e.g. panoramic image capture
    • H04N 23/80 Camera processing pipelines; Components thereof
    • H04N 23/81 Camera processing pipelines; Components thereof for suppressing or minimising disturbance in the image signal generation
    • H04N 23/90 Arrangement of cameras or camera modules, e.g. multiple cameras in TV studios or sports stadiums
    • H04N 25/00 Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N 25/40 Extracting pixel data from image sensors by controlling scanning circuits, e.g. by modifying the number of pixels sampled or to be sampled
    • H04N 25/41 Extracting pixel data from a plurality of image sensors simultaneously picking up an image, e.g. for increasing the field of view by combining the outputs of a plurality of sensors
    • H04N 5/217; H04N 5/2251; H04N 5/23277; H04N 5/2328; H04N 5/247; H04N 5/3415

Definitions

  • FIGS. 1A and 1B illustrate example techniques for viewing an environment
  • FIG. 2 illustrates an embodiment of a camera module for capturing an environment
  • FIGS. 3A and 3B illustrate embodiments of camera modules coupled to chasses and vehicles for maneuvering the camera modules
  • FIGS. 4A, 4B, 4C, and 4D illustrate embodiments of camera modules coupled to chasses and physical interfaces for human maneuver
  • FIG. 5 illustrates an alternative embodiment of a chassis coupled to a camera module
  • FIGS. 6A and 6B illustrate embodiments of techniques for controlling a camera module and/or elements operatively coupled therewith
  • FIG. 7 illustrates embodiments of techniques for controlling a camera module and/or elements operatively coupled therewith
  • FIGS. 8A and 8B illustrate modules used for capturing an environment
  • FIG. 9 illustrates aspects of techniques for capturing an environment
  • FIG. 10 illustrates aspects of techniques for capturing an environment
  • FIGS. 11A to 11C illustrate aspects of techniques for capturing an environment
  • FIGS. 12A to 12C illustrate aspects of techniques for capturing an environment
  • FIG. 13 illustrates aspects of techniques for capturing an environment
  • FIG. 14 illustrates aspects of alternative techniques for capturing an environment
  • FIG. 15 illustrates an example embodiment of viewing an environment
  • FIG. 16 illustrates an alternative or complementary example embodiment of viewing an environment
  • FIG. 17 illustrates an alternative or complementary example embodiment of viewing an environment
  • FIG. 18 illustrates an alternative or complementary example embodiment of viewing an environment
  • FIG. 19 illustrates an example environment for supplemental content
  • FIG. 20 illustrates an example environment including supplemental content
  • FIG. 21 illustrates example supplemental content
  • FIG. 22 illustrates an example embodiment synchronizing devices for use with aspects herein;
  • FIG. 23 illustrates an example embodiment of a system for viewing media
  • FIGS. 24A to 24D illustrate example embodiments of a camera module and system using the camera module
  • FIGS. 25A and 25B illustrate example embodiments of a camera module utilizing mobile devices
  • FIG. 26 illustrates an example embodiment of a system using a camera module
  • FIG. 27 illustrates an example embodiment of use of a system using a camera module
  • FIGS. 28A and 28B illustrate example aspects related to field of vision stop and go.
  • FIG. 29 shows an example computing device.
  • FIG. 30 shows an example computing environment.
  • aspects herein generally relate to systems and methods for comprehensively capturing a target space or environment, as well as displaying or providing comprehensive captures of target spaces or environments.
  • These travelable comprehensive immersions provide an experience unique to each user because they can be explored continuously in three dimensions using control input. They have no start, end, timeline, or path, and are based on actual recorded media of the target space rather than a digital model.
  • Direction, movement, speed, elevation, location, viewing angle, and so forth are all placed in user hands with no duration or predetermined time element.
  • a target space can be any space or environment, including both indoor and outdoor public or private spaces.
  • A target space is comprehensively captured after a camera module maneuvers through the target space while recording. Maneuvering through the target space can include movement in all three dimensions, and in various embodiments may include traveling a linear path through the space, traveling multiple paths through the space, traveling a gridded path or series of gridded paths through the space, traveling a curved path or series of curved paths through the space, traveling diagonals of the space, following a human-walked path through the space, et cetera.
  • Maneuvering through the target space can include traveling along or near walls or boundaries of the target space, and in some embodiments may then involve logically segmenting the space therein into sections, grids, curves, et cetera, either based on the dimensions of the target space or on predefined intervals.
  • Maneuver can include a third, vertical dimension in addition to the area (e.g., floor or ground) covered, and the camera module can be held in a two-dimensional location while multiple vertical views are collected, or the comprehensive maneuver can occur following the same or different two-dimensional paths at different heights.
  • The camera module records photographs or video of the space, either continuously or according to a capture rate/interval, to provide combinable immersive views continuously or at discrete points for the entire maneuver.
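  • As a minimal illustrative sketch (not drawn from the disclosure itself) of the maneuver and capture-interval concepts above, the Python snippet below generates capture positions along a simple serpentine, gridded path through a rectangular target space; the function name, the rectangular-room assumption, and the example interval and height values are assumptions chosen for illustration.

```python
# Illustrative sketch only: serpentine coverage waypoints for a rectangular
# target space, spaced at an assumed capture interval.
def capture_waypoints(width_ft, depth_ft, interval_ft, height_ft=4.0):
    """Return (x, y, z) positions at which immersive media could be captured."""
    positions = []
    row = 0
    y = 0.0
    while y <= depth_ft:
        xs = [i * interval_ft for i in range(int(width_ft // interval_ft) + 1)]
        if row % 2 == 1:          # reverse alternate rows for a serpentine path
            xs = list(reversed(xs))
        for x in xs:
            positions.append((x, y, height_ft))
        y += interval_ft
        row += 1
    return positions

if __name__ == "__main__":
    # e.g., a 12 ft x 9 ft room captured every 3 ft at an assumed 4 ft camera height
    for p in capture_waypoints(12, 9, 3):
        print(p)
```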
  • Comprehensively capturing a target space can also include maneuvering to or around focal points to provide still further views or other enhanced images of items of interest within the space.
  • To “smoothly maneuver” means to maneuver in a fashion not substantially subject to bumps, shaking, or other disruption modifying the intended path and orientation of the camera module therethrough.
  • In this way, image quality is improved both in individual views and when stitching different individual views into adjacent views.
  • The travelable comprehensive immersion can be a file or group of files containing images and/or video of the target space combined in a manner that allows viewing of, movement through, and exploration of the target space in a non-linear and non-programmed manner. Because the space is “rebuilt” virtually (the camera module captures surrounding views in a variety of locations), the location and orientation of a viewer using the travelable comprehensive immersion can be modified in a substantially continuous manner, allowing movement to anywhere in the space and different viewing angles at any such point. In embodiments, these capabilities can be subject to a capture rate or interval, where discrete locations (e.g., 1 inch, 6 inches, 1 foot, 3 feet, 6 feet, and any other distance) are captured with interval gaps therebetween.
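  • As a minimal sketch of how a travelable comprehensive immersion might be organized as a group of files or records, the structure below keys immersive views by discrete capture locations and snaps a requested viewer position to the nearest captured location; the class name, fields, and lookup strategy are hypothetical illustrations rather than the format used by the disclosure.

```python
# Illustrative sketch: a travelable comprehensive immersion as a mapping from
# discrete capture locations to immersive (e.g., spherical) views.
from dataclasses import dataclass, field
import math

@dataclass
class TravelableImmersion:
    capture_interval: float                      # e.g., 1 foot between captured locations
    views: dict = field(default_factory=dict)    # (x, y, z) -> immersive view data

    def add_view(self, position, immersive_view):
        self.views[position] = immersive_view

    def nearest_view(self, requested_position):
        """Return the captured view closest to the requested viewer position."""
        return min(self.views.items(),
                   key=lambda kv: math.dist(kv[0], requested_position))

# Usage: register views captured 1 ft apart, then look one up near (2.4, 0, 4).
immersion = TravelableImmersion(capture_interval=1.0)
immersion.add_view((2.0, 0.0, 4.0), "view_a")
immersion.add_view((3.0, 0.0, 4.0), "view_b")
print(immersion.nearest_view((2.4, 0.0, 4.0))[1])   # -> view_a
```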
  • the terms “may” and “may be” indicate a possibility of an occurrence within a set of circumstances; a possession of a specified property, characteristic or function; and/or qualify another verb by expressing one or more of an ability, capability, or possibility associated with the qualified verb. Accordingly, usage of “may” and “may be” indicates that a modified term is apparently appropriate, capable, or suitable for an indicated capacity, function, or usage, while taking into account that in some circumstances the modified term may sometimes not be appropriate, capable, or suitable. For example, in some circumstances an event or capacity can be expected, while in other circumstances the event or capacity cannot occur—this distinction is captured by the terms “may” and “may be.”
  • FIGS. 1A and 1B illustrate example techniques for viewing an environment.
  • FIG. 1A shows a person in the environment.
  • the person may be a guide for the environment, such as a realtor or customer service representative, or a person interested in but unfamiliar with the environment, such as a prospective buyer or tourist visiting for the first time.
  • This provides the greatest flexibility and realism in viewing an environment inasmuch as the person can choose her location and viewing angle, but she must be physically present.
  • physical presence may not always be possible.
  • FIG. 1B shows a computer interface for, e.g., a virtual tour of the environment.
  • the interface can include a main photograph, controls, and thumbnails of other photos. Based on the controls or selection of a thumbnail, the main photograph changes to provide a larger view of particular views in the environment.
  • the environment can only be viewed in the very limited number of views available, thereby leaving large gaps and a stuttered, unrealistic viewing experience.
  • FIG. 2 illustrates an embodiment of an immersive camera module for capturing an environment.
  • the camera module is an immersive camera module which collects a spherical view using a plurality of cameras, providing a continuous view including rotational degrees of freedom similar to or exceeding those possessed by a person standing at the location in question.
  • the camera module can include a mounting block having a plurality of camera mounting sites and the plurality of cameras mounted thereon.
  • the cameras may be coupled without use of a camera mounting block (e.g., integral hardware facilitates their connection).
  • the plurality of cameras are arranged such that each camera has a partially overlapping field of view with one or more adjacent cameras to facilitate collection of images sharing overlapping portions which can be merged by matching portions of different images to provide a comprehensive capture of the target space.
  • the camera module is configured to comprehensively capture the target space.
  • the camera module includes six cameras, with five mounted to provide a 360-degree panoramic view around the camera module and one mounted atop to allow upward viewing.
  • the cameras may be mounted at angles to modify the field of view.
  • the panoramic series of cameras can include a slight downward tilt to reduce field of view overlap with the sixth camera directed upward, thereby maximizing the amount of unique image data in each immersive image constructed from individual camera images.
  • the camera module(s) illustrated herein are provided for purposes of example only, and do not limit other possible camera module arrangements. In embodiments, other numbers of cameras can be utilized, and camera angles other than those pictured (e.g., downward, between top and side cameras, et cetera) can be employed without departing from the scope or spirit of the innovation.
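  • A brief sketch of one possible orientation scheme for the six-camera arrangement described above (five panoramic cameras with a slight downward tilt plus one upward-facing camera); the 72-degree spacing simply follows from dividing a full rotation among five cameras, while the specific tilt value is an assumed placeholder rather than a figure from the disclosure.

```python
# Illustrative sketch: yaw/pitch angles (degrees) for a six-camera immersive
# module: five cameras in a panoramic ring tilted slightly downward and one
# camera aimed upward.
def six_camera_orientations(ring_tilt_deg=-10.0):
    orientations = []
    for i in range(5):                     # five cameras spaced 360/5 = 72 degrees apart
        orientations.append({"camera": i, "yaw_deg": i * 72.0, "pitch_deg": ring_tilt_deg})
    orientations.append({"camera": 5, "yaw_deg": 0.0, "pitch_deg": 90.0})  # top camera
    return orientations

for o in six_camera_orientations():
    print(o)
```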
  • the cameras can provide images collected to temporary or persistent storage, or directly to an immersive video generation module for production of an immersive video of the target space.
  • the cameras can utilize any wired or wireless means of communication and/or powering.
  • the camera module can be operatively coupled to a chassis.
  • the chassis is configured to smoothly maneuver the camera module comprehensively through the target space. This chassis is also visible in later figures.
  • FIGS. 3A and 3B illustrate embodiments of camera modules coupled to chasses and immersive capture vehicles for maneuvering the camera modules.
  • chasses can be coupled to immersive capture vehicles which smoothly maneuver the chassis and immersive camera module comprehensively through the target space.
  • immersive capture vehicles may have two or four wheels, or any other number.
  • the immersive capture vehicle may move about on one or more spherical wheels, or one or more continuous tracks (e.g., “tank tread”).
  • the propulsion mechanisms employed with the immersive capture vehicles can influence their speed, maneuverability (e.g., turning radius), capability for negotiating obstacles (e.g., a threshold, raised carpet, a staircase, and others) or terrain (e.g., wet surfaces, mud, snow, gravel, and others).
  • the immersive capture vehicle includes at least a vehicle logic module capable of managing maneuver of the immersive capture vehicle (e.g., direction and speed) by controlling its propulsion mechanisms.
  • The vehicle logic module can be operatively coupled with or include a communication module (e.g., to send and receive information); storage and/or a general or application-specific processor (e.g., storing data used for controlling movement, calculating paths of movement, modifying vehicle operation, and so forth); sensor modules (e.g., for collecting data about vehicle operation or about the environment); and others.
  • the logic module can receive information about a target space before beginning or discover information about the target space (e.g., using the sensor module) before or during comprehensive capture of the target space. Techniques by which the logic module can automatically capture spaces or capture spaces based on user input are discussed further below.
  • a logic module can include a location module, which can utilize one or more location techniques such as a global positioning system, a triangulation technique, or other techniques providing an absolute location, or techniques for discovering a relative location at a distance (e.g., radar, sonar, laser, infrared). Logic can be provided to prevent collisions in the target space while immersive media is being collected.
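  • As one hedged example of the collision-prevention logic mentioned above, the routine below stops the immersive capture vehicle and turns it toward the clearer side when a relative-range reading (e.g., from sonar or laser) in the direction of travel falls below a clearance threshold; the sensor interface, threshold, and steering policy are assumptions made for illustration, not a prescribed implementation.

```python
# Illustrative sketch: halt or redirect the capture vehicle when an obstacle is
# detected closer than a clearance threshold by a relative-range sensor.
MIN_CLEARANCE_M = 0.5   # assumed clearance; tuned per vehicle and target space

def avoid_collision(range_readings_m, heading_deg):
    """range_readings_m: mapping of bearing (deg, relative to heading) -> distance (m)."""
    ahead = [d for bearing, d in range_readings_m.items() if abs(bearing) <= 30]
    if ahead and min(ahead) < MIN_CLEARANCE_M:
        # Obstacle in the direction of travel: stop, then turn toward the clearer side.
        left = min((d for b, d in range_readings_m.items() if b < -30), default=0.0)
        right = min((d for b, d in range_readings_m.items() if b > 30), default=0.0)
        return {"speed": 0.0, "heading_deg": heading_deg + (45 if right > left else -45)}
    return {"speed": 0.3, "heading_deg": heading_deg}   # path is clear; proceed smoothly

print(avoid_collision({-60: 2.0, 0: 0.4, 60: 1.5}, heading_deg=90))
```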
  • an immersive capture vehicle can be a robot. In an embodiment, an immersive capture vehicle can be a self-balancing automated device.
  • FIGS. 4A, 4B, 4C, and 4D illustrate embodiments of camera modules coupled to chasses and physical interfaces for human maneuver.
  • physical interfaces such as a helmet ( FIG. 4A ), a harness ( FIG. 4B ), or a grip ( FIG. 4D ) can be provided.
  • the chassis itself can be gripped by a person ( FIG. 4C ).
  • other components of the system can be integrated into the physical interface and/or chassis.
  • a computer readable storage media and/or hardware and/or software of an immersive video generation module can be maintained in, e.g., the grip of FIG. 4D .
  • Physical interfaces can include various aspects to improve ergonomics.
  • The physical interface and/or chassis can be pivotable, extendable or retractable, or otherwise adjustable to provide for ergonomic carriage facilitating smooth maneuver of the chassis and camera module.
  • smooth maneuver may or may not include substantially level or stable maneuver of the camera module, but may instead mimic human motion for a walking experience when viewed.
  • a person can stabilize the human interface but be conveyed on another vehicle (e.g., rolling chair as in FIG. 4C ) to reduce the impact of motion.
  • FIG. 5 illustrates an alternative embodiment of a chassis coupled to a camera module.
  • Chasses herein can include an adjustment module to change the location or orientation of the camera module with respect to, e.g., a point on the chassis. This can include telescoping members, jointed members for pivoting or tilting, members which can spin, et cetera.
  • an adjustment module can include a pivot having a plummet thereunder.
  • the adjustment mechanism including the pivot-plummet apparatus is one technique for reducing or eliminating shake or tilt during starting and stopping of system movement or during other conditions such as uneven flooring.
  • Other techniques can include, alternatively or complementarily, springs or suspensions, flexible joints, padding, et cetera.
  • FIGS. 6A and 6B illustrate embodiments of techniques for controlling a camera module and/or elements operatively coupled therewith.
  • FIGS. 6A and 6B illustrate manual and/or semi-automatic techniques for control of a camera module or aspects operatively coupled therewith.
  • FIG. 6A shows a tablet while FIG. 6B shows a video game style controller, both of which can be used for remote control of systems herein.
  • Alternatives to touchscreens and controllers can include a mouse, keyboard, joystick, trackball, pointing stick, stylus, et cetera.
  • FIG. 6B specifically shows the controller used to control spinning (including a rate of spinning) of the camera module on the chassis.
  • controllers can be used to start, steer, and stop immersive capture vehicles, enable or disable camera capture, adjust the camera module using an adjustment module of the chassis, et cetera.
  • Actuators can be provided on various elements of the system and operatively coupled with a communication module to facilitate remote control. Further, in alternative or complementary embodiments, gesture-based feedback can be used for control (e.g., user head movement where elements are controlled using wearable headgear).
  • FIG. 7 illustrates embodiments of techniques for controlling a camera module and/or elements operatively coupled therewith.
  • a controller can be used to control one or more camera modules present at a remote event.
  • Substantially static chasses can be provided at, e.g., seat locations at a sporting event.
  • Simulating attendance (e.g., based on a pay-per-view arrangement, a subscriber service, affiliation with a team, et cetera), users can control camera modules to experience the remote event. This experience can be provisioned in real time or later based on recorded immersive media capture.
  • FIGS. 8A and 8B illustrate modules used for capturing an environment.
  • the media produced comprehensively capturing a target space can be provided to an immersive video generation module which combines the images to create a travelable comprehensive immersion.
  • the immersive video generation module can be operatively coupled with, e.g., an input/output or communication module to receive media for processing and to provide the generated travelable comprehensive immersion.
  • FIG. 8B shows an alternative arrangement illustrating an example flow of information in greater detail.
  • the immersive camera module collects immersive media, and in embodiments can be at least partially controlled by a user control.
  • the immersive camera module then provides collected media to one or both of storage media and the immersive video generation module.
  • the immersive video generation module outputs at least one travelable comprehensive immersion, which can be provided to user displays and controls either via storage or directly from the immersive video generation module.
  • FIGS. 8A and 8B are provided for example purposes only, and the modules present as well as their arrangement and information flow can vary without departing from the scope or spirit of the innovation.
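  • As an illustrative sketch of the information flow above, the snippet below stitches the images captured by the plurality of cameras at each location into a location immersion and keys the results by capture location; it assumes the OpenCV library is available and uses its generic panorama stitcher as a stand-in for whatever stitching algorithm a given embodiment employs, and the helper names are hypothetical.

```python
# Illustrative sketch: stitch per-camera images into a location immersion for
# each capture location, then key the results by location (cf. FIGS. 8A-8B).
import cv2  # assumes the opencv-python package is installed

def stitch_location(images):
    """Combine partially-overlapping camera images from one location into a panorama."""
    stitcher = cv2.Stitcher_create(cv2.Stitcher_PANORAMA)
    status, panorama = stitcher.stitch(images)
    if status != cv2.Stitcher_OK:
        raise RuntimeError(f"stitching failed with status {status}")
    return panorama

def build_immersion(captures):
    """captures: mapping of capture location -> list of images from the camera module."""
    return {location: stitch_location(images) for location, images in captures.items()}

# Usage (file paths are placeholders):
# captures = {(0, 0): [cv2.imread(p) for p in ("cam0.jpg", "cam1.jpg", "cam2.jpg")]}
# immersion = build_immersion(captures)
```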
  • FIG. 9 illustrates aspects of techniques for capturing an environment.
  • a user can use a computer or another device to provide signals or pre-program a system to comprehensively capture a space automatically or semi-automatically.
  • Walls can be marked virtually (e.g., using an interface for programming comprehensive capture) or physically (e.g., using visible or invisible light wavelengths, applying color to walls, applying markers to walls) to aid with, at least, combining media to produce a travelable comprehensive immersion of the target space.
  • light or markers invisible to the human eye can be used to avoid changes to the environment and/or any need for image processing to remove added elements.
  • FIG. 10 illustrates aspects of techniques for capturing an environment.
  • an immersive capture vehicle can transport a camera module and connecting chassis about the exterior of a room, near or against the room's walls. After completing its loop, the room may be adequately imaged in some embodiments, or the interior of the room can be maneuvered (e.g., according to a pattern or pre-planned path) to provide additional full-resolution views from within the target space.
  • the target space can be mapped (or a path created therein) prior to recording and maneuvering, or the target space can be mapped during maneuvering and recording (e.g., interior is discovered by maneuvering about the exterior).
  • FIGS. 11A to 11C illustrate aspects of techniques for capturing an environment. While FIG. 10 and other drawings herein can reflect continuous imaging during maneuver, in embodiments pictures can be taken at relative or absolute intervals during maneuver. Thus, as can be appreciated in FIGS. 11A to 11C, a target resolution or capture rate can determine how frequently immersive media is captured. In FIGS. 11A to 11C, the camera module can advance by a distance of x between immersive media capture instances. In embodiments, x can be an increment of, e.g., 1 inch, 6 inches, 1 foot, 2 feet, 3 feet, 6 feet, or more, any amount therebetween, or any amount greater or less.
  • FIG. 11B in particular also demonstrates how the height of a camera module can be identified.
  • The chassis can be supported at a height of y1 while the camera module is located at a height of y2, dependent upon y1 and the (fixed or variable) length of the chassis.
  • FIGS. 12A to 12C illustrate aspects of techniques for capturing an environment.
  • FIG. 12A illustrates the fields of view captured by two cameras in opposing positions.
  • Knowledge of the field of view (e.g., as an angle) of one or more cameras can be used to determine the amount of a target space captured from a given location.
  • cameras are of a resolution facilitating the use of zoom to comprehensively capture the area, allowing for the use of fixed-location camera modules or obviating the need for the camera module to be maneuvered over every piece of the target space.
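  • The relationship between a camera's field-of-view angle and the span of the target space captured from a given location can be expressed as a short worked example; the formula below is ordinary pinhole-camera geometry rather than anything specific to the disclosure.

```python
# Illustrative sketch: width of target space covered by a camera with a given
# horizontal field of view at a given distance (simple pinhole geometry).
import math

def coverage_width(fov_deg, distance):
    """Span captured at `distance` by a camera with horizontal field of view `fov_deg`."""
    return 2 * distance * math.tan(math.radians(fov_deg) / 2)

# e.g., a camera with a 90-degree field of view placed 10 feet from a wall
# covers roughly 20 feet of that wall
print(round(coverage_width(90, 10), 1))   # -> 20.0
```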
  • FIG. 12B illustrates the additional space captured (as well as space overlapped) by locating single cameras or multi-camera modules at additional sites in a target space.
  • FIG. 12C illustrates another example in which twelve paths can be travelled by a moving camera module to provide immersive media comprehensively capturing a square target space.
  • Zoom features can be employed based on pre-shot video tracks combined as described herein, allowing the user to experience the target space in any location or orientation without a sense of confinement to the pre-shot lines.
  • This example is provided for illustrative purposes only, and it is understood on review of the disclosures herein how this concept can be extended to any target space.
  • FIG. 13 illustrates aspects of techniques for capturing an environment. Specifically, FIG. 13 illustrates a camera module arrangement positioned about an event field (e.g., with opposing goals). The field of view is represented using lines extending from the cameras to show how the field area is covered with opposing camera modules. This can be employed with techniques such as, e.g., those shown in FIGS. 12A and 12B.
  • a user can be enabled to stand in the middle of an event without disruption using combined immersive media from multiple angles.
  • the immersion can include views that appear at eye-level from points where no attendee would be permitted to stand.
  • an immersive video generation module can include an algorithm for combining or stitching opposing or offset camera views to create stitched live-video views without requiring a camera module in that location. In this fashion, users may, for example, view from “within” a sports game, seeing players run around them without any disruption to the players.
  • FIG. 14 returns to aspects of capturing an environment relating to a remote event.
  • Virtual attendance can be simulated either during the event or in an immersive replay.
  • Multiple camera modules can be combined, treating their locations as an interval, and various zoom and image processing can provide views within the space therebetween (as sketched following this discussion). While the camera modules are shown directed towards the event (e.g., basketball court), global media may be collected to allow a remote attendee to look at other aspects (e.g., the crowd).
  • Embodiments such as that of, e.g., FIG. 14 can provide for a premium location for virtual attendance. Further, access to areas not available to physical attendees (e.g., locker rooms, warm-up areas, bench or dugout, and others) can be provided through camera modules located thereabout.
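  • One hedged way to provide views within the space between two camera module locations, as mentioned above, is a simple position-weighted blend of the two captured views; this cross-fade is only a stand-in for the zoom and image processing the disclosure refers to, and the placeholder arrays stand in for real captured views.

```python
# Illustrative sketch: approximate an intermediate view between two camera
# module locations by blending their captured views according to the virtual
# viewer's position along the interval between them.
import numpy as np

def intermediate_view(view_a, view_b, t):
    """t in [0, 1]: 0 = at module A, 1 = at module B; views are same-shape arrays."""
    t = min(max(t, 0.0), 1.0)
    return ((1.0 - t) * view_a + t * view_b).astype(view_a.dtype)

# Usage with placeholder image arrays:
a = np.full((4, 4, 3), 10, dtype=np.uint8)
b = np.full((4, 4, 3), 250, dtype=np.uint8)
print(intermediate_view(a, b, 0.25)[0, 0])   # -> [70 70 70]
```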
  • FIG. 15 illustrates an example embodiment of viewing an environment.
  • A computer can be used to navigate a travelable comprehensive immersion. Rather than discrete movements between views selected by a third party, the entire space is continuously explorable by the user, who can translate or rotate with up to six degrees of freedom throughout the boundaries of the target space (e.g., walls of a structure).
  • An immersion engine such as those described herein can produce a travelable comprehensive immersion which can then be provided from the engine or from storage to a viewer (e.g., custom application, internet browser, other display). The viewer can control the immersion using controls such as a mouse or pointer, keyboard, and/or touch screen, et cetera.
  • a travelable comprehensive immersion can be received (e.g., from storage and/or an immersive video generation module).
  • An initial viewer state of the travelable comprehensive immersion is displayed (e.g., entryway, initial location programmed into immersion, initial location selected by user).
  • User input can then be received related to the travelable comprehensive immersion. Based on the user input, a subsequent viewer state can be displayed.
  • the subsequent viewer state can differ from the initial viewer state in at least one of viewer position (e.g., location within the target space) or viewer orientation (e.g., viewing direction at a location within the target space).
  • Additional changes provided in subsequent state(s) can include environmental changes not based on user input, such as moving water, motion of the sun, curtains moving due to an open window, et cetera.
  • the environment of the target space can be dynamic.
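  • As a minimal sketch of the viewer-state flow described above (an initial state, user input, and a subsequent state differing in viewer position and/or viewer orientation), consider the following; the state fields, boundary box, and input format are illustrative assumptions, and environmental changes independent of user input are omitted for brevity.

```python
# Illustrative sketch: update a viewer state within a travelable comprehensive
# immersion from user input, keeping the viewer inside the target space.
from dataclasses import dataclass

@dataclass
class ViewerState:
    position: tuple      # (x, y, z) within the target space
    yaw_deg: float       # viewing direction at that position
    pitch_deg: float

def next_state(state, user_input, bounds):
    """user_input: dict with 'move' (dx, dy, dz) and 'turn' (dyaw, dpitch)."""
    dx, dy, dz = user_input.get("move", (0, 0, 0))
    (x0, x1), (y0, y1), (z0, z1) = bounds          # boundaries of the target space
    x = min(max(state.position[0] + dx, x0), x1)
    y = min(max(state.position[1] + dy, y0), y1)
    z = min(max(state.position[2] + dz, z0), z1)
    dyaw, dpitch = user_input.get("turn", (0, 0))
    return ViewerState((x, y, z), (state.yaw_deg + dyaw) % 360, state.pitch_deg + dpitch)

initial = ViewerState((1.0, 1.0, 4.0), yaw_deg=0.0, pitch_deg=0.0)
print(next_state(initial, {"move": (2, 0, 0), "turn": (90, 0)}, ((0, 12), (0, 9), (0, 8))))
```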
  • Immersions can be provided using remote storage.
  • an immersion is provided on a software-as-a-service basis or from a cloud hosting site.
  • Billing can be based on the number of times an immersion is accessed, the time spent in an immersion, and so forth.
  • Recording and/or viewing technology can also be provided as a service, and both viewing applications and immersion content can be provisioned wholly remotely.
  • As shown in FIG. 15, while collection of target space media is immersive, its display can be immersive (e.g., spherical view) or semi-immersive (e.g., unlimited maneuver on conventional display).
  • FIG. 16 illustrates an alternative or complementary example embodiment of viewing an environment which is fully immersive, providing media which fully engages the user's audiovisual senses using a virtual reality display worn over both eyes and, optionally, headphones over the user's ears. Where headphones are provided, audio tracks including environmental noise and automatic or selectable audio general to the target space or relating to specific locations in the target space can be provided.
  • a virtual tour using systems and methods herein can provide music or local noise from the target space, or can include voice tracks or other audible information related to the tour which can be provided at the user's speed and based on the user's interest in the target space.
  • FIG. 17 illustrates an alternative or complementary example embodiment of viewing an environment.
  • a travelable comprehensive immersion can be provided on a single screen or dual screens (one for each eye) in a virtual reality headset.
  • the travelable comprehensive immersion can be controlled using a controller (e.g., shown in FIGS. 16 and 17 ) or gestures (e.g., head movement while wearing the virtual reality headset).
  • sensors can be provided on user extremities or elsewhere on the body to enable intuitive gesture control.
  • FIG. 18 illustrates an alternative or complementary example embodiment of viewing an environment.
  • the user may attend a remote event using techniques herein.
  • the user is viewing the court from a camera module located above and beyond the courtside modules.
  • The user can switch views to and/or control the other cameras visible in the field of view provided.
  • FIG. 19 illustrates an example environment for supplemental content.
  • a target space includes various household furnishings. Users may be interested in these furnishings, either based on their interest in the target space or based on a virtual reality retail experience.
  • FIG. 20 illustrates an example environment including supplemental content.
  • One or more of the supplemental content items providing additional views, price details, et cetera, related to the furnishings in the target space can be shown in the display.
  • These are only a few examples of the user's control to access further information regarding items in a target space.
  • Such information can automatically populate based on the user's view, or be provided based on user selection using a controller or gesture (e.g., pressing a button, reaching out or pointing toward an item, and so forth).
  • the information can contain links or information for purchasing, or purchasing can be completed entirely in the travelable comprehensive immersion.
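  • As one illustrative way supplemental content could automatically populate based on the user's view, the sketch below surfaces content for any furnishing whose bearing from the viewer falls inside the current viewing cone; the item catalog, angular threshold, and function names are assumptions made for this example.

```python
# Illustrative sketch: surface supplemental content for items that fall within
# the viewer's current field of view (cf. FIGS. 19-21).
import math

ITEMS = {  # hypothetical catalog: item -> location in the space and supplemental content
    "sofa": {"position": (3.0, 4.0), "content": "Fabric sofa: additional views and pricing"},
    "lamp": {"position": (8.0, 1.0), "content": "Floor lamp: additional views and pricing"},
}

def items_in_view(viewer_xy, yaw_deg, half_fov_deg=30.0):
    visible = []
    for name, item in ITEMS.items():
        dx = item["position"][0] - viewer_xy[0]
        dy = item["position"][1] - viewer_xy[1]
        bearing = math.degrees(math.atan2(dy, dx))
        offset = (bearing - yaw_deg + 180) % 360 - 180      # signed angular difference
        if abs(offset) <= half_fov_deg:
            visible.append((name, item["content"]))
    return visible

print(items_in_view(viewer_xy=(1.0, 1.0), yaw_deg=55.0))    # the sofa is roughly ahead
```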
  • FIG. 21 illustrates example supplemental content which can be superimposed over an environment.
  • Supplemental content may be provided separately from the travelable comprehensive immersion, and in embodiments a supplemental content module can augment or superimpose supplemental content on an immersion without leveraging the immersive video generation engine or modifying the underlying immersion file(s).
  • supplemental content can be provided to a target space where the user is present in the target space and using a transparent or translucent virtual reality headset.
  • a supplemental content module acts in a standalone manner to show virtual items in the space or provide information about virtual or real items in the space visible through the virtual reality headset providing superimposition.
  • FIG. 22 illustrates an example embodiment synchronizing devices for use with aspects herein.
  • a single controller can provide synchronizing signals or provision content simultaneously to a plurality of devices.
  • Various devices or virtual reality systems (e.g., virtual reality headsets) can be synchronized in this manner.
  • Users can then co-attend a tour while maintaining some element of autonomy (e.g., view different things at tour stops) or the users can diverge immediately.
  • user locations can be stored in memory to permit pausing or resuming of group activity and/or to aid in individual activity after group activity.
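  • A brief sketch, under assumed names, of how a single controller might synchronize multiple viewers to a tour stop while preserving individual viewing autonomy, and how per-user locations might be stored to support pausing and resuming group activity; none of the class or method names come from the disclosure.

```python
# Illustrative sketch: synchronize several viewers to a tour stop while each
# keeps an individual viewing orientation; store per-user locations so group
# activity can be paused and resumed.
class GroupTourController:
    def __init__(self):
        self.users = {}          # user id -> {"position": ..., "yaw_deg": ...}
        self.saved = {}

    def join(self, user_id, position=(0.0, 0.0, 4.0), yaw_deg=0.0):
        self.users[user_id] = {"position": position, "yaw_deg": yaw_deg}

    def sync_to_stop(self, stop_position):
        for state in self.users.values():
            state["position"] = stop_position   # everyone moves; orientations stay individual

    def pause(self):
        self.saved = {uid: dict(state) for uid, state in self.users.items()}

    def resume(self):
        self.users.update({uid: dict(state) for uid, state in self.saved.items()})

tour = GroupTourController()
tour.join("alice")
tour.join("bob", yaw_deg=90.0)
tour.sync_to_stop((5.0, 2.0, 4.0))
tour.pause()
print(tour.users["bob"])
```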
  • FIG. 23 illustrates an example embodiment of a system for viewing media.
  • an immersion engine can be used to provide or render a travelable comprehensive immersion.
  • the immersion engine may be the same as or a separate element from an immersive video generation module, and may communicate with various input/output modules, displays, or other intervening components.
  • FIGS. 24A to 24D illustrate an embodiment of a camera module using eight (or another number of) lenses to create a virtual reality camera module.
  • This camera addresses flaws related to focal point and parallax that result in blurry or doubled images (e.g., in close-up shots or in small spaces).
  • focal point can be reduced to a minimum. This can be accomplished using small lenses (e.g., one inch or less, one half inch or less, one quarter inch or less) in some embodiments.
  • FIG. 24A illustrates an example embodiment of a camera (e.g., charge coupled device) which can be combined in such a module.
  • the ribbon used to connect the device and its lens is shown extended to a longer distance in FIG. 24B .
  • the ribbon length can be, e.g., 5 feet.
  • the ribbon connector (which can be small in size in particular embodiments) is connected into position in the camera (or, e.g., phone, laptop or other device) carrying immersive imaging software.
  • FIG. 24C shows how the above lens arrangement can be repeated (e.g., eight times for eight lenses) and placed into a mounting block (in this case, e.g., octahedron block) housing the lenses.
  • the (e.g., eight) separate extended ribbons (or wires) can then be extended down a pole or chassis to interact with the device including storage and processing power.
  • no ribbons are required as compact wireless communication components are provided at each lens.
  • the lenses can share a common wire or ribbon after being electrically coupled at the mounting block.
  • a group of cables connected to individual cameras, mobile devices, et cetera can connect into a mobile computer or other computing device.
  • The lenses can be arranged in, e.g., an octahedron. This is intended to minimize space between lenses and to arrange the respective fields of view so as to avoid difficulties reconciling parallax.
  • The distance between lenses and processing and/or storage equipment can vary from zero to 30 feet or more. For example, with a drone carrying onboard computing elements, the distance between the lens arrangement and computing elements can be zero or minimal. For VR camera rigs, the distance can be 3 to 10 feet. And for remote security cameras, sporting event cameras, concerts, et cetera, the distance can be greater than 10 feet. These are only examples, and various other arrangements using wired or wireless components at distance can be employed.
  • Computing elements disposed at a distance from a lens or lenses may be larger or more power-intensive than those which could be integrated into a mobile element, or may be sized such that close proximity to the camera lenses would be impossible without obstructing the wide view(s).
  • a tiny lens or group of lenses can be provided in an enclosure courtside at a basketball game to capture the entire game without blocking spectator views of the game.
  • The footprint with respect to both other spectators (or viewing apparatuses) and the lens field of view is reduced by tethering (via wired or wireless means) to offset larger components. In this fashion, neither the visual data collected nor the image quality/processing need suffer on behalf of the other.
  • Storage, processing, and power can be located distal to the lens or lenses to support high resolution, rapid stitching, and other processing to minimize camera system footprint.
  • FIG. 24D shows the above-disclosed camera module mounted atop a self-balancing immersive capture vehicle.
  • the base of the self-balancing immersive capture vehicle can include one or more devices for each camera unit (or one device for multiple camera units) including memory and logic for recording synchronized and coordinated video producing immersive media.
  • Various wired or wireless automatic, semi-automatic, and/or manual controls can be included between components of the system and/or users of the system. Batteries or other power means can also be provided.
  • focal points can be controlled to aid in combining different media sources into an immersive media product.
  • the footprint of the device itself is quite small, and the device will not (or only minimally) interrupt clear views of the target space.
  • image processing logic aboard the system or offsite can be used to remove the device itself from portions of the image which it interrupts.
  • FIGS. 25A and 25B illustrate an embodiment where a plurality of phones, tablets, or other mobile devices are tethered to leverage image capture capabilities to produce a deconstructed camera such as that of FIGS. 24A to 24D .
  • FIG. 25A shows a plurality of cell phone devices tethered using a wired configuration
  • FIG. 25B shows each of the phones enclosed in a housing.
  • the tethers can run to a camera mount on top of a camera rig.
  • The rig's chassis (through which wired tethers can be threaded) can be mounted atop a self-balancing vehicle as described herein.
  • the completed apparatus allows for rapid, steady, programmable, unmanned image capture, including high definition video, with little or no footprint or artifact left on the captured image data.
  • the system can also include components and logic for post-production, or provide captured image data to other systems for such.
  • the self-balancing vehicle can be provided with gimbal stabilizers and self-guiding software to produce steady, zero-footprint shots (requiring no nadir). Due to the stability and high quality, removal of undesirable video imperfections such as ghosting and blurring is made simpler, less-intensive, and more accurate.
  • Hardware and/or other components for such use can be provided in the vehicle or rig, or be accomplished remote thereto.
  • FIG. 26 shows an application of systems described in earlier drawings, illustrating a self-balancing rig for image capture as described herein.
  • FIG. 27 shows an application of systems described in, e.g., FIGS. 24A to 24D , FIG. 26 , and others.
  • the chassis is automatically extendable to provide smooth immersive video travelling up a staircase where the vehicle cannot traverse the staircase or where movement up the staircase would be too disruptive to the consistency and smoothness of the immersive video.
  • FIGS. 28A and 28B illustrate example aspects relating to field of vision control.
  • FIGS. 28A and 28B relate to examples employing field of vision stop and go (FVSG).
  • FVSG control can be employed to modify motion when the field of view is changed.
  • In embodiments, motion in the immersion can be changed (e.g., stopped, slowed, or limited to particular dimensions such as up-and-down movement but no lateral movement) to assist with more detailed viewing of a particular site in the immersion.
  • Thereafter, motion can resume.
  • FVSG can be toggled on and off, and may automatically engage or disengage based on various rules (e.g., entering a room during a tour where a virtual agent is speaking and looking around the room from a stationary view in relation to the virtual agent; FVSG returns the user view to a direction of travel or specific items of interest based on virtual agent activity).
  • the agent can be instructed to avoid talking while walking so that any verbal communication is not met with a pause triggering FVSG activity.
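  • As a hedged sketch of one way FVSG control might gate motion when the field of view changes, the routine below pauses travel while the viewing direction departs from the direction of travel by more than a threshold and resumes it when the view realigns; the threshold and the simple stop/go policy are assumptions for illustration only.

```python
# Illustrative sketch: field of vision stop and go (FVSG). Travel pauses when
# the viewing direction diverges from the direction of travel beyond a
# threshold, and resumes when the view realigns (cf. FIGS. 28A and 28B).
def fvsg_speed(travel_heading_deg, view_heading_deg, cruise_speed,
               fvsg_enabled=True, threshold_deg=45.0):
    if not fvsg_enabled:
        return cruise_speed
    offset = (view_heading_deg - travel_heading_deg + 180) % 360 - 180
    if abs(offset) > threshold_deg:
        return 0.0               # viewer is inspecting something off-path: stop
    return cruise_speed          # view roughly aligned with travel: go

print(fvsg_speed(0.0, 10.0, cruise_speed=1.0))    # -> 1.0 (keep moving)
print(fvsg_speed(0.0, 120.0, cruise_speed=1.0))   # -> 0.0 (pause for detailed viewing)
```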
  • aspects herein can use high-definition or ultra-high-definition resolution cameras. Further technologies leveraged can include global positioning systems and other techniques. Location techniques can also employ cellular or network-based location, triangulation, radar, sonar, infrared or laser techniques, and image analysis or processing to discern distances and location information from images collected.
  • Aerial or waterborne drones can be utilized in various embodiments as an immersive capture vehicle.
  • two or more vehicles (which can be any combination of land-based, aerial, or marine) can be used simultaneously in a coordinated fashion to comprehensively capture a target space with greater speed or to capture views from locations and orientations which cannot be provided by a single device.
  • Multiple devices can follow the same track in two dimensions at different heights, or different paths at the same or different heights.
  • Multiple vehicles can be locationally “anchored” to one another for alignment or offset to aid in coordination, and one or both may include independent navigation systems to aid in location control.
  • Combination of the various images can prevent the existence of blind spots in views created.
  • a continuous, single and uncut scene of the target space is provided in both static and moving manners. Fluid travel in any direction of space up to the boundaries can be provided.
  • features of interest or “hotpoints” can be emphasized in immersions by providing supplemental content, related audio content, particular views, or other aspects.
  • Such aspects can be a window with a view, a vista or patio, a fireplace, a water feature, et cetera.
  • the environment of immersions can change, such as providing a 24-hour lighting cycle based on sun and/or weather.
  • the immersion permits users to control the interest, pace, and length of a tour or other remote viewing.
  • the viewing can be sped up or slowed down at user desire.
  • Static cameras can be integrated with movable camera modules to provide additional views or reference views which can be used to aid in navigation or to provide specific visual information to users.
  • While aspects herein related to recording and providing immersions in embodiments concern track-less, free movement by the user, movable cameras or virtual viewing can travel along pre-programmed tracks in embodiments still using other aspects of the innovation.
  • an immersion can be edited to show the inclusion or exclusion of items and/or changes to the target space such as removal of a wall or other renovation.
  • the non-changed portions of the immersion remain recorded media of the actual space, while modelling can be leveraged to integrate changes to the actual space to realistically display the modifications of the target space.
  • Where a target space includes partitions which are removed through editing (e.g., knocking out a wall), actual collected media of both sides can be stitched, with only the space occupied by the removed wall being a model or virtualization of the space.
  • Augmented reality technology can be leveraged as well.
  • Controls can include user interfaces that allow jumping to different portions of an immersion, speed controls (e.g., fast forward and/or rewind based on movement or viewing orientation), mute button, drone view button (in relevant embodiments or where the drone view is distinguishable from the main immersive view), still capture button, time lapse (to pause environment or other activity and view freeze), view angle controls, location or position controls, view outside target space (e.g., view of building from outside or above), and so forth.
  • The number of cameras can vary based on particular camera modules. Cost, field of view, resolution, lens size, and other factors can be considered to customize a camera module or camera modules for a particular use.
  • Example services provided with aspects herein are solo target space (e.g., apartment, home, or commercial unit) tours, guided tours, three-dimensional and 360-degree floorplans provided by augmented reality technology, websites or other network resources for hosting such (e.g., one per space or multiple spaces at a single hub), applications to aid in contracting, purchasing, payment, et cetera, related to immersions or supplemental content, and so forth.
  • immersive media can be used for training purposes.
  • individual cameras or camera modules located around a sports field can collect combinable media related to action on the sports field.
  • the motion, delivery, speed, and movement of a pitch can be recorded from various angles, enabling an immersed batter to practice against a particular opponent pitcher.

Abstract

In an embodiment, a system includes an immersive camera module including a camera mounting block having a plurality of camera mounting sites and a plurality of cameras mounted to the plurality of camera mounting sites. Each of the plurality of cameras includes a partially-overlapping field of view, and the camera module is configured to comprehensively capture a target space. The system further includes a chassis operatively coupled with the immersive camera module, the chassis configured to smoothly maneuver the camera module comprehensively through the target space. Aspects herein can also relate to methods for capturing immersions, systems and methods for providing immersions, and systems and methods for viewing and controlling immersions.

Description

    CROSS-REFERENCE TO RELATED APPLICATIONS
  • This patent application is a continuation of U.S. patent application Ser. No. 15/613,704, filed Jun. 5, 2017, which claims priority to and the benefit of U.S. Provisional Patent Application Ser. No. 62/346,234 filed Jun. 6, 2016, both of which are incorporated herein by reference in their entirety.
  • TECHNICAL FIELD
  • The subject innovation generally relates to capturing and providing immersive media experiences. The subject innovation more specifically concerns allowing users to view remote locations in a non-linear and self-driven manner.
  • BACKGROUND
  • Video and other media are used to allow entities to view or otherwise experience remote environments. However, this media has generally been limiting in a variety of ways. Moving video images are generally constrained to a linear path as recorded and do not permit substantial user interaction to drive the content. Still frame photographs can be used to provide additional control (e.g., with directional controls to move to an adjacent location) but are also limited to the views taken by the photographer.
  • SUMMARY
  • In an embodiment, a system includes an immersive camera module including a camera mounting block having a plurality of camera mounting sites and a plurality of cameras mounted to the plurality of camera mounting sites. Each of the plurality of cameras includes a partially-overlapping field of view, and the camera module is configured to comprehensively capture a target space. The system further includes a chassis operatively coupled with the immersive camera module, the chassis configured to smoothly maneuver the camera module comprehensively through the target space.
  • In an embodiment, a system includes an immersive video generation module configured to seamlessly combine a comprehensive capture of a target space into a travelable comprehensive immersion. The immersive video generation module is configured to receive at least one image from each of a plurality of cameras at a first location, continuously stitch the at least one image from each of the plurality of cameras at the first location to produce a first location immersion, receive at least one image from the plurality of cameras at a second location, continuously stitch the at least one image from each of the plurality of cameras at the second location to produce a second location immersion, and continuously stitch the first location immersion and the second location immersion to create a travelable comprehensive immersion.
  • In an embodiment, a method includes providing an immersive camera module including a camera mounting block having a plurality of camera mounting sites and a plurality of cameras mounted to the plurality of camera mounting sites. Each of the plurality of cameras includes a partially-overlapping field of view and the camera module is configured to comprehensively capture a target space. The method also includes providing a chassis operatively coupled with the camera module, the chassis configured to smoothly maneuver the camera module comprehensively through the target space, and recording at least one image from each of the plurality of cameras to record a comprehensive capture of the target space. The method also includes, simultaneously while recording, smoothly maneuvering the camera module through the target space.
  • In an embodiment, a method includes receiving at least one image from each of a plurality of cameras at a first location, continuously stitching the at least one image from each of the plurality of cameras at the first location to produce a first location immersion, receiving at least one image from the plurality of cameras at a second location, continuously stitching the at least one image from each of the plurality of cameras at the second location to produce a second location immersion, and continuously stitching the first location immersion and the second location immersion to create a travelable comprehensive immersion.
  • In an embodiment, a system includes an immersion engine configured to access a travelable comprehensive immersion. The immersion engine controls maneuver and view through the travelable comprehensive immersion based on user input. The system also includes a display configured to display the travelable comprehensive immersion as provided by the immersion engine and a control configured to provide the user input to the immersion engine.
  • In an embodiment, a method includes receiving a travelable comprehensive immersion, displaying an initial viewer state of the travelable comprehensive immersion, receiving user input related to the travelable comprehensive immersion, and displaying a subsequent viewer state of the travelable comprehensive immersion based on the user input. The subsequent viewer state differs from the initial viewer state in at least one of viewer position or viewer orientation.
  • These and other embodiments will be described in greater detail below.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • The invention may take physical form in certain parts and arrangements of parts, an embodiment of which will be described in detail in the specification and illustrated in the accompanying drawings which form a part hereof, and wherein:
  • FIGS. 1A and 1B illustrate example techniques for viewing an environment;
  • FIG. 2 illustrates an embodiment of a camera module for capturing an environment;
  • FIGS. 3A and 3B illustrate embodiments of camera modules coupled to chasses and vehicles for maneuvering the camera modules;
  • FIGS. 4A, 4B, 4C, and 4D illustrate embodiments of camera modules coupled to chasses and physical interfaces for human maneuver;
  • FIG. 5 illustrates an alternative embodiment of a chassis coupled to a camera module;
  • FIGS. 6A and 6B illustrate embodiments of techniques for controlling a camera module and/or elements operatively coupled therewith;
  • FIG. 7 illustrates embodiments of techniques for controlling a camera module and/or elements operatively coupled therewith;
  • FIGS. 8A and 8B illustrate modules used for capturing an environment;
  • FIG. 9 illustrates aspects of techniques for capturing an environment;
  • FIG. 10 illustrates aspects of techniques for capturing an environment;
  • FIGS. 11A to 11C illustrate aspects of techniques for capturing an environment;
  • FIGS. 12A to 12C illustrate aspects of techniques for capturing an environment;
  • FIG. 13 illustrates aspects of techniques for capturing an environment;
  • FIG. 14 illustrates aspects of alternative techniques for capturing an environment;
  • FIG. 15 illustrates an example embodiment of viewing an environment;
  • FIG. 16 illustrates an alternative or complementary example embodiment of viewing an environment;
  • FIG. 17 illustrates an alternative or complementary example embodiment of viewing an environment;
  • FIG. 18 illustrates an alternative or complementary example embodiment of viewing an environment;
  • FIG. 19 illustrates an example environment for supplemental content;
  • FIG. 20 illustrates an example environment including supplemental content;
  • FIG. 21 illustrates example supplemental content;
  • FIG. 22 illustrates an example embodiment synchronizing devices for use with aspects herein;
  • FIG. 23 illustrates an example embodiment of a system for viewing media;
  • FIGS. 24A to 24D illustrate example embodiments of a camera module and system using the camera module;
  • FIGS. 25A and 25B illustrate example embodiments of a camera module utilizing mobile devices;
  • FIG. 26 illustrates an example embodiment of a system using a camera module;
  • FIG. 27 illustrates an example embodiment of use of a system using a camera module;
  • FIGS. 28A and 28B illustrate example aspects related to field of vision stop and go;
  • FIG. 29 shows an example computing device; and
  • FIG. 30 shows an example computing environment.
  • DETAILED DESCRIPTION OF ILLUSTRATIVE EMBODIMENTS
  • Aspects herein generally relate to systems and methods for comprehensively capturing a target space or environment, as well as displaying or providing comprehensive captures of target spaces or environments. These travelable comprehensive immersions provide an experience unique to each user because they can be explored continuously in three dimensions using control input. They have no start, end, timeline, or path, and are based on actual recorded media of the target space rather than a digital model. Direction, movement, speed, elevation, location, viewing angle, and so forth are all placed in the user's hands, with no duration or predetermined time element.
  • As used herein, a target space can be any space or environment, including both indoor and outdoor public or private spaces. A target space is comprehensively captured after a camera module maneuvers through the target space while recording. Maneuvering through the target space can include movement in all three dimensions, and in various embodiments may include traveling a linear path through the space, traveling multiple paths through the space, traveling a gridded path or series of gridded paths through the space, traveling a curved path or series of curved paths through the space, traveling diagonals of the space, following a human-walked path through the space, et cetera. Maneuvering through the target space can include traveling along or near walls or boundaries of the target space, and in some embodiments may then involve logically segmenting the interior into sections, grids, curves, et cetera, based either on the dimensions of the target space or on predefined intervals. In embodiments, maneuver can include a third, vertical dimension in addition to the area (e.g., floor or ground) covered: the camera module can be held at a fixed two-dimensional location while multiple vertical views are collected, or the comprehensive maneuver can follow the same or different two-dimensional paths at different heights. The camera module records photographs or video of the space either continuously or according to a capture rate/interval, providing combinable immersive views continuously or at discrete points for the entire maneuver. Comprehensively capturing a target space can also include maneuvering to or around focal points to provide still further views or other enhanced images of items of interest within the space.
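  • By way of non-limiting illustration, the following Python sketch shows one way a gridded capture path, with a chosen capture interval and one or more capture heights, could be planned for a rectangular target space. The function name, parameters, and dimensions are hypothetical examples and are not taken from this disclosure.

```python
# Illustrative sketch only: serpentine (boustrophedon) capture waypoints for a
# rectangular target space, spaced by a capture interval at one or more heights.
# All names and values here are hypothetical examples.

def plan_grid_capture(width, depth, interval, heights=(1.5,)):
    """Return (x, y, z) waypoints covering a width-by-depth floor area."""
    waypoints = []
    for z in heights:                          # optional vertical dimension
        row = 0
        y = 0.0
        while y <= depth:
            xs = [i * interval for i in range(int(width / interval) + 1)]
            if row % 2:                        # reverse alternate rows for smooth maneuver
                xs = xs[::-1]
            waypoints.extend((x, y, z) for x in xs)
            y += interval
            row += 1
    return waypoints

# Example: a 6 m x 4 m room captured every 0.5 m at two heights.
path = plan_grid_capture(6.0, 4.0, 0.5, heights=(1.2, 1.8))
print(len(path), path[:3])
```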
  • As used herein, “smoothly maneuver” means to maneuver in a fashion not substantially subject to bumps, shaking, or other disruption modifying the intended path and orientation of the camera module there through. When camera modules are smoothly maneuvered, image quality is improved both in individual views and during stitching of different individual views into adjacent views.
  • When a target space is comprehensively captured through smooth maneuver, all images can be combined to produce a travelable comprehensive immersion. The travelable comprehensive immersion can be a file or group of files containing images and/or video of the target space combined in a manner that allows viewing of, movement through, and exploration of the target space in a non-linear and non-programmed manner. Because the space is “rebuilt” virtually—the camera module captures surrounding views in a variety of locations—the location and orientation of a viewer using the travelable comprehensive immersion can be modified in a substantially continuous manner, allowing movement to anywhere in the space and different viewing angles at any such point. In embodiments, these capabilities can be subject to a capture rate or interval, where discrete locations (e.g., 1 inch, 6 inches, 1 foot, 3 feet, 6 feet, and any other distance) are captured with interval gaps there between.
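  • As a non-limiting illustration of viewing subject to a capture interval, the sketch below resolves a requested viewer position to the nearest discretely captured location. The data layout and names are assumptions for illustration only.

```python
# Illustrative sketch only: map a continuous viewer position to the nearest
# discretely captured immersion location. Names and values are assumptions.
import math

def nearest_capture(viewer_pos, capture_locations):
    """Return the captured location closest to the requested viewer position."""
    return min(capture_locations, key=lambda loc: math.dist(viewer_pos, loc))

# Captures on a 0.5 m grid at a 1.5 m camera height.
captures = [(i * 0.5, j * 0.5, 1.5) for i in range(13) for j in range(9)]
print(nearest_capture((2.3, 1.7, 1.5), captures))   # -> (2.5, 1.5, 1.5)
```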
  • In the specification and claims, reference will be made to a number of terms that have the following meanings. The singular forms “a”, “an” and “the” include plural referents unless the context clearly dictates otherwise. Approximating language, as used herein throughout the specification and claims, may be applied to modify a quantitative representation that could permissibly vary without resulting in a change in the basic function to which it is related. Accordingly, a value modified by a term such as “about” is not to be limited to the precise value specified. In some instances, the approximating language may correspond to the precision of an instrument for measuring the value. Moreover, unless specifically stated otherwise, a use of the terms “first,” “second,” etc., does not denote an order or importance, but rather the terms “first,” “second,” etc., are used to distinguish one element from another.
  • As used herein, the terms “may” and “may be” indicate a possibility of an occurrence within a set of circumstances; a possession of a specified property, characteristic or function; and/or qualify another verb by expressing one or more of an ability, capability, or possibility associated with the qualified verb. Accordingly, usage of “may” and “may be” indicates that a modified term is apparently appropriate, capable, or suitable for an indicated capacity, function, or usage, while taking into account that in some circumstances the modified term may sometimes not be appropriate, capable, or suitable. For example, in some circumstances an event or capacity can be expected, while in other circumstances the event or capacity cannot occur—this distinction is captured by the terms “may” and “may be.”
  • Turning to the figures, FIGS. 1A and 1B illustrate example techniques for viewing an environment. FIG. 1A shows a person in the environment. The person may be a guide for the environment, such as a realtor or customer service representative, or a person interested in but unfamiliar with the environment, such as a prospective buyer or tourist visiting for the first time. This provides the greatest flexibility and realism in viewing an environment inasmuch as the person can choose her location and viewing angle, but she must be physically present. Depending on the location and character of the environment, and capabilities and resources of the person, physical presence may not always be possible.
  • FIG. 1B shows a computer interface for, e.g., a virtual tour of the environment. The interface can include a main photograph, controls, and thumbnails of other photos. Based on the controls or selection of a thumbnail, the main photograph changes to provide a larger view of particular views in the environment. However, the environment can only be viewed in the very limited number of views available, thereby leaving large gaps and a stuttered, unrealistic viewing experience.
  • Limitations of the viewing techniques of FIGS. 1A and 1B can be reduced using comprehensive captures of environments. Comprehensive captures can be created using systems and methods disclosed herein. FIG. 2 illustrates an embodiment of an immersive camera module for capturing an environment. The immersive camera module collects a spherical view using a plurality of cameras, providing a continuous view with rotational degrees of freedom similar to or exceeding those possessed by a person standing at the location in question. The camera module can include a mounting block having a plurality of camera mounting sites and the plurality of cameras mounted thereon. In embodiments, the cameras may be coupled without use of a camera mounting block (e.g., integral hardware facilitates their connection). The plurality of cameras are arranged such that each camera has a partially overlapping field of view with one or more adjacent cameras. This arrangement yields images sharing overlapping portions that can be merged by matching corresponding regions of different images to provide a comprehensive capture of the target space. In this fashion, the camera module is configured to comprehensively capture the target space.
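  • By way of example only, the following sketch merges two partially-overlapping camera images by matching features in their shared region, one common way such adjacent views could be stitched. It uses the OpenCV library; the file names and parameter values are placeholders rather than values prescribed by this disclosure.

```python
# Illustrative sketch only: stitch two partially-overlapping views by feature
# matching and a homography. File names and parameters are placeholders.
import cv2
import numpy as np

left = cv2.imread("camera_a.jpg")
right = cv2.imread("camera_b.jpg")

orb = cv2.ORB_create(2000)
kp1, des1 = orb.detectAndCompute(left, None)
kp2, des2 = orb.detectAndCompute(right, None)

matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)
matches = sorted(matches, key=lambda m: m.distance)[:200]

src = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

# Warp the right image into the left image's frame and paste the left image over it.
canvas = cv2.warpPerspective(right, H, (left.shape[1] + right.shape[1], left.shape[0]))
canvas[0:left.shape[0], 0:left.shape[1]] = left
cv2.imwrite("stitched_pair.jpg", canvas)
```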
  • In the illustrated embodiment, the camera module includes six cameras, with five mounted to provide a 360-degree panoramic view around the camera module and one mounted atop to allow upward viewing. In embodiments, the cameras may be mounted at angles to modify the field of view. For example, the panoramic series of cameras can include a slight downward tilt to reduce field of view overlap with the sixth camera directed upward, thereby maximizing the amount of unique image data in each immersive image constructed from individual camera images. The camera module(s) illustrated herein are provided for purposes of example only, and do not limit other possible camera module arrangements. In embodiments, other numbers of cameras can be utilized, and camera angles other than those pictured (e.g., downward, between top and side cameras, et cetera) can be employed without departing from the scope or spirit of the innovation.
  • The cameras can provide collected images to temporary or persistent storage, or directly to an immersive video generation module for production of an immersive video of the target space. The cameras can utilize any wired or wireless means of communication and/or powering.
  • As partially shown in FIG. 2, the camera module can be operatively coupled to a chassis. The chassis is configured to smoothly maneuver the camera module comprehensively through the target space. This chassis is also visible in later figures.
  • FIGS. 3A and 3B illustrate embodiments of camera modules coupled to chasses and immersive capture vehicles for maneuvering the camera modules. Specifically, chasses can be coupled to immersive capture vehicles which smoothly maneuver the chassis and immersive camera module comprehensively through the target space. As shown, immersive capture vehicles may have two or four wheels, or any other number. In an alternative embodiment, the immersive capture vehicle may move about on one or more spherical wheels, or one or more continuous tracks (e.g., “tank tread”). The propulsion mechanisms employed with the immersive capture vehicles can influence their speed, maneuverability (e.g., turning radius), capability for negotiating obstacles (e.g., a threshold, raised carpet, a staircase, and others) or terrain (e.g., wet surfaces, mud, snow, gravel, and others).
  • Control of immersive capture vehicles can be manual, automatic, or combinations thereof. Accordingly, the immersive capture vehicle includes at least a vehicle logic module capable of managing maneuver of the immersive capture vehicle (e.g., direction and speed) by controlling its propulsion mechanisms. The vehicle logic module can be operatively coupled with or include a communication module (e.g., to send and receive information), storage and/or a general or application-specific processor (e.g., storing data for use in controlling movement, calculating paths of movement, modifying vehicle operation, and so forth), sensor modules (e.g., for collecting data about vehicle operation or about the environment), and others.
  • In embodiments where control is automated, the logic module can receive information about a target space before beginning or discover information about the target space (e.g., using the sensor module) before or during comprehensive capture of the target space. Techniques by which the logic module can automatically capture spaces or capture spaces based on user input are discussed further below. In embodiments, a logic module can include a location module, which can utilize one or more location techniques such as a global positioning system, a triangulation technique, or other techniques providing an absolute location, or techniques for discovering a relative location at a distance (e.g., radar, sonar, laser, infrared). Logic can be provided to prevent collisions in the target space while immersive media is being collected.
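  • The sketch below illustrates, purely by way of example, how a vehicle logic module might gate propulsion commands on a forward range reading to avoid collisions while keeping maneuver smooth. The class name, thresholds, and units are hypothetical assumptions, not elements of this disclosure.

```python
# Illustrative sketch only: gate propulsion commands on range-sensor readings to
# avoid collisions during capture. Names, thresholds, and units are assumptions.

class VehicleLogicModule:
    def __init__(self, min_clearance_m=0.5, cruise_speed=0.3):
        self.min_clearance_m = min_clearance_m   # stop if an obstacle is closer than this
        self.cruise_speed = cruise_speed         # slow cruise speed aids smooth capture

    def next_command(self, forward_range_m, heading_error_rad):
        """Return (speed, steering) from a forward range reading and heading error."""
        if forward_range_m < self.min_clearance_m:
            return 0.0, 0.0                      # halt until the path clears
        steering = max(-0.5, min(0.5, heading_error_rad))  # clamp the correction
        return self.cruise_speed, steering

logic = VehicleLogicModule()
print(logic.next_command(forward_range_m=2.4, heading_error_rad=0.1))  # (0.3, 0.1)
```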
  • In an embodiment, an immersive capture vehicle can be a robot. In an embodiment, an immersive capture vehicle can be a self-balancing automated device.
  • FIGS. 4A, 4B, 4C, and 4D illustrate embodiments of camera modules coupled to chasses and physical interfaces for human maneuver. Particularly, physical interfaces such as a helmet (FIG. 4A), a harness (FIG. 4B), or a grip (FIG. 4D) can be provided. Alternatively, the chassis itself can be gripped by a person (FIG. 4C). In embodiments, other components of the system can be integrated into the physical interface and/or chassis. For example, a computer readable storage media and/or hardware and/or software of an immersive video generation module can be maintained in, e.g., the grip of FIG. 4D.
  • Physical interfaces can include various aspects to improve ergonomics. For example, the physical interface and/or chassis can be pivotable, extendable or retractable, or otherwise adjustable to provide for ergonomic carriage facilitating smooth maneuver of the chassis and camera module. Where a person walks the system, smooth maneuver may or may not include substantially level or stable maneuver of the camera module, but may instead mimic human motion for a walking experience when viewed. Alternatively, a person can stabilize the human interface but be conveyed on another vehicle (e.g., a rolling chair as in FIG. 4C) to reduce the impact of motion.
  • FIG. 5 illustrates an alternative embodiment of a chassis coupled to a camera module. Chasses herein can include an adjustment module to change the location or orientation of the camera module with respect to, e.g., a point on the chassis. This can include telescoping members, jointed members for pivoting or tilting, members which can spin, et cetera. As illustrated in FIG. 5, an adjustment module can include a pivot having a plummet thereunder. The adjustment mechanism including the pivot-plummet apparatus is one technique for reducing or eliminating shake or tilt during starting and stopping of system movement or during other conditions such as uneven flooring. Other techniques can include, alternatively or complementarily, springs or suspensions, flexible joints, padding, et cetera.
  • FIGS. 6A and 6B illustrate embodiments of techniques for controlling a camera module and/or elements operatively coupled therewith. FIGS. 6A and 6B illustrate manual and/or semi-automatic techniques for control of a camera module or aspects operatively coupled therewith. FIG. 6A shows a tablet while FIG. 6B shows a video game style controller, both of which can be used for remote control of systems herein. Alternatives to touchscreens and controllers can include a mouse, keyboard, joystick, trackball, pointing stick, stylus, et cetera.
  • FIG. 6B specifically shows the controller used to control spinning (including a rate of spinning) of the camera module on the chassis. However, in other embodiments, controllers can be used to start, steer, and stop immersive capture vehicles, enable or disable camera capture, adjust the camera module using an adjustment module of the chassis, et cetera. Actuators can be provided to various elements of the system and operatively coupled with a communication module to facilitate remote control. Further, in alternative or complementary embodiments, gesture-based feedback can be used for control (e.g., user head movement where elements are controlled using wearable headgear).
  • FIG. 7 illustrates embodiments of techniques for controlling a camera module and/or elements operatively coupled therewith. In the illustrated embodiment, a controller can be used to control one or more camera modules present at a remote event. In this manner, a more realistic analog of attendance at a remote event can be effected. While camera modules herein can be movable, in at least one embodiment, substantially static chasses can be provided at, e.g., seat locations in a sporting event. Simulating attendance (e.g., based on a pay-per-view arrangement, a subscriber service, affiliation with a team, et cetera), users can control camera modules to experience the remote event. This experience can be provisioned in real-time or later based on recorded immersive media capture.
  • FIGS. 8A and 8B illustrate modules used for capturing an environment. The media produced comprehensively capturing a target space can be provided to an immersive video generation module which combines the images to create a travelable comprehensive immersion. As shown in FIG. 8A, the immersive video generation module can be operatively coupled with, e.g., an input/output or communication module to receive media for processing and to provide the generated travelable comprehensive immersion.
  • FIG. 8B shows an alternative arrangement showing in greater detail an example flow of information. The immersive camera module collects immersive media, and in embodiments can be at least partially controlled by a user control. The immersive camera module then provides collected media to one or both of storage media and the immersive video generation module. The immersive video generation module outputs at least one travelable comprehensive immersion, which can be provided to user displays and controls either via storage or directly from the immersive video generation module.
  • As will be appreciated, the arrangements illustrated in FIGS. 8A and 8B are provided for example purposes only, and the modules present as well as their arrangement and information flow can vary without departing from the scope or spirit of the innovation.
  • FIG. 9 illustrates aspects of techniques for capturing an environment. Specifically, a user can use a computer or another device to provide signals or pre-program a system to comprehensively capture a space automatically or semi-automatically. In embodiments, walls can be virtually (e.g., using an interface for programming comprehensive capture) or physically (e.g., using visible or invisible light wavelengths, applying color to walls, applying markers to walls) marked to aid with at least combining of media to produce a travelable comprehensive immersion of the target space. In embodiments, light or markers invisible to the human eye can be used to avoid changes to the environment and/or any need for image processing to remove added elements.
  • FIG. 10 illustrates aspects of techniques for capturing an environment. As shown in FIG. 10, an immersive capture vehicle can transport a camera module and connecting chassis about the exterior of a room, near or against the room's walls. After completing its loop, the room may be adequately imaged in some embodiments, or the interior of the room can be maneuvered (e.g., according to a pattern or pre-planned path) to provide additional full-resolution views from within the target space. In embodiments, the target space can be mapped (or a path created therein) prior to recording and maneuvering, or the target space can be mapped during maneuvering and recording (e.g., interior is discovered by maneuvering about the exterior).
  • FIGS. 11A to 11C illustrate aspects of techniques for capturing an environment. While FIG. 10 and other drawings herein can employ continuous imaging during maneuver, in embodiments pictures can instead be taken at relative or absolute intervals during maneuver. Thus, as can be appreciated in FIGS. 11A to 11C, a target resolution or capture rate can determine how frequently immersive media is captured. In FIGS. 11A to 11C, the camera module can advance by a distance of x between immersive media capture instances. In embodiments, x can be an increment of, e.g., one inch, six inches, one foot, two feet, three feet, six feet, or more, any amount therebetween, or any amount greater or less.
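  • A minimal sketch of the interval concept follows: given an arbitrary maneuver path, capture points are emitted every x units of travel. The function name and example path are illustrative assumptions.

```python
# Illustrative sketch only: resample an arbitrary maneuver path so that immersive
# media is captured every x units of travel. Names and the example path are assumed.
import math

def capture_points(path, x):
    """Yield points spaced x apart along a polyline path of (px, py) vertices."""
    points = [path[0]]
    carry = 0.0                                # distance traveled since last capture
    for (x0, y0), (x1, y1) in zip(path, path[1:]):
        seg = math.hypot(x1 - x0, y1 - y0)
        d = x - carry
        while d <= seg:
            t = d / seg
            points.append((x0 + t * (x1 - x0), y0 + t * (y1 - y0)))
            d += x
        carry = (carry + seg) % x
    return points

print(capture_points([(0, 0), (3, 0), (3, 2)], x=1.0))
```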
  • FIG. 11B in particular also demonstrates how the height of a camera module can be identified. The chassis can be supported at a height of y1 while the camera module is located at a height of y2, dependent upon y1 and the (fixed or variable) length of the chassis.
  • FIGS. 12A to 12C illustrate aspects of techniques for capturing an environment. Specifically, FIG. 12A illustrates the fields of view captured by two cameras in opposing positions. Knowledge of the field of view (e.g., as an angle) of one or more cameras (alone or in a camera module having a plurality of cameras) can be used to determine the amount of a target space captured from a given location. In embodiments, cameras are of a resolution facilitating the use of zoom to comprehensively capture the area, allowing for the use of fixed-location camera modules or obviating the need for the camera module to be maneuvered over every piece of the target space. FIG. 12B illustrates the additional space captured (as well as space overlapped) by locating single cameras or multi-camera modules at additional sites in a target space.
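  • For illustration only, the sketch below relates a camera's horizontal field of view to the width of the target space covered at a given distance, and to the overlap between adjacent cameras in a panoramic ring. The 90-degree field of view and five-camera ring are assumed example values, not values prescribed by this disclosure.

```python
# Illustrative sketch only: field-of-view geometry for coverage and overlap.
# The field-of-view and camera-count values are assumed examples.
import math

def coverage_width(fov_deg, distance_m):
    """Scene width captured at distance_m by a camera with a fov_deg horizontal FOV."""
    return 2.0 * distance_m * math.tan(math.radians(fov_deg) / 2.0)

per_camera_fov = 90.0          # assumed horizontal FOV per camera
ring_spacing = 360.0 / 5       # five panoramic cameras spaced 72 degrees apart
print(coverage_width(per_camera_fov, 3.0))       # ~6.0 m of scene width at 3 m
print(per_camera_fov - ring_spacing)             # 18 degrees of overlap per adjacent pair
```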
  • FIG. 12C illustrates another example in which twelve paths can be travelled by a moving camera module to provide immersive media comprehensively capturing a square target space. Zoom features can be employed based on pre-shot video tracks combined as described herein allowing the user to experience the target space in any location or orientation without a sense of confinement to the pre-shot lines. This example is provided for illustrative purposes only, and it is understood on review of the disclosures herein how this concept can be extended to any target space.
  • FIG. 13 illustrates aspects of techniques for capturing an environment. Specifically, FIG. 13 illustrates a camera module arrangement positioned about an event field (e.g., with opposing goals). The field of view is represented using lines extending from the cameras to show how the field area is covered by opposing camera modules. This can be employed with techniques such as, e.g., those shown in FIGS. 12A and 12B.
  • In particular embodiments such as those of FIG. 13, a user can be enabled to stand in the middle of an event without disruption using combined immersive media from multiple angles. The immersion can include views that appear at eye-level from points where no attendee would be permitted to stand. In embodiments, an immersive video generation module can include an algorithm for combining or stitching opposing or offset camera views to create stitched live-video views without requiring a camera module in that location. In this fashion, users may, for example, view from “within” a sports game, seeing players run around them without any disruption to the players.
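  • The following deliberately simplified sketch approximates a view from a point between two captured immersions by distance-weighted blending of their panoramas; a production stitching algorithm would additionally correct for parallax. The file names, positions, and blending approach are assumptions for illustration, not the algorithm of this disclosure.

```python
# Deliberately simplified sketch: approximate a view between two captured
# immersions by distance-weighted blending of same-size panoramas.
# A real system would also correct for parallax; all names here are assumed.
import cv2
import numpy as np

def synthesize_between(pano_a, pano_b, pos_a, pos_b, viewer_pos):
    da = np.linalg.norm(np.subtract(viewer_pos, pos_a))
    db = np.linalg.norm(np.subtract(viewer_pos, pos_b))
    w = db / (da + db)             # the closer panorama receives the larger weight
    return cv2.addWeighted(pano_a, w, pano_b, 1.0 - w, 0)

a = cv2.imread("immersion_loc1.jpg")   # panoramas assumed to be the same size
b = cv2.imread("immersion_loc2.jpg")
mid = synthesize_between(a, b, (0, 0), (2, 0), viewer_pos=(0.5, 0))
cv2.imwrite("synthesized_view.jpg", mid)
```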
  • FIG. 14 returns to aspects of capturing an environment relating to a remote event. Virtual attendance can be simulated either during the event or in an immersive replay. In an embodiment, multiple camera modules can be combined treating their location as an interval, and using various zoom and image processing can provide views within the space there between. While the camera modules are shown directed towards the event (e.g., basketball court), global media may be collected to allow a remote attendee to look at other aspects (e.g., the crowd).
  • Embodiments such as that of, e.g., FIG. 14 can provide for a premium location for virtual attendance. Further, access to areas not available to physical attendees (e.g., locker rooms, warm-up areas, bench or dugout, and others) can be provided through camera modules located thereabout.
  • FIG. 15 illustrates an example embodiment of viewing an environment. In the embodiment of FIG. 15, a computer can be used to navigate a travelable comprehensive immersion. Rather than discrete movements between views selected by a third party, the entire space is continuously explorable by the user, who can translate or rotate with up to six degrees of freedom throughout the boundaries of the target space (e.g., walls of a structure). An immersion engine such as those described herein can produce a travelable comprehensive immersion, which can then be provided from the engine or from storage to a viewer (e.g., custom application, internet browser, other display). The viewer can control the immersion using controls such as a mouse or pointer, keyboard, and/or touch screen, et cetera.
  • When displaying the immersion, a travelable comprehensive immersion can be received (e.g., from storage and/or an immersive video generation module). An initial viewer state of the travelable comprehensive immersion is displayed (e.g., entryway, initial location programmed into immersion, initial location selected by user). User input can then be received related to the travelable comprehensive immersion. Based on the user input, a subsequent viewer state can be displayed.
  • The subsequent viewer state can differ from the initial viewer state in at least one of viewer position (e.g., location within the target space) or viewer orientation (e.g., viewing direction at a location within the target space). Additional changes provided in subsequent state(s) can include environmental changes not based on user input, such as moving water, motion of the sun, curtains moving due to an open window, et cetera. In this regard, the environment of the target space can be dynamic.
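  • A minimal sketch of this display loop follows: an initial viewer state is shown, user input is applied, and a subsequent state differing in position and/or orientation results. The state fields and input format are illustrative assumptions.

```python
# Illustrative sketch only: a minimal viewer-state update in which user input
# changes position and/or orientation between displayed states. Names and the
# input format are hypothetical assumptions.
from dataclasses import dataclass

@dataclass
class ViewerState:
    x: float = 0.0
    y: float = 0.0
    yaw_deg: float = 0.0     # viewing direction at the current location

def apply_input(state, user_input):
    """Return the subsequent viewer state given user input."""
    dx, dy, dyaw = user_input     # e.g., from mouse, controller, or headset gesture
    return ViewerState(state.x + dx, state.y + dy, (state.yaw_deg + dyaw) % 360)

initial = ViewerState()                      # e.g., entryway of the target space
subsequent = apply_input(initial, (0.5, 0.0, 15.0))
print(subsequent)                            # differs in position and orientation
```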
  • Immersions can be provided using remote storage. In an embodiment, an immersion is provided on a software-as-a-service basis or from a cloud hosting site. Billing can be based on the number of times an immersion is accessed, the time spent in an immersion, and so forth. Recording and/or viewing technology can also be provided as a service, and both viewing applications and immersion content can be provisioned wholly remotely.
  • As suggested by FIG. 15, while collection of target space media is immersive, its display can be immersive (e.g., spherical view) or semi-immersive (e.g., unlimited maneuver on a conventional display). FIG. 16 illustrates an alternative or complementary example embodiment of viewing an environment which is fully immersive, providing media that fully engages the user's audiovisual senses using a virtual reality display worn over both eyes and, optionally, headphones over the user's ears. Where headphones are provided, audio tracks including environmental noise and automatic or selectable audio general to the target space or relating to specific locations in the target space can be provided. Thus, a virtual tour using systems and methods herein can provide music or local noise from the target space, or can include voice tracks or other audible information related to the tour which can be provided at the user's speed and based on the user's interest in the target space.
  • FIG. 17 illustrates an alternative or complementary example embodiment of viewing an environment. A travelable comprehensive immersion can be provided on a single screen or dual screens (one for each eye) in a virtual reality headset. The travelable comprehensive immersion can be controlled using a controller (e.g., shown in FIGS. 16 and 17) or gestures (e.g., head movement while wearing the virtual reality headset). Further, sensors can be provided on user extremities or elsewhere on the body to enable intuitive gesture control.
  • FIG. 18 illustrates an alternative or complementary example embodiment of viewing an environment. As discussed, the user may attend a remote event using techniques herein. As shown, the user is viewing the court from a camera module located above and beyond the courtside modules. However, in embodiments, the user can swap view to and/or control the other cameras visible in the field of view provided.
  • FIG. 19 illustrates an example environment for supplemental content. In the example provided, a target space includes various household furnishings. Users may be interested in these furnishings, either based on their interest in the target space or based on a virtual reality retail experience.
  • FIG. 20 illustrates an example environment including supplemental content. One or more of the supplemental content items providing additional views, price details, et cetera, related to the furnishings in the target space can be shown in the display. These are only a few examples of how the user can access further information regarding items in a target space. Such information can automatically populate based on the user's view, or be provided based on user selection using a controller or gesture (e.g., pressing a button, reaching out or pointing toward the item, and so forth). The information can contain links or information for purchasing, or purchasing can be completed entirely in the travelable comprehensive immersion.
  • FIG. 21 illustrates example supplemental content which can be superimposed over an environment. Supplemental content may be provided separately from the travelable comprehensive immersion, and in embodiments a supplemental content module can augment or superimpose supplemental content on an immersion without leveraging the immersive video generation engine or modifying the underlying immersion file(s).
  • In an alternative embodiment, supplemental content can be provided to a target space where the user is present in the target space and using a transparent or translucent virtual reality headset. In this fashion, a supplemental content module acts in a standalone manner to show virtual items in the space or provide information about virtual or real items in the space visible through the virtual reality headset providing superimposition.
  • FIG. 22 illustrates an example embodiment synchronizing devices for use with aspects herein. A single controller can provide synchronizing signals or provision content simultaneously to a plurality of devices. In this manner, various devices or virtual reality systems (e.g., virtual reality headsets) can enter a travelable comprehensive immersion at the same time. Users can then co-attend a tour while maintaining some element of autonomy (e.g., view different things at tour stops) or the users can diverge immediately. In embodiments, user locations can be stored in memory to permit pausing or resuming of group activity and/or to aid in individual activity after group activity.
  • FIG. 23 illustrates an example embodiment of a system for viewing media. In an embodiment, an immersion engine can be used to provide or render a travelable comprehensive immersion. The immersion engine may be the same as or a separate element from an immersive video generation module, and may communicate with various input/output modules, displays, or other intervening components.
  • FIGS. 24A to 24D illustrate an embodiment of a camera module using eight (or another number of) lenses to create a virtual reality camera module. This camera module reduces flaws related to focal point and parallax that result in blurry or doubled images (e.g., in close-up shots or in small spaces). By using the disclosed camera module, the focal point can be reduced to a minimum. This can be accomplished using small lenses (e.g., one inch or less, one half inch or less, one quarter inch or less) in some embodiments.
  • FIG. 24A illustrates an example embodiment of a camera (e.g., charge coupled device) which can be combined in such a module. The ribbon used to connect the device and its lens is shown extended to a longer distance in FIG. 24B. In embodiments, the ribbon length can be, e.g., 5 feet. The ribbon connector (which can be small in size in particular embodiments) is connected into position in the camera (or, e.g., phone, laptop, or other device) carrying immersive imaging software. By disassembling the lenses from the cameras (or other devices), placing the lenses adjacent to one another in close proximity (e.g., a module carrying lenses less than 3 inches in diameter, less than 2 inches in diameter, et cetera), and offsetting other functions such as memory, batteries, and others to save space between lenses, a virtual-reality-specific module avoiding some issues with focal length and parallax can be provided.
  • FIG. 24C shows how the above lens arrangement can be repeated (e.g., eight times for eight lenses) and placed into a mounting block (in this case, e.g., octahedron block) housing the lenses. The (e.g., eight) separate extended ribbons (or wires) can then be extended down a pole or chassis to interact with the device including storage and processing power. In alternative embodiments, no ribbons are required as compact wireless communication components are provided at each lens. Alternatively, the lenses can share a common wire or ribbon after being electrically coupled at the mounting block.
  • In an embodiment, a group of cables connected to individual cameras, mobile devices, et cetera can connect into a mobile computer or other computing device. The lenses can be arranged in, e.g., an octahedron. This is intended to minimize space between lenses and arranges the respective fields of view to avoid difficulties reconciling parallax. The distance between the lenses and the processing and/or storage equipment can vary from zero to 30 feet or more. For example, with a drone carrying onboard computing elements, the distance between the lens arrangement and computing elements can be zero or minimal. For VR camera rigs, the distance can be 3 to 10 feet. And for remote security cameras, sporting event cameras, concerts, et cetera, the distance can be greater than 10 feet. These are only examples, and various other arrangements using wired or wireless components at a distance can be employed.
  • In embodiments, computing elements disposed at a distance from a lens or lenses may be larger or more power-intensive than those which could be integrated into a mobile element, or close proximity to the camera lenses may be impossible without obstructing the wide view(s). For example, a tiny lens or group of lenses can be provided in an enclosure courtside at a basketball game to capture the entire game without blocking spectator views of the game. The footprint with respect to both other spectators (or viewing apparatuses) and the lens field of view is reduced by tethering the larger components (via wired or wireless means) at an offset. In this fashion, neither the visual data collected nor the image quality/processing need suffer on behalf of the other. Storage, processing, and power can be located distal to the lens or lenses to support high resolution, rapid stitching, and other processing to minimize camera system footprint.
  • FIG. 24D shows the above-disclosed camera module mounted atop a self-balancing immersive capture vehicle. The base of the self-balancing immersive capture vehicle can include one or more devices for each camera unit (or one device for multiple camera units) including memory and logic for recording synchronized and coordinated video producing immersive media. Various wired or wireless automatic, semi-automatic, and/or manual controls can be included between components of the system and/or users of the system. Batteries or other power means can also be provided.
  • In embodiments using small cameras as in FIGS. 24A to 24D, focal points can be controlled to aid in combining different media sources into an immersive media product. By using a narrow, pole-like chassis and a small base holding circuitry and other elements, the footprint of the device itself is quite small, and the device will not (or will only minimally) interrupt clear views of the target space. In embodiments, image processing logic aboard the system or offsite can be used to remove the device itself from portions of the image which it interrupts.
  • FIGS. 25A and 25B illustrate an embodiment where a plurality of phones, tablets, or other mobile devices are tethered to leverage image capture capabilities to produce a deconstructed camera such as that of FIGS. 24A to 24D. FIG. 25A shows a plurality of cell phone devices tethered using a wired configuration, while FIG. 25B shows each of the phones enclosed in a housing. The tethers can run to a camera mount on top of a camera rig.
  • The rig's chasses (through which wired tethers can be threaded) can be mounted atop a self-balancing vehicle as described herein. The completed apparatus allows for rapid, steady, programmable, unmanned image capture, including high definition video, with little or no footprint or artifact left on the captured image data. The system can also include components and logic for post-production, or provide captured image data to other systems for such. The self-balancing vehicle can be provided with gimbal stabilizers and self-guiding software to produce steady, zero-footprint shots (requiring no nadir). Due to the stability and high quality, removal of undesirable video imperfections such as ghosting and blurring is made simpler, less-intensive, and more accurate. Hardware and/or other components for such use can be provided in the vehicle or rig, or be accomplished remote thereto.
  • FIG. 26 shows an application of systems described in earlier drawings, illustrating a self-balancing rig for image capture as described herein.
  • FIG. 27 shows an application of systems described in, e.g., FIGS. 24A to 24D, FIG. 26, and others. In the embodiment of FIG. 27, the chassis is automatically extendable to provide smooth immersive video travelling up a staircase where the vehicle cannot traverse the staircase or where movement up the staircase would be too disruptive to the consistency and smoothness of the immersive video.
  • FIGS. 28A and 28B illustrate example aspects relating to field of vision control. Specifically, FIGS. 28A and 28B relate to examples employing field of vision stop and go (FVSG). A viewer “moves” through an immersion with a field of view during motion. However, FVSG control can be employed to modify motion when the field of view is changed. For example, when a user breaks his or her field of vision during user-guided or automated motion, motion in the immersion can be changed (e.g., stopped, slowed, limited to particular dimensions such as up-and-down movement but no lateral movement) to assist with viewing the particular site in the immersion during more detailed viewing. Thereafter, by returning the view to that used for motion (which can, but need not, be the direction of motion), motion can resume. Alternatively, motion can be resumed, thereby snapping the view back to that used for motion. FVSG can be toggled on and off, and may automatically engage or disengage based on various rules (e.g., entering a room during a tour where a virtual agent is speaking and looking around the room from a stationary view in relation to the virtual agent; FVSG returns the user view to a direction of travel or specific items of interest based on virtual agent activity). The agent can be instructed to avoid talking while walking so that any verbal communication is not met with a pause triggering FVSG activity.
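  • By way of example only, the sketch below shows one way FVSG could gate motion speed on how far the user's view has broken from the direction of travel; the thresholds, names, and speed factors are illustrative assumptions rather than values prescribed herein.

```python
# Illustrative sketch only: gate motion speed on the deviation between the user's
# view direction and the direction of travel (field of vision stop and go).
# Threshold values and function names are assumptions for illustration.

def fvsg_speed(base_speed, view_yaw_deg, travel_yaw_deg,
               slow_threshold_deg=20.0, stop_threshold_deg=60.0):
    """Return the motion speed to use given the view and travel directions."""
    deviation = abs((view_yaw_deg - travel_yaw_deg + 180) % 360 - 180)
    if deviation >= stop_threshold_deg:
        return 0.0                     # view fully broken: pause motion for inspection
    if deviation >= slow_threshold_deg:
        return base_speed * 0.25       # partially broken: slow for detailed viewing
    return base_speed                  # view roughly aligned with travel: resume

print(fvsg_speed(1.0, view_yaw_deg=95.0, travel_yaw_deg=0.0))   # 0.0 (stopped)
print(fvsg_speed(1.0, view_yaw_deg=5.0, travel_yaw_deg=0.0))    # 1.0 (moving)
```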
  • Aspects herein can use high-definition or ultra-high-definition resolution cameras. Further technologies leveraged can include global positioning systems and other techniques. Location techniques can also employ cellular or network-based location, triangulation, radar, sonar, infrared or laser techniques, and image analysis or processing to discern distances and location information from images collected.
  • Aerial or waterborne drones (or similar devices) can be utilized in various embodiments as an immersive capture vehicle. In embodiments, two or more vehicles (which can be any combination of land-based, aerial, or marine) can be used simultaneously in a coordinated fashion to comprehensively capture a target space with greater speed or to capture views from locations and orientations which cannot be provided by a single device. Multiple devices can follow the same track in two dimensions at different heights, or different paths at the same or different heights. Multiple vehicles can be locationally “anchored” to one another for alignment or offset to aid in coordination, and one or both may include independent navigation systems to aid in location control.
  • Combining the various images can prevent blind spots in the views created. A continuous, single, and uncut scene of the target space is provided in both static and moving manners. Fluid travel in any direction of the space, up to its boundaries, can be provided.
  • As noted above, features of interest or “hotpoints” can be emphasized in immersions by providing supplemental content, related audio content, particular views, or other aspects. Such aspects can be a window with a view, a vista or patio, a fireplace, a water feature, et cetera.
  • The environment of immersions can change, such as providing a 24-hour lighting cycle based on sun and/or weather.
  • The immersion permits users to control the interest, pace, and length of a tour or other remote viewing. The viewing can be sped up or slowed down at user desire.
  • Static cameras can be integrated with movable camera modules to provide additional views or reference views which can be used to aid in navigation or to provide specific visual information to users.
  • While aspects herein relating to recording and providing immersions generally concern track-less, free movement by the user, in some embodiments movable cameras or virtual viewing can travel along pre-programmed tracks while still using other aspects of the innovation.
  • In embodiments, an immersion can be edited to show the inclusion or exclusion of items and/or changes to the target space such as removal of a wall or other renovation. In such embodiments, the non-changed portions of the immersion remain recorded media of the actual space, while modelling can be leveraged to integrate changes to the actual space to realistically display the modifications of the target space. Where a target space includes partitions which are removed through editing (e.g., knock out a wall), actual collected media of both sides can be stitched with only the space occupied by the removed wall being a model or virtualization of the space. Augmented reality technology can be leveraged as well.
  • Controls can include user interfaces that allow jumping to different portions of an immersion, speed controls (e.g., fast forward and/or rewind based on movement or viewing orientation), mute button, drone view button (in relevant embodiments or where the drone view is distinguishable from the main immersive view), still capture button, time lapse (to pause environment or other activity and view freeze), view angle controls, location or position controls, view outside target space (e.g., view of building from outside or above), and so forth.
  • Features such as allowing virtual reality goggles to share power with a phone (e.g., either charging the other) can be provided.
  • The number of cameras can vary based on particular camera modules. Cost, field of view, resolution, lens size, and other considerations can be considered to customize a camera module or camera modules for a particular use.
  • Example services provided with aspects herein are solo target space (e.g., apartment, home, or commercial unit) tours, guided tours, three-dimensional and 360-degree floorplans provided by augmented reality technology, websites or other network resources for hosting such (e.g., one per space or multiple spaces at a single hub), applications to aid in contracting, purchasing, payment, et cetera, related to immersions or supplemental content, and so forth.
  • In embodiments, immersive media can be used for training purposes. For example, individual cameras or camera modules located around a sports field can collect combinable media related to action on the sports field. In a specific example, the motion, delivery, speed, and movement of a pitch can be recorded from various angles, enabling an immersed batter to practice against a particular opponent pitcher.
  • This written description uses examples to disclose the invention, including the best mode, and also to enable one of ordinary skill in the art to practice the invention, including making and using devices or systems and performing incorporated methods. The patentable scope of the invention is defined by the claims, and may include other examples that occur to one of ordinary skill in the art. Such other examples are intended to be within the scope of the claims if they have structural elements that do not differentiate from the literal language of the claims, or if they include equivalent structural elements with insubstantial differences from the literal language of the claims.

Claims (20)

What is claimed:
1. A system, comprising:
an immersive camera module configured to capture a target space at discrete locations of the target space spaced apart from one another by distance intervals predetermined by a desired capture rate based on a target resolution; and
an immersive video generation module configured to seamlessly combine the capture of the target space to a travelable comprehensive immersion, wherein seamlessly combining the capture of the target space includes continuously stitching at least one image from the immersive camera module at a first one of the discrete locations of the target space to produce a first location immersion, continuously stitching at least one image from the immersive camera module at a second one of the discrete locations of the target space to produce a second location immersion, and continuously stitching the first location immersion and the second location immersion to create a travelable comprehensive immersion including a synthesized view of the target space from a location at which the immersive camera module is not present.
2. The system of claim 1, wherein the immersive camera module includes a camera mounting block having a plurality of camera mounting sites and a plurality of cameras mounted to the plurality of camera mounting sites.
3. The system of claim 2, wherein the plurality of cameras are mounted to the immersive camera module such that the immersive camera module is configured to capture a 360-degree panoramic view of the target space, and wherein at least one of the plurality of cameras is mounted atop the immersive camera module to capture an upward view of the target space.
4. The system of claim 1, wherein the travelable comprehensive immersion further includes one or more virtual items superimposed into the target space and supplemental content providing information relating to the one or more virtual items superimposed into the target space.
5. The system of claim 4, wherein the supplemental content is selected from the group consisting of an additional view of the one or more items, information for purchasing the one or more items, a link to the one or more items, and a feature of interest with respect to the one or more items.
6. The system of claim 1, further comprising a chassis operatively coupled with the immersive camera module, the chassis configured to smoothly maneuver the immersive camera module through the target space between the discrete locations of the target space.
7. The system of claim 6, further comprising:
an immersive capture vehicle; and
an immersive capture vehicle controller configured to control movement of the immersive capture vehicle,
wherein the chassis is operatively coupled to the immersive capture vehicle, and wherein the immersive capture vehicle is configured to smoothly maneuver the chassis and the immersive camera module through the target space between the discrete locations of the target space.
8. The system of claim 7, further comprising:
a sensor module which collects space geometry and obstacle data related to the target space.
9. The system of claim 8, wherein the immersive capture vehicle is configured to maneuver about obstacles based on the space geometry and the obstacle data.
10. The system of claim 8, further comprising:
a modeling module configured to generate a model of the target space based on the space geometry and the obstacle data; and
a path module configured to generate path instructions for the immersive capture vehicle controller, wherein the path instructions avoid obstacles and facilitate capturing the target space based on the model.
11. The system of claim 6, further comprising a physical interface operatively coupled to the chassis, wherein the physical interface is configured to facilitate smooth maneuver of the chassis and the immersive camera module through the target space.
12. The system of claim 6, further comprising:
an adjustment module of the chassis;
a shock-absorbing module of the chassis configured to stabilize the immersive camera module; and
a pivot-plumb component of the chassis configured to stabilize the immersive camera module.
13. A method, comprising:
providing an immersive camera module configured to capture a target space at discrete locations of the target space spaced apart from one another by distance intervals predetermined by a desired capture rate based on a target resolution;
recording a first image via the immersive camera module at a first one of the discrete locations of the target space;
recording a second image via the immersive camera module at a second one of the discrete locations of the target space offset from the first one of the discrete locations of the target space; and
simultaneously while recording, smoothly maneuvering the immersive camera module through the target space between the discrete locations of the target space; and
continuously stitching the first and the second images to create a travelable comprehensive immersion configured to seamlessly combine the capture of the target space at the discrete locations of the target space, the travelable comprehensive immersion including a synthesized view of a third location of the target space different from each of the first and second ones of the discrete locations of the target space, wherein the immersive camera module is not present at the third location of the target space or configured to record images at the third location of the target space.
14. The method of claim 13, further comprising:
providing an immersive camera module including a camera mounting block having a plurality of camera mounting sites and a plurality of cameras mounted to the plurality of camera mounting sites; and
providing a chassis operatively coupled with the immersive camera module, the chassis configured to smoothly maneuver the immersive camera module through the target space.
15. The method of claim 14, further comprising:
providing a vehicle; and
providing a vehicle controller,
wherein the chassis is mounted to the vehicle, and wherein the vehicle is configured to smoothly maneuver the chassis through the target space between the discrete locations of the target space.
16. The method of claim 13, further comprising generating a path through the target space prior to recording and maneuvering.
17. The method of claim 16, further comprising:
providing a sensor module configured to collect space geometry and obstacle data within the target space; and
generating a model of the target space based on the space geometry and the obstacle data, wherein the path is based on the model.
18. The method of claim 13, further comprising:
outputting the travelable comprehensive immersion including the synthesized view of the third location to a client device; and
navigating the travelable comprehensive immersion on the client device.
19. A system, comprising:
an immersion engine configured to access a travelable comprehensive immersion of a target space, the travelable comprehensive immersion being modified to remove a wall identified in the target space such that unmodified portions of the travelable comprehensive immersion include recorded media of the target space and a modified portion of the travelable comprehensive immersion displays a portion of the wall identified in the target space as being removed therefrom,
wherein the travelable comprehensive immersion is based on continuously stitching at least one image at a first location of the target space proximate a first side of the wall identified in the target space to produce a first location immersion, continuously stitching at least one image from a second location of the target space proximate a second side of the wall identified in the target space opposite the first side to produce a second location immersion, and continuously stitching the first location immersion and the second location immersion to create the modified portion of the travelable comprehensive immersion.
20. The system of claim 19, further comprising:
a display configured to display the travelable comprehensive immersion as provided by the immersion engine, wherein the immersion engine is configured to control maneuver and view through the travelable comprehensive immersion based on user input; and
a control configured to provide the user input to the immersion engine.
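Claims 18-20 describe reviewing the immersion on a client device: an immersion engine holds the viewer's position along the travelable path and the view direction, updates both from user input, and renders the corresponding frame. The state-update sketch below is a simplified assumption of how such an engine might track that state; the class and method names are not taken from the patent.

from dataclasses import dataclass

@dataclass
class ImmersionState:
    path_position: float = 0.0   # fraction 0.0-1.0 along the travelable path
    yaw_degrees: float = 0.0     # view direction within the 360-degree frame

def apply_user_input(state, move, turn):
    # Clamp movement to the ends of the path and wrap the view direction.
    position = min(1.0, max(0.0, state.path_position + move))
    yaw = (state.yaw_degrees + turn) % 360.0
    return ImmersionState(position, yaw)

# Hypothetical usage: move a quarter of the way along the path and turn 90 degrees.
state = apply_user_input(ImmersionState(), move=0.25, turn=90.0)
print(state)  # ImmersionState(path_position=0.25, yaw_degrees=90.0)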
US17/531,040 2016-06-06 2021-11-19 Immersive capture and review Abandoned US20220078345A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US17/531,040 US20220078345A1 (en) 2016-06-06 2021-11-19 Immersive capture and review

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US201662346234P 2016-06-06 2016-06-06
US15/613,704 US11212437B2 (en) 2016-06-06 2017-06-05 Immersive capture and review
US17/531,040 US20220078345A1 (en) 2016-06-06 2021-11-19 Immersive capture and review

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US15/613,704 Continuation US11212437B2 (en) 2016-06-06 2017-06-05 Immersive capture and review

Publications (1)

Publication Number Publication Date
US20220078345A1 true US20220078345A1 (en) 2022-03-10

Family

ID=60483687

Family Applications (2)

Application Number Title Priority Date Filing Date
US15/613,704 Active US11212437B2 (en) 2016-06-06 2017-06-05 Immersive capture and review
US17/531,040 Abandoned US20220078345A1 (en) 2016-06-06 2021-11-19 Immersive capture and review

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US15/613,704 Active US11212437B2 (en) 2016-06-06 2017-06-05 Immersive capture and review

Country Status (1)

Country Link
US (2) US11212437B2 (en)

Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9389677B2 (en) * 2011-10-24 2016-07-12 Kenleigh C. Hobby Smart helmet
US9856856B2 (en) 2014-08-21 2018-01-02 Identiflight International, Llc Imaging array for bird or bat detection and identification
ES2821735T3 (en) 2014-08-21 2021-04-27 Identiflight Int Llc Bird detection system and procedure
US10126634B1 (en) * 2015-03-18 2018-11-13 Davo Scheich Variable radius camera mount
US11212437B2 (en) * 2016-06-06 2021-12-28 Bryan COLIN Immersive capture and review
US10623453B2 (en) * 2017-07-25 2020-04-14 Unity IPR ApS System and method for device synchronization in augmented reality
US10638906B2 (en) * 2017-12-15 2020-05-05 Neato Robotics, Inc. Conversion of cleaning robot camera images to floorplan for user interaction
JP7067503B2 (en) * 2019-01-29 2022-05-16 Toyota Motor Corporation Information processing equipment and information processing methods, programs
US10863085B2 (en) * 2019-02-28 2020-12-08 Harman International Industries, Incorporated Positioning and orienting cameras to extend an angle of view
US11375104B2 (en) * 2019-08-15 2022-06-28 Apple Inc. System for producing a continuous image from separate image sources
US20210237264A1 (en) * 2020-02-03 2021-08-05 Eli Altaras Method and devices for a smart tripod
US10845681B1 (en) 2020-08-17 2020-11-24 Stephen Michael Buice Camera apparatus for hiding a camera operator while capturing 360-degree images or video footage

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110214072A1 (en) * 2008-11-05 2011-09-01 Pierre-Alain Lindemann System and method for creating and broadcasting interactive panoramic walk-through applications
US20150278604A1 (en) * 2014-03-30 2015-10-01 Gary Stephen Shuster Systems, Devices And Methods For Person And Object Tracking And Data Exchange
US20160165215A1 (en) * 2014-12-04 2016-06-09 Futurewei Technologies Inc. System and method for generalized view morphing over a multi-camera mesh
US9479732B1 (en) * 2015-11-10 2016-10-25 Irobot Corporation Immersive video teleconferencing robot
US20170023944A1 (en) * 2011-01-28 2017-01-26 Irobot Corporation Time-dependent navigation of telepresence robots
US11212437B2 (en) * 2016-06-06 2021-12-28 Bryan COLIN Immersive capture and review

Family Cites Families (40)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8316450B2 (en) * 2000-10-10 2012-11-20 Addn Click, Inc. System for inserting/overlaying markers, data packets and objects relative to viewable content and enabling live social networking, N-dimensional virtual environments and/or other value derivable from the content
US6889120B2 (en) * 2002-12-14 2005-05-03 Hewlett-Packard Development Company, L.P. Mutually-immersive mobile telepresence with gaze and eye contact preservation
US7388981B2 (en) * 2003-02-27 2008-06-17 Hewlett-Packard Development Company, L.P. Telepresence system with automatic preservation of user head size
GB0313866D0 (en) 2003-06-14 2003-07-23 Impressive Ideas Ltd Display system for recorded media
US7949616B2 (en) * 2004-06-01 2011-05-24 George Samuel Levy Telepresence by human-assisted remote controlled devices and robots
US7546552B2 (en) 2006-05-16 2009-06-09 Space Needle Llc System and method of attracting, surveying, and marketing to consumers
US8094182B2 (en) * 2006-11-16 2012-01-10 Imove, Inc. Distributed video sensor panoramic imaging system
US8086071B2 (en) * 2007-10-30 2011-12-27 Navteq North America, Llc System and method for revealing occluded objects in an image dataset
US8267601B2 (en) * 2008-11-04 2012-09-18 James Cameron Platform for stereoscopy for hand-held film/video camera stabilizers
TWI405457B (en) * 2008-12-18 2013-08-11 Ind Tech Res Inst Multi-target tracking system, method and smart node using active camera handoff
US20100231687A1 (en) 2009-03-16 2010-09-16 Chase Real Estate Services Corporation System and method for capturing, combining and displaying 360-degree "panoramic" or "spherical" digital pictures, images and/or videos, along with traditional directional digital images and videos of a site, including a site audit, or a location, building complex, room, object or event
US8527113B2 (en) * 2009-08-07 2013-09-03 Irobot Corporation Remote vehicle
CA2776306A1 (en) * 2009-10-07 2011-04-14 Nigel J. Greaves Gimbaled handle stabilizing controller assembly
US9014848B2 (en) * 2010-05-20 2015-04-21 Irobot Corporation Mobile robot system
US8918213B2 (en) * 2010-05-20 2014-12-23 Irobot Corporation Mobile human interface robot
US8694553B2 (en) 2010-06-07 2014-04-08 Gary Stephen Shuster Creation and use of virtual places
US8963954B2 (en) * 2010-06-30 2015-02-24 Nokia Corporation Methods, apparatuses and computer program products for providing a constant level of information in augmented reality
US8705892B2 (en) 2010-10-26 2014-04-22 3Ditize Sl Generating three-dimensional virtual tours from two-dimensional images
US9071709B2 (en) * 2011-03-31 2015-06-30 Nokia Technologies Oy Method and apparatus for providing collaboration between remote and on-site users of indirect augmented reality
US8706473B2 (en) * 2011-09-13 2014-04-22 Cisco Technology, Inc. System and method for insertion and removal of video objects
US9298986B2 (en) * 2011-12-09 2016-03-29 Gameonstream Inc. Systems and methods for video processing
US20130176451A1 (en) 2012-01-11 2013-07-11 Matteo Minetto System for acquiring views for producing high definition virtual strolls
US20140089281A1 (en) * 2012-09-22 2014-03-27 Tourwrist, Inc. Systems and Methods for Selecting and Displaying Supplemental Panoramic Data
US8958911B2 (en) * 2012-02-29 2015-02-17 Irobot Corporation Mobile robot
WO2013176758A1 (en) * 2012-05-22 2013-11-28 Intouch Technologies, Inc. Clinical workflows utilizing autonomous and semi-autonomous telemedicine devices
US8831780B2 (en) * 2012-07-05 2014-09-09 Stanislav Zelivinski System and method for creating virtual presence
US20140037281A1 (en) * 2012-08-03 2014-02-06 Peter L. Carney Camera stabilization apparatus and method of use
US20140316614A1 (en) 2012-12-17 2014-10-23 David L. Newman Drone for collecting images and system for categorizing image data
US8909391B1 (en) 2012-12-28 2014-12-09 Google Inc. Responsive navigation of an unmanned aerial vehicle to a remedial facility
US9402026B2 (en) 2013-01-05 2016-07-26 Circular Logic Systems, Inc. Spherical panoramic image camera rig
US9761045B1 (en) * 2013-03-15 2017-09-12 Bentley Systems, Incorporated Dynamic and selective model clipping for enhanced augmented hypermodel visualization
US20150145887A1 (en) * 2013-11-25 2015-05-28 Qualcomm Incorporated Persistent head-mounted content display
US9536351B1 (en) * 2014-02-03 2017-01-03 Bentley Systems, Incorporated Third person view augmented reality
US10416666B2 (en) * 2014-03-26 2019-09-17 Unanimous A. I., Inc. Methods and systems for collaborative control of a remote vehicle
US9911454B2 (en) 2014-05-29 2018-03-06 Jaunt Inc. Camera array including camera modules
GB2543913B (en) * 2015-10-30 2019-05-08 Walmart Apollo Llc Virtual conference room
US10304247B2 (en) * 2015-12-09 2019-05-28 Microsoft Technology Licensing, Llc Third party holographic portal
US11449061B2 (en) * 2016-02-29 2022-09-20 AI Incorporated Obstacle recognition method for autonomous robots
US10244211B2 (en) * 2016-02-29 2019-03-26 Microsoft Technology Licensing, Llc Immersive interactive telepresence
US10191486B2 (en) * 2016-03-28 2019-01-29 Aveopt, Inc. Unmanned surveyor

Also Published As

Publication number Publication date
US20170353658A1 (en) 2017-12-07
US11212437B2 (en) 2021-12-28

Similar Documents

Publication Publication Date Title
US20220078345A1 (en) Immersive capture and review
JP6770061B2 (en) Methods and devices for playing video content anytime, anywhere
US9626786B1 (en) Virtual-scene control device
CN110944727B (en) System and method for controlling virtual camera
Galvane et al. Directing cinematographic drones
US9729765B2 (en) Mobile virtual cinematography system
CN106683197A (en) VR (virtual reality) and AR (augmented reality) technology fused building exhibition system and VR and AR technology fused building exhibition method
JP7059937B2 (en) Control device for movable image pickup device, control method and program for movable image pickup device
US20130083173A1 (en) Virtual spectator experience with a personal audio/visual apparatus
US9799136B2 (en) System, method and apparatus for rapid film pre-visualization
CN107636534A (en) General sphere catching method
US20170182406A1 (en) Adaptive group interactive motion control system and method for 2d and 3d video
US20190335166A1 (en) Deriving 3d volumetric level of interest data for 3d scenes from viewer consumption data
US11823334B2 (en) Efficient capture and delivery of walkable and interactive virtual reality or 360 degree video
US20160314596A1 (en) Camera view presentation method and system
CN107957772B (en) Processing method for collecting VR image in real scene and method for realizing VR experience
KR20160049986A (en) Method for generating a target trajectory of a camera embarked on a drone and corresponding system
CN108377361B (en) Display control method and device for monitoring video
WO2009093136A2 (en) Image capture and motion picture generation
US20180124374A1 (en) System and Method for Reducing System Requirements for a Virtual Reality 360 Display
Wang et al. Vr exploration assistance through automatic occlusion removal
Chen et al. ARPilot: designing and investigating AR shooting interfaces on mobile devices for drone videography
Hösl Understanding and designing for control in camera operation
DeHart Directing audience attention: cinematic composition in 360 natural history films
JP2020150297A (en) Remote camera system, control system, video output method, virtual camera work system, and program

Legal Events

Date Code Title Description
STPP Information on status: patent application and granting procedure in general

Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION