US20200242280A1 - System and methods of visualizing an environment - Google Patents
System and methods of visualizing an environment
- Publication number
- US20200242280A1 (U.S. Application No. 16/777,809)
- Authority
- US
- United States
- Prior art keywords
- virtual
- virtual representation
- physical object
- data
- information
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
- G06F30/12—Geometric CAD characterised by design entry means specially adapted for CAD, e.g. graphical user interfaces [GUI] specially adapted for CAD
- G06F30/20—Design optimisation, verification or simulation
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
- G06T19/006—Mixed reality
- G06T3/4084—Scaling of whole images or parts thereof, e.g. expanding or contracting in the transform domain, e.g. fast Fourier transform [FFT] domain scaling
- G06T7/80—Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
- H04L65/61—Network streaming of media packets for supporting one-way streaming services, e.g. Internet radio
- H04L65/65—Network streaming protocols, e.g. real-time transport protocol [RTP] or real-time control protocol [RTCP]
- G06F2111/18—Details relating to CAD techniques using virtual or augmented reality
Definitions
- Conventional monitoring systems include one or more cameras and/or other sensors that receive individual data feeds and present the information to a user individually from the segregated feeds of each camera or sensor. For example, a user may capture one or more video images of an object. The data feeds may be recorded for play back at a later time or may be displayed to a user in real time.
- Conventional systems may include user interface dashboards for simultaneously or selectively displaying one or more of the data feeds from the one or more cameras or sensors.
- a user typically has to select which feed they are interested in and manipulate the playback.
- Conventional systems display the playback on conventional two dimensional screens. Information or alarms may be set or based on the individual sensor to draw attention to a critical condition. However, the information is presented in isolation without a comprehensive correlation to the original physical object being monitored or the physical environment.
- the conventional feedback on conventional two dimensional displays does not provide the same information as the three dimensional environment provides. Details can be lost when viewing a three dimensional environment on a two dimensional screen.
- Exemplary embodiments of the monitoring system described herein may provide comprehensive, three-dimensional (3D) visualization.
- Exemplary embodiments described herein include a virtual presence system.
- Exemplary embodiments may include a presence system and method of providing visualization that displays and permits virtual interaction with three-dimensional (3-D) data sets. Exemplary embodiments permit visualization through Virtual Reality (VR) and Augmented Reality (AR) solutions while preserving temporal and spatial registration.
- Although display options are described herein in terms of AR/VR, other display options are also included herein, including, without limitation, flat screen approximations of the three dimensional rendering viewable on a flat screen or in augmented or virtual reality.
- Exemplary embodiments may include devices for receiving data including, for example, imaging systems, temperature sensing, optical imaging in various wavelengths, electrical sensors, mechanical sensors, other data sources, and combinations thereof.
- Exemplary embodiments include a system of receiving data about an object of interest. Exemplary embodiments are configured to superimpose the received data onto a three dimensional virtual object, where the three dimensional virtual object is a representation of the physical object of interest. In an exemplary embodiment, the system is configured to receive information regarding the physical object. The system components may be positioned in a known location and orientation relative to the physical object such that data received from the system components may be overlaid onto the virtual representation of the physical test object. Exemplary embodiments of the system may be used to render, for example, collective video feeds into a realistic virtual reality environment.
- Exemplary embodiments of the system may have any combination of segments, including, for example, a viewer system segment, a sensor node segment, and a network segment.
- the viewer system segment may include the human to machine interface, such as a display system.
- the sensor node segment may include any combination of data collection nodes, system master timing and/or synchronization, and processing and/or storage tasks local to the system.
- the network segment may include physical data traffic infrastructure including, for example, switches, routers, cabling, etc.
- the system may include a process piece in which the system is calibrated and initiated. Calibration may include setting and configuring sensor nodes and/or physical mapping of the sensor nodes to the facility hardware, data feeds, and observed object. Exemplary embodiments of the process piece may align sensor nodes such that the baseline three dimensional (or other visualization from the viewer system segment) representation of the observed object is aligned.
- FIGS. 1A-1B illustrate exemplary environments and objects under observation that may benefit from embodiments described herein.
- FIG. 2 illustrates an exemplary method according to embodiments described herein.
- FIG. 3 illustrates an exemplary high level system diagram of a test cell presence system according to embodiments described herein.
- FIG. 4 illustrates an exemplary component system diagram of a test cell presence system according to embodiments described herein.
- FIGS. 5A-5D illustrate exemplary visualizations from the test cell presence system according to embodiments described herein.
- FIG. 6 illustrates an exemplary test environment to illustrate the methods of visualization according to embodiments described herein.
- FIGS. 7A-7C illustrate exemplary data feeds from the test environment of FIG. 6 .
- FIG. 8 illustrates an exemplary virtual representation of the test environment of FIG. 6 for visualization in three dimensions according to embodiments described herein.
- FIG. 9 illustrates an exemplary system architecture according to embodiments described herein.
- FIGS. 10A-10C illustrate exemplary dimensional mesh virtual representations to illustrate exemplary embodiments of systems and methods for generating a virtual object.
- Exemplary embodiments may include Virtual Reality (VR) and Augmented Reality (AR) solutions that display and interact with three dimensional data sets while preserving temporal and spatial registration.
- Although display options are described herein in terms of virtual reality, other display options are also included herein, including, without limitation, flat screen approximations of the three dimensional rendering viewable in augmented reality.
- Exemplary embodiments may include devices for receiving data including, for example, imaging systems, temperature sensing, optical imaging in various wavelengths, electrical sensors, mechanical sensors, and combinations thereof.
- Exemplary embodiments described herein may be hardware-agnostic and not tied to a specific VR or AR product and/or brand, allowing the customer to leverage appropriate VR/AR technology evolutions as they materialize.
- Exemplary embodiments include a system of receiving data about an object (including an environment or multiple objects) under observation. Exemplary embodiments are configured to superimpose the received data onto a three dimensional virtual object, where the three dimensional virtual object is a representation of the physical object.
- FIGS. 1A-1B illustrate exemplary applications in which embodiments of the system and method for visualization described herein may be used.
- FIG. 1A illustrates a medical procedure in which a medical professional is performing the medical procedure on a patient.
- the procedure may be captured through one or more cameras and replayed or observed in real time.
- Such recording or display may be used for training purposes or for maintaining a record of the procedure or other purpose.
- the captured visual data feeds would be replayed in isolation, such as through display on one or more two dimensional monitors.
- two dimensional segregated presentation of the information may not provide the same experience for a viewer as the opportunity to observe the actual procedure in three dimensions. Information about relative positions may be lost on a viewer merely watching individual feeds from one or more camera feeds.
- FIG. 1B illustrates an exemplary object that may be observed during operation, such as in a facility or industrial application.
- One or more cameras may be used to monitor, record, and/or observe the object during use.
- the system may capture, display, or record the information for observation in real time or at a later time.
- Such applications may permit remote viewing or monitoring, facility monitoring, infrastructure monitoring, quality assurance, fault detection, forensic assessment for recovery or isolation during a fault occurrence, or in damage assessments after a fault occurrence.
- each feed may be observed and/or recorded individually.
- a user thereafter observes the various individual feeds.
- Although one or more feeds may be visually present and visible to a user (such as a user displaying two separate video feeds simultaneously), there is conventionally not a convenient way to integrate the information for a better or more complete understanding of the object under observation.
- FIG. 2 illustrates a flow diagram for methods of visualizing data of an object under observation by overlaying received data onto a virtual object corresponding to the physical object.
- the exemplary method includes receiving information about the physical object, providing a virtual object corresponding to the physical object, receiving information from one or more sources, and overlaying the received information onto the virtual object to provide an integrated view of the test object.
- an exemplary method includes providing a physical environment including an object for observation.
- the physical object may be any physical object, group of objects, or environment for observation.
- Observation is intended to be inclusive of any objective including visual observation as well as specific monitoring, physical testing (such as run time monitoring/testing or environmental monitoring/testing).
- Run time and environmental testing may include operating an object in different environments, including dynamic (changing) environments of temperature, pressure, humidity, vibration, acceleration, movement, etc.
- Observation may also include any observable attribute of an object, not necessarily limited to visual observations.
- observations may be through sensed information, such as temperature, speed, object input (such as power, current, etc.), object output (such as exhaust, power, current, light, heat, etc.), and any combination thereof.
- the exemplary method includes providing a presence system according to embodiments described herein.
- the presence system may include one or more data sources to observe the physical object of step 202 .
- the observations may be through any combination of attributes.
- the test cell presence system comprises one or more cameras.
- the exemplary cameras may be in one or more bandwidths, such as for visual observation in different spectrums, including without limitation, visual, infrared (IR), ultraviolet (UV), or other frequency such as for night vision, heat detection, etc.
- the one or more data sources may be any combination of sensors.
- the sensor may be, for example, IR, vibration, UV, visual, audial, temperature, speed, current, composition, etc.
- the method includes providing a virtual representation of the physical object.
- exemplary embodiments may include creating an accurate three-dimensional representation of the physical setup including the object for observation and/or test hardware components.
- Exemplary embodiments may use modeling or other rendering to create a virtual representation of an exemplary physical environment including the object under observation.
- test-ready computer aided design (CAD) models may be used as a basis for the virtual object. Any method for creating a virtual representation of the physical environment and/or test object is within the scope of the instant application.
- methods for generating a virtual representation from a physical object or environment may include laser scan, photometric scan, or other detector, system, or method of generating a three-dimensional rendering.
- Exemplary methods to create accurate three dimensional renders of the object and/or test cell hardware and/or environment may include any combination of steps, including, without limitation, CAD modeling of the object and/or component parts, object detection and rendering through one or more sensors, image recognition, and combinations thereof.
- the physical environment including the object for observation may be mapped to the virtual object.
- the system may therefore be calibrated and/or initialized such that the physical mapping of the facility hardware, data feeds, and the observed object correspond to and properly align when overlaid onto the virtual representation.
- the system components may be positioned in a known location and orientation relative to the physical object such that data received from the system components may be overlaid onto the virtual representation of the physical object.
- Other calibration systems and methods may also be used.
- manual alignment may be used to align the visual feedback to the overlaid virtual object.
- the manual alignment may be performed in physical positioning of the sensors and camera, in electronic or software manipulation of the alignment of the overlay to the virtual objects and combinations thereof.
- the system may be automated to detect a position of the sensors and determine a corresponding alignment for the sensor feed for overlaying on the virtual representation. For example, image recognition techniques may be used to identify a position on a camera feed to correspond with a position on the virtual representation.
- the system may integrate one or more sensors into a data feed such that the data feed is in a predetermined location relative to a sensor for determining its position relative to the test object or other known environmental position. The data feed may therefore be able to self-locate and have its data overlaid on the virtual object automatically.
- Exemplary embodiments may include combinations of automatic and manual calibrations. For example, the system may be manually calibrated to a set up orientation.
- sensors may be permitted to move, rotate, or otherwise reposition.
- the repositioning of the sensors may be performed through command signals to mechanical/electrical components such that the repositioning is by a known amount.
- the system may thereafter automatically recalibrate based on the known changes to the system configuration.
- the system may also initially automatically calibrate, but may permit manual adjustments to improve or correct automatic determinations.
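- As a non-limiting illustration of this calibration and overlay concept (a sketch, not the patent's procedure), the following Python example shows how a sensor node's intrinsic matrix and known pose could be used to project virtual-model points into that node's image, establishing the correspondence needed to drape the feed onto the virtual representation. The helper name project_points and all numeric values are assumptions for demonstration.

```python
# Illustrative sketch (not from the patent): project virtual-model points into a
# calibrated camera image so the camera feed can be aligned with the virtual object.
import numpy as np

def project_points(model_pts, K, R, t):
    """Project Nx3 world-space points to pixel coordinates with a pinhole model."""
    cam = (R @ model_pts.T).T + t        # world -> camera coordinates
    uv = (K @ cam.T).T                   # camera -> homogeneous pixel coordinates
    return uv[:, :2] / uv[:, 2:3]        # perspective divide

# Assumed calibration values for one sensor node.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
R = np.eye(3)                            # camera axes aligned with world axes
t = np.array([0.0, 0.0, 2.0])            # object roughly 2 m in front of the camera

# Corners of a 0.5 m cube about the origin, standing in for CAD-model vertices.
cube = np.array([[x, y, z] for x in (-0.25, 0.25)
                           for y in (-0.25, 0.25)
                           for z in (-0.25, 0.25)])

print(project_points(cube, K, R, t))     # pixel positions of each model vertex
```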
- the method includes receiving information regarding the physical environment, including, for example, the physical object under observation.
- the system may be configured to receive information from any of the described data sources or other source.
- Information may come from data of the one or more data sources, including cameras, sensors, etc.
- the information may come from sensed data, analyzed data, received data, data input, etc.
- the method may include manipulating the received data in some way.
- the system may be configured to aggregate the data sources for representation on the virtual object.
- the system may aggregate the data sources by aligning the data sources.
- the data may be aggregated by synchronizing the feeds in time.
- the data may be aggregated by aligning the data relative to a corresponding relative physical location.
- data may be overlaid, duplicated, filtered, and combinations thereof for portions of data sources that overlap.
- one or more data sources may provide a panoramic view of a test object, but may include overlapping areas between data sources.
- Aggregating the information may include aligning the feeds and filtering overlapping data. Filtering may be by averaging information, removing information, etc.
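- For illustration only (not the patent's implementation), the sketch below shows one way the aggregation step could resample two timestamped feeds onto a common clock and average them where they overlap; the aggregate helper, the sample values, and the time base are assumed.

```python
# Illustrative sketch (not the patent's implementation): resample two timestamped
# feeds onto a shared clock and average them where they overlap.
import numpy as np

def aggregate(feeds, t_common):
    """Interpolate each (timestamps, values) feed onto t_common, then average."""
    resampled = [np.interp(t_common, ts, vals) for ts, vals in feeds]
    return np.mean(resampled, axis=0)

# Two assumed temperature feeds sampled on different clocks (seconds, deg C).
feed_a = (np.array([0.0, 1.0, 2.0, 3.0]), np.array([20.0, 21.0, 22.5, 23.0]))
feed_b = (np.array([0.5, 1.5, 2.5, 3.5]), np.array([20.4, 21.6, 22.8, 23.2]))

t_common = np.arange(0.5, 3.01, 0.5)           # shared time base for aggregation
print(aggregate([feed_a, feed_b], t_common))   # fused readings on the common clock
```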
- Exemplary embodiments may also include the addition of dynamic data sources.
- a user input through the user interface may generate data that can be appended to a data source or data stream or visual representation or recreation.
- a user may look at the virtual representation using the user interface as described herein.
- the user may provide an input to the system, such as through an electronic controller (for example, a button push or movement queue).
- the user input may provide a tag or other input to the system that can be stored with the data for recreation or review in real time or replay.
- the tag may permit a user to enter in additional information, such as notes, or observation queues, or may simply identify points of observation to permit searching, training, record keeping, or other data manipulation at a later time.
- the system may perform other data analysis or synthesis.
- the system may be configured to reduce a fidelity of one or more data sources to reduce the transmission bandwidth required. Fidelity may be reduced based on level of observation.
- the lower fidelity (less data) may correspond to more distant points of view or larger areas of observation, while a higher fidelity (more data) may be provided for more specific areas of observation.
- the system may be configured to identify or receive areas of interest in which higher fidelity is desired and/or areas of lower interest, either of which may indicate the converse to the system.
- the fidelity may also be set based on received information, historical information, rates of change, etc.
- the system may reduce the fidelity when the received information is static or within an expected range. If the received information is changing, or is close to, within, or outside of a range of observation or other predefined or determined range or criteria, the system may be configured to receive or capture at a higher fidelity. Fidelity may be, for example, a sampling rate of a given sensor or a density of information such as a higher resolution. Exemplary embodiments may also perform analysis of one or more data sources or feeds for event detection. The system may be configured to adjust a fidelity of information based on the detection of an adverse or known event or other system criteria.
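- A hedged sketch of one possible fidelity policy consistent with the description above follows: the sampling rate is raised when recent readings change quickly and lowered otherwise. The choose_sample_rate helper, thresholds, and rates are assumptions, not the claimed method.

```python
# Assumed sketch of an adaptive-fidelity policy: raise the sampling rate when
# recent readings change quickly, lower it when the signal is steady.
import numpy as np

def choose_sample_rate(recent_values, dt, low_hz=1.0, high_hz=30.0, threshold=0.5):
    """Return a sampling rate (Hz) from the largest recent rate of change."""
    rate_of_change = np.max(np.abs(np.diff(recent_values))) / dt
    return high_hz if rate_of_change > threshold else low_hz

steady = [20.0, 20.1, 20.0, 20.1]      # nearly constant readings
spiking = [20.0, 20.2, 21.5, 24.0]     # rapidly changing readings
print(choose_sample_rate(steady, dt=1.0))    # 1.0  -> reduced fidelity
print(choose_sample_rate(spiking, dt=1.0))   # 30.0 -> increased fidelity
```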
- the method may include storing the information.
- the system may be configured to store any combination of information.
- the system may store the raw data feeds from the one or more data sources.
- the system may store the aggregated data sources.
- the system may store any analyzed, synthesized, or any combination of manipulated data.
- the system may also store the visualization of step 216 .
- Exemplary embodiments of the system may be used to render, for example, collective video feeds into a realistic virtual reality environment.
- the method may include rendering information onto the virtual representation of the physical object.
- the visualization may be through any digital interface, such as a monitor, screen, virtual reality display, augmented reality display, etc.
- the visualization may be through augmented reality and/or virtual reality or other three dimensional digital display (referred to collectively herein as digital reality).
- the virtual representation of the physical object may be rendered and displayed in three dimensions.
- the information corresponding to the physical environment may be overlaid onto the virtual representation such that the received information is depicted visually directly over, onto, or proximate the virtual object.
- the user may therefore receive an approximation of the physical object during the observation in virtual space as if observing directly in physical space.
- the representation and/or overlay may alter the visual of the representation for the viewer such that it is not the same as a direct observation of a physical object. This may be, for example when a temperature or camera detecting in a non-visual spectrum is used and overlaid such that the virtually rendered object with information overlaid thereon may be represented in color corresponding to temperature, similar to a three dimensional heat map.
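- As a minimal illustration of this three dimensional heat-map idea (the temperature_to_rgb helper and all values are assumed, not taken from the patent), per-vertex temperatures can be mapped to colors and applied to the virtual representation:

```python
# Assumed sketch of a three dimensional "heat map" overlay: blend vertex colors
# from blue (cool) to red (hot) according to per-vertex temperature readings.
import numpy as np

def temperature_to_rgb(temps, t_min, t_max):
    """Return Nx3 RGB values in [0, 1] for the given temperatures."""
    frac = np.clip((np.asarray(temps, dtype=float) - t_min) / (t_max - t_min), 0.0, 1.0)
    blue, red = np.array([0.0, 0.0, 1.0]), np.array([1.0, 0.0, 0.0])
    return (1.0 - frac)[:, None] * blue + frac[:, None] * red

vertex_temps = [22.0, 35.0, 48.0, 61.0]          # deg C at four model vertices
print(temperature_to_rgb(vertex_temps, t_min=20.0, t_max=60.0))
```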
- Exemplary embodiments described herein include systems and methods for providing a virtual presence system in which an object may be observed.
- the observation may include additional information beyond (or in addition to) visual inspection, such as through different frequencies, temperature, or other sensor information, and/or may include remote inspection by a viewer removed from the test location or facility.
- FIG. 3 illustrates an exemplary block representation of a virtual presence system according to embodiments described herein.
- exemplary embodiments of a virtual presence system 300 may include any combination of segments, including, without limitation, a view system segment 304 , a sensor node segment 302 , and a network segment 306 .
- the system may also include a processing segment 302 A.
- Exemplary embodiments are described herein in terms of different segments for example and explanation only. The system does not require any specific integration or segregation of segments. For example, any combination of components may be used as would be apparent to a person of skill in the art.
- the view system segment may include a user interface for displaying the results described herein.
- the view system segment 304 may include any combination of displays, including interaction stations 312 that permit user input and machine output including any digital display (augmented reality, virtual reality, 2-D screen, hologram, etc.).
- the user interface may be through a display or human machine interface.
- An exemplary embodiment of the display includes a virtual reality or augmented reality display/user interface.
- Other user interfaces may include digital displays such as 2-D screens, projectors, holograms, or other visual display system.
- Exemplary embodiments of the system are configured to display a virtual rendering of the object under observation with or without an environment around the object.
- the object may be the environment itself, and does not require a specific component for observation.
- the system is configured to display virtual representations of information about the physical environment including the physical object overlaid onto, positioned adjacent, or otherwise in relation to the virtual rendering of the object.
- the representation of information is a camera feed conformed about the virtual rendering of the object such that the display of the representation of information with the virtual object is a recreated three dimensional view corresponding to the physical object under observation as seen by one or more sensors, including one or more cameras.
- Other information may be overlaid or displayed on the virtual rendering of the object, such as, for example, color coded areas, call outs, text, or other display of information in relation to the virtual rendering of the test object corresponding to the information of the physical object.
- the sensor node segment 302 of the system 300 may include any combination of sensors, controls, processing, or other components 308 for collecting the information for display.
- An exemplary embodiment of the sensor node segment is configured to receive data from the physical environment and/or physical object.
- the nodes may include any sensor, such as a camera, thermal detector, etc.
- the sensor node segment may also include components for system master timing and/or synchronization, one or more processors, and one or more memory for storing tasks and/or data associated with the system, and/or controlling the one or more sensors or other sensor node segment components.
- the sensor node segment or one or more components of the sensor node segment may be positioned within an observation environment boundary.
- the observation environment boundary may segregate the observation environment from a remainder of the environment and/or one or more users.
- the observation environment boundary may be used to contain an environment, such that temperature, pressure, humidity, and other environment factors may be controlled, as well as to contain chemicals, exhaust, heat, or other hazardous or unhealthy conditions away from human observers.
- the system 300 may be calibrated in a processing segment 302 A.
- the processing segment 302 A may include the calibration of sensor nodes from the sensor nodes segment 302 , including illumination and camera performance parameters, and physical mapping of sensor nodes to the physical object and/or facility hardware and data feeds.
- the alignment of sensor nodes baselines the 3D visualization and may complete the system initialization.
- the calibration of the virtual presence system may be manual, automated or a combination thereof.
- the network segment 306 may include one or more components such as network hardware, timing, communication, etc. 314 .
- An exemplary embodiment of the network segment includes methods and components for communication between different components of the system and/or to or from the test environment.
- the network segment may include the physical data traffic infrastructure, including switches, routers, and cabling, that connects the system components.
- FIG. 4 illustrates an exemplary test system architecture according to embodiments described herein for the virtual presence system.
- the virtual presence system 400 includes a location for the physical object 402 .
- the physical object 402 is any physical object for observation and/or testing in the environment.
- the observation environment is defined by the observation environment boundary.
- the observation environment boundary may be a physical boundary or separation or may simply be imposed by the field of view or detection of the one or more sensors.
- the observation environment boundary permits the delineation, separation, and/or control of the observation environment including the physical object under observation.
- the observation environment boundary may be sealed, such as to control pressure environments, may be sealed and/or vented to contain hazardous materials, or may include other structures, components, and features as the environmental needs dictate for performing the desired observation of the physical object.
- Exemplary embodiments include sensors 406 and control systems 408 within the observation environment for receiving and transmitting data about the physical object 402 .
- the sensors may be any combination of data receiving inputs, such as different video-source sensors 404 (e.g. cameras) utilized to provide video coverage across different wavebands. Any combination of sensor types, quantities, qualities, locations, etc. may be used within the scope of the present disclosure.
- Different sensors and cameras are illustrated in FIG. 4 as cameras C 1 -C 4 and sensors S 1 -S 4 .
- a camera is a type of sensor.
- Sensor hardware may be based upon a specific test environment and designed or configured for specific imaging requirements and may vary by installation or test.
- Exemplary control systems 408 may be ruggedized depending on the local test unit environment and hardware used for the command/control unit.
- the system may provide other components based on the test environment, such as, for example shock isolation.
- the control system 408 may manage video traffic and provide accurate timing across sensor nodes.
- the control system 408 may be positioned based upon cable lengths, and environmental considerations.
- the control system 408 may include memory to provide local data storage.
- Exemplary embodiments of the test cell presence system 400 may include a data aggregation hub 410 .
- the data aggregation hub may include one or more processor and one or more memory and other components for managing the synchronization, video routing, command traffic, or other features of the network segment described herein.
- the aggregation hub 410 may receive the data feeds from the sensors within the test environment.
- the data aggregation hub 410 may also be configured to perform any of the data aggregation, analysis, filtering, synthesizing or other modification of the raw data from the sensors from the test environment.
- the data aggregation hub may be proximate the test environment or may be remote therefrom.
- Exemplary embodiments of the system may include a viewer system segment including user visual displays. Any 2-D screen 412 or user display device may also be used. Alternatively, or in addition thereto, any digital (either virtual, augmented, or holographic) reality system 414 may be used.
- the digital reality display may use “inside-out” tracking with all tracking hardware present on a headset. Other tracking and control inputs may also be used. For example, a controller, such as a handheld remote may be used.
- the exemplary tracking and control components may be used to alter the view of the digital reality by changing perspective, zooming, changing display information/inputs, or combinations thereof. Exemplary embodiments may reduce the connections needed between the headset and the rest of the system.
- FIG. 4 illustrates exemplary virtual representations of a physical object as viewed through a digital display 414 A, 414 B, 412 according to embodiments described herein.
- the exemplary representations of the virtual reality displays 414 A, 414 B illustrate the same object with different information overlaid on the virtual object to provide examples of how information can be provided to a user through a three dimensional virtual representation of the physical object.
- FIGS. 5A-5D illustrate exemplary display options in which information is displayed to a user in combination with the virtual representation of the physical object.
- non-imaging data may be provided with the virtual representation of the test object on a display as a pop up.
- Exemplary embodiments of the system and method are configured to receive different sources and types of data.
- the received data may not include visual or imaging data that can be grafted onto the shape or virtual model of the physical object for a direct overlay of the data onto the virtual representation of the physical object.
- this non-conforming information may be displayed in other ways.
- the information may be displayed on a pop up or information display window displaying the data proximate to or with a virtual object indicating the source of the information.
- color coding or symbol corresponding or representing the displayed data may be used as an overlay of the virtual representation of the physical test object.
- As illustrated in FIG. 5A, if a temperature sensor is determined to be out of range, the location of the temperature sensor on the virtual representation of the physical object may change color or a symbol (illustrated as a star in FIG. 5A ) may be used to draw attention to that location of the virtual representation of the physical object.
- Different temperature color codes may be used to correspond to or indicate different things, such as in range, out of range, high, low, or approximate temperature range, etc.
- Also as illustrated, text information or other information from a data source may be provided as an overlay positioned in proximity to the virtual representation of the test object corresponding to the source of data represented in the overlay. As illustrated, the temperature associated with the symbol displayed on the virtual representation is displayed to a user.
- FIGS. 5A-5D illustrate exemplary demonstrations of a rendered sample virtual environment to demonstrate the notional system user experience. Temperature sensors and other non-imaging data sources could display their status via colored or symbolic indicators on the VR model ( FIG. 5A ). Detailed information may appear when the data feed node is “looked at” by the user. Navigation in the virtual environment may control, engage, interact with, and/or view different portions of the system. For example, the system may be configured to detect head movements, which may be used to control the short distance travel and precision positioning of the virtual display environment. Exemplary embodiments may also include hand controllers or other inputs that may be used to trigger long distance movement within the virtual display environment.
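- One plausible way to implement the “looked at” behavior described above, offered only as an assumed sketch rather than the patented method, is to compare the headset's view direction with the direction from the viewer to the data-feed node and trigger the detail pop-up within an angular threshold:

```python
# Assumed sketch of gaze-based selection: a data-feed node counts as "looked at"
# when it lies within a small angle of the headset's view direction.
import numpy as np

def is_looked_at(head_pos, view_dir, node_pos, max_angle_deg=10.0):
    """True if node_pos is within max_angle_deg of the viewer's gaze."""
    to_node = np.asarray(node_pos, float) - np.asarray(head_pos, float)
    to_node /= np.linalg.norm(to_node)
    view = np.asarray(view_dir, float) / np.linalg.norm(view_dir)
    angle = np.degrees(np.arccos(np.clip(np.dot(view, to_node), -1.0, 1.0)))
    return angle <= max_angle_deg

head = [0.0, 1.7, 0.0]                 # viewer position in the virtual scene
gaze = [0.0, 0.0, -1.0]                # looking toward -Z
node = [0.05, 1.72, -2.0]              # a temperature-node marker on the model
print(is_looked_at(head, gaze, node))  # True -> show the detail pop-up
```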
- the system may include a hand held controller that may include a joy stick, buttons, toggle, or other controller(s).
- the user may select to “move to” a given target via the thumb-stick command or other input. Left or right movement of the thumb-stick may be used to adjust the view's rotation, while reverse movements may be activated by pushing backwards on the thumb-stick.
- Exemplary embodiments may suspend or otherwise manipulate the display of the virtual environment, such as through fade in/out, blinking, screen freeze, or other transition to minimize user disorientation during teleporting or rotation from one view or viewing area of the virtual environment to another.
- the system may be configured to receive or define locations for rapid repositioning.
- the hand controller may superimpose representative user interface controls and information display as illustrated herein.
- the interface (as illustrated in FIG. 5D ) may be normally hidden within the virtual display environment and may be called up via an input, such as a menu button or head motion input, and then operated with a controller, input, or gesture recognition.
- Exemplary embodiments described herein include different implementations for the system, ranging from a simple distribution of a small number of cameras that are approximately located in a less-than-detailed three dimensional model to a high-fidelity rendering of the physical object with live video feeds from numerous precisely aligned and calibrated cameras draped seamlessly onto an accurate three dimensional model of the physical object.
- Exemplary embodiments permit the operator(s) to set up fixed “virtual” display feeds that deliver information to a standard two dimensional display. This enables other personnel to view selected imagery feeds without virtual reality.
- the system may be used to render the user's virtual reality point of view to a two dimensional display, and/or permit the user of the two dimensional display to rotate or navigate the virtual object through the two dimensional interface.
- the two-dimensional display may also provide information about the three dimensional user perspective.
- an exemplary virtual representation of the test object may be displayed in which the perspective position of the viewer, viewing through a virtual reality or other three dimensional display system, is indicated to represent the focus, perspective, and view of the three dimensional viewer to the two dimensional viewer.
- the headset/lenses 502 of the three dimensional viewer are represented on the virtual representation of the physical environment to indicate position and direction of the three dimensional viewer.
- Exemplary embodiments may permit multiple users to engage with the system simultaneously through any combination of user interface alternatives described herein.
- one or more users may experience the digital reality while one or more other users may observe through two-dimensional displays or even single dimension data feeds.
- the system may be configured to permit users to control and interact through the user interface either independently, such that each user can manipulate their personal view and receive corresponding data feeds, or collaboratively such that views may be shared or manipulated collectively, or any combination thereof.
- Exemplary embodiments of the system described herein may include storage for retaining raw data feeds, analyzed data feeds, visual feeds, and any combination thereof.
- the system may capture, store, and replay the user interaction with the system during a use session such that the visual experience of a user may be captured and replayed.
- the system may also record raw feeds such as from sensors such that specific information may be replayed as desired.
- the system may also be configured to further analyze, manipulate, or otherwise handle any of the retained data in order to replay any combination of information or generate new information or displays.
- the virtual representation and/or one or more data feeds may be stored such that a user may recreate the three dimensional display on demand. The user may thereafter interact with the system, such as through movement detection or other user input, to manipulate the user display dynamically during the replay session.
- the system may be configured to record one or more data feeds from one or more sensors.
- a medical procedure may be captured and recorded from one or more cameras.
- the system may also include one or more systems or methods for generating a virtual dimensional mesh in which to overlay the camera feeds.
- the system may permit remote observation and/or replay of the procedure through a three-dimensional display system.
- the observer, through the system described herein, may interact with the display such that the medical procedure can be observed from different directions, replayed, paused, zoomed in or out, or otherwise manipulated.
- One or more users may simultaneously or separately replay or interact with the system such that the display experience can be shared or can remain independent and separate.
- the system and methods described herein may, for example, be used for training after a procedure is complete.
- the training session may use recordings of an actual procedure, but permit a trainer to pause the procedure or provide specific perspective useful to the training session without interfering with the procedure itself.
- an industrial process or operation may be observed and/or monitored.
- the remote user may be separate from the environment of the object under observation for any reason, such as health concerns, physical location separation, etc.
- the system may permit the three dimensional observation of the object to retain perspective of the object under observation that may be lost if observed on a conventional two dimensional display.
- the system may also be configured to take in data feeds from other sources.
- the inserted medical device may have a tracker or other location detecting sensor at an end of the device.
- the system may be configured to provide a three dimensional image of the patient undergoing the procedure, and may provide an additional virtual object representation in the three dimensional display representing the location of the medical device as determined by the location detecting sensor and may position the virtual object representation relative to the virtual patient based on the relative position of the location the physical device to the physical patient.
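- A minimal sketch of this idea, under an assumed rigid transform between the tracking system and the virtual scene (all values illustrative), shows how a tracked device position could be converted into the virtual patient's coordinate frame before the marker is drawn:

```python
# Assumed sketch (illustrative transform): convert a tracked device position from
# the tracking system's frame into the virtual patient's frame before drawing a marker.
import numpy as np

def tracker_to_virtual(p_tracker, R, t):
    """Apply a rigid transform (rotation R, translation t) to a 3-vector."""
    return R @ np.asarray(p_tracker, dtype=float) + t

# Assumed calibration between the tracker and the virtual scene.
R = np.array([[0.0, -1.0, 0.0],
              [1.0,  0.0, 0.0],
              [0.0,  0.0, 1.0]])        # 90-degree rotation about the Z axis
t = np.array([0.10, 0.25, 0.00])        # tracker-origin offset, meters

device_tip = [0.02, 0.15, 0.30]         # position reported by the location sensor
print(tracker_to_virtual(device_tip, R, t))   # where to place the virtual marker
```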
- the location of an object is provided as merely exemplary; any additional information may be received and displayed in the system.
- Referring back to the figures, the system may permit a user to observe the object and overlay additional information, such as a visual feed, temperature, pressure, or other data.
- additional information may be represented as text overlaid in a position approximate or correlated to the location on the virtual object.
- the additional information may also be overlaid on the dimensional mesh or virtual representation, such as by a color coding or symbolic scheme.
- FIGS. 6-8 illustrate an exemplary system reconstruction to illustrate the system and concepts described herein.
- a box 602 is chosen as a test object for observation. The box is virtually modeled and a three dimensional mesh model is used to render the video feeds from three camera sources C 1 , C 2 , C 3 onto the virtual representation of the physical object.
- the test environment includes the target object 602 under observation and three cameras C 1 , C 2 , C 3 .
- a bust figurine is used to illustrate an imaging obstruction in one camera feed, C 1 . Therefore, since the system has no knowledge that the figure is an obstruction, the system renders the bust onto the model.
- FIG. 7A illustrates the image received from camera C 1 ; FIG. 7B illustrates the camera feed from camera C 2 ; and FIG. 7C illustrates the feed from camera C 3 .
- the bust figure appears in the image of camera C 1 in front of the target object 602
- FIG. 8 illustrates the virtual representation of the physical object with the information from the camera feeds superimposed onto the virtual model.
- the image of the bust is integrated onto the side of the virtual representation of the cube as the system is unaware that the feed is obstructed and does not correspond to the model representing the physical object.
- the system may include depth sensors as one or more components of the virtual presence system.
- the depth sensors may be used to generate the three dimensional mesh or structure for modeling the virtual representation of the physical environment.
- An exemplary embodiment, therefore, may include a system and method of providing or receiving depth sensor outputs to create a three dimensional mesh for use in the virtual object overlay.
- a three dimensional rendering method may include the use of a depth sensor, either in combination with or separate from the camera or video feed.
- a combined color camera and depth sensor is used. Exemplary embodiments can be used to create a three dimensional mesh from the perspective of the camera.
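- The sketch below illustrates, under assumed intrinsics and synthetic data, how a depth image could be back-projected into a camera-space point cloud as a starting point for such a dimensional mesh; it demonstrates the general technique rather than the patent's specific pipeline.

```python
# Assumed sketch (synthetic data, illustrative intrinsics): back-project a depth
# image into a camera-space point cloud, the raw material for a dimensional mesh.
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Convert an HxW depth image (meters) into an (H*W)x3 point cloud."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.stack([x, y, depth], axis=-1).reshape(-1, 3)

# A tiny synthetic 4x4 depth frame: a flat surface 2 m away with a shallow bump.
depth = np.full((4, 4), 2.0)
depth[1:3, 1:3] = 1.9
points = depth_to_points(depth, fx=500.0, fy=500.0, cx=2.0, cy=2.0)
print(points.shape)    # (16, 3): vertices that could be triangulated into a mesh
```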
- FIGS. 10A-10C illustrate exemplary renderings of a virtual object with image overlay based on a dimensional mesh created from one or more depth sensors as described herein.
- FIG. 10A illustrates the front, FIG. 10B illustrates a side, and FIG. 10C illustrates a top view of an exemplary depth sensor output for use in an exemplary embodiment to create a three dimensional mesh for use in the virtual object overlay.
- the virtual object or dimensional mesh for overlaying information or rendering a video display for three dimensional display may be dynamic such that the dimensional mesh may change and correspond to the physical environment in real time or semi-real time.
- the system may be configured to update the dimensional mesh for a portion of the virtual representation while maintaining other portions static. For example, if the observation is of a surgery or industrial operation in which only a portion of the object of observation is moving or changing, only that portion of the virtual representation needs to be updated. Accordingly, the system may be configured to automatically detect changes and update portions of the virtual representation accordingly and/or may be programmed to update the virtual representation for an identified region of the virtual representation.
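- A simple assumed sketch of such partial updating compares consecutive depth frames and rewrites only the mesh cells whose depth changed beyond a threshold; the threshold and helper functions are illustrative, not prescribed by the disclosure.

```python
# Assumed sketch of partial mesh updating: only mesh cells whose depth changed
# beyond a threshold between frames are rewritten; static regions are left alone.
import numpy as np

def changed_region_mask(prev_depth, new_depth, threshold=0.02):
    """Boolean mask of cells whose depth moved by more than `threshold` meters."""
    return np.abs(new_depth - prev_depth) > threshold

def update_mesh_depth(mesh_depth, new_depth, mask):
    """Copy new depth values only where the change mask is set."""
    updated = mesh_depth.copy()
    updated[mask] = new_depth[mask]
    return updated

prev = np.full((4, 4), 2.00)
new = prev.copy()
new[2, 2] = 1.85                        # only one small region of the scene moved
mask = changed_region_mask(prev, new)
print(int(mask.sum()), "of", mask.size, "mesh cells updated")   # 1 of 16
print(update_mesh_depth(prev, new, mask))
```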
- Exemplary embodiments may be configured to resolve small objects (for example, approximately 0.05 inches or less). Exemplary embodiments of the system and methods described herein may allow the user to “walk around” in digital reality and monitor critical joints, items and connection points.
- the system may deliver multi-spectral sensing capability (visible and infrared wavebands, as examples) with continuous, real-time 3D video feeds.
- the system architecture may support integration of active viewing of other data sources (temperatures, pressures, data feeds, etc.). As a system, exemplary embodiments permit faster visualization and a more comprehensive understanding of the operational environment, helping detect minor issues before they grow into major problems.
- Exemplary embodiments may include system architectures that may consider both the large amounts of real-time data required for rendering the test cell presence into virtual or augmented reality and the practical limitations of today's state-of-the-art computers.
- Exemplary embodiments of a virtual presence system implementation may include sensible camera selection, appropriate network design, intelligent bandwidth management, and practical considerations about the physical environment coverage requirements.
- Large-scale, high-resolution viewing of the physical environment may include a form of video compression, or a method for video feed switching implemented as a “Level of Detail” viewing capability. “Level of Detail” may automatically (or manually) reduce the resolution of the camera field into digital reality or for display depending on the virtual distance between the viewer and camera and actual resolution of the display.
- Level of Detail may adjust the resolution or other fidelity (sampling rate, etc.) displayed in digital reality or other display methods depending on a virtual distance between a virtual viewing perspective and the virtual representation of the test object. For example, if a user through the digital reality interface moves closer to the virtual representation of the test object, the fidelity or resolution of the display may increase, while the fidelity or resolution may be reduced as the viewer digitally moves further away from the virtual representation of the test object. These methods may be incorporated to preserve transmission bandwidth.
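- As an assumed illustration of such a “Level of Detail” policy (the distance bands and resolutions are arbitrary examples, not specified by the patent), a feed resolution could be selected from the virtual viewing distance:

```python
# Assumed "Level of Detail" policy: the requested feed resolution falls as the
# virtual distance between viewer and rendered object grows, conserving bandwidth.
def select_level_of_detail(virtual_distance_m):
    """Return an illustrative (width, height) to request from a camera feed."""
    if virtual_distance_m < 1.0:
        return (1920, 1080)      # close up: full resolution
    if virtual_distance_m < 5.0:
        return (1280, 720)       # mid range: reduced resolution
    return (640, 360)            # far away: lowest resolution, least bandwidth

for distance in (0.5, 3.0, 12.0):
    print(distance, "m ->", select_level_of_detail(distance))
```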
- Exemplary embodiments may use any combination of hundreds of potential cameras. Any combination of cameras, sensors, and data sources may be used in any combination. Therefore, there may be a single camera or any number of multiple cameras, sensors, or other data feeds or sources.
- the cameras vary by waveband, image type, focal plane size, pixel pitch, frame rate, data output format, interface, environmental performance range, and other parameters.
- the system includes hard-mounted sensors with fixed focal length lenses. The fixed location and focal length of the cameras may provide for easier calibration and mesh overlay of the received data on the rendered virtual object.
- the system may also use variable locations and/or focal lengths in which the system may be manually or automatically recalibrated according to embodiments described herein.
- highly stabilized and steerable custom imaging systems may be used that provide accurate and repeatable positioning.
- Calibration of exemplary embodiments described herein may include white balance, and other performance parameters.
- Calibration may include the physical three dimensional mapping of the physical system components, the physical object under observation, and the relative alignment of the sensor nodes to the three dimensional map.
- a calibration process permits sensor node alignment and permits the proper generation of three dimensional imagery from the two dimensional video feeds.
- Calibration may be used to establish various intrinsic and extrinsic parameters of the respective sensor nodes and may record them as part of an initialization process. Intrinsic parameters (lens focal length, camera pixel pitch, etc.) remain fixed throughout the lifetime of the sensor node, while extrinsic parameters such as sensor node position and orientation may vary due to operational needs. The use of fixed system reference points and rigid mounting techniques helps minimize recalibration burdens.
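- For a concrete picture of extrinsic calibration from fixed reference points, the following sketch uses OpenCV's solvePnP to recover a node's pose from known 3D reference points and their observed pixel locations; the reference coordinates, pixel measurements, and intrinsics are illustrative assumptions, and the patent does not prescribe this particular routine.

```python
# Assumed sketch using OpenCV (not the patent's prescribed routine): recover a sensor
# node's extrinsic pose from fixed 3D reference points and their observed pixels.
import numpy as np
import cv2

# Known reference points on the facility hardware (meters, world frame) -- assumed.
object_points = np.array([
    [0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [1.0, 1.0, 0.0],
    [0.0, 1.0, 0.0], [0.0, 0.0, 0.5], [1.0, 1.0, 0.5],
], dtype=np.float64)

# Pixel locations where those points appear in this node's image -- assumed.
image_points = np.array([
    [186.7, 106.7], [453.3, 106.7], [453.3, 373.3],
    [186.7, 373.3], [205.7, 125.7], [434.3, 354.3],
], dtype=np.float64)

K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])        # intrinsic parameters, fixed for the node
dist = np.zeros(5)                      # assume negligible lens distortion

ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, dist)
R, _ = cv2.Rodrigues(rvec)              # rotation matrix of the node's pose
print(ok, tvec.ravel())                 # extrinsics recorded at initialization
```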
- the system may also include dynamic or controllable parameters, such as camera position, orientation, focal length, etc.
- the system may be configured to detect a change in a dynamic intrinsic parameter and recalibrate the system accordingly.
- the recalibration may be automatic, manual, or a combination thereof.
- the system may also include one or more identification sensors to assist in calibration. For example, the system may detect or determine a location, use visual or other data recognition to relate a data stream to the virtual representation to permit calibration and data overlay to the virtual representation.
- Exemplary embodiments of the system including a multi-camera system can benefit from using a master timing device and master clock.
- exemplary embodiments of a system architecture is illustrated in FIG. 9 .
- the system may include any combination of switch controllers 902 coupled to any combination of cameras C 1 -Cn.
- One embodiment may use IEEE 1558 compliant cameras to simplify system level timing synchronization and synchronizing all computers, cameras, and networking equipment in a system.
- Other standards, protocols, components, methods, and combinations thereof may also be used for timing, synchronization, or amalgamating data.
- a master clock can aid in determining and fixing any sources of latency that may occur.
- the system may include any combination of data aggregation hub 912 or other analytics components as described herein.
- the system may include any combination of digital displays 904 for rendering the virtual object in conjunction with information from the one or more data sources.
- the system may be integrated into a conventional or previous system architecture 910 and protected through a fire wall 908 and have access to the system network or internet 906 .
- An exemplary network may include a physical topology that supports future increases in camera count and capability.
- the transfer of video from cameras to optional local data storage nodes may also be used to minimize sharing of links and allows direct calculation of the bandwidth and storage capacity requirements.
- Long-haul links such as those between the test area and display area, may use fiber.
- Remaining links may be copper, unless greater resolution is required.
- Exemplary embodiments of system components of the system network may include managed equipment, in that they have a dedicated management interface and maintain operational statistics. This visibility into network behavior may be used to verify the configurations and expected results against real system operations.
- Exemplary embodiments seek to minimize unmonitored choke points in which excessive flows of video data converge.
- the network structure may be used to increase flexibility in resource allocation and can expand to incorporate additional cameras and storage nodes on an as-needed basis.
- Exemplary embodiments may use various configurations of video compression or various compression techniques such as H.264, H.265, JPG, and JPG2000. These compression methods could reduce the amount of bandwidth required by the network but introduce latency and require some form of processing power and may also reduce overall image fidelity.
- Exemplary embodiments of the system described herein can be based in software and/or hardware. While some specific embodiments of the invention have been shown the invention is not to be limited to these embodiments. For example, most functions performed by electronic hardware components may be duplicated by software emulation. Thus, a software program written to accomplish those same functions may emulate the functionality of the hardware components in input-output circuitry.
- the invention is to be understood as not limited by the specific embodiments described herein, but only by scope of the appended claims.
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Geometry (AREA)
- Multimedia (AREA)
- Computer Hardware Design (AREA)
- General Engineering & Computer Science (AREA)
- Computer Graphics (AREA)
- Software Systems (AREA)
- Evolutionary Computation (AREA)
- Signal Processing (AREA)
- Architecture (AREA)
- Human Computer Interaction (AREA)
- Computational Mathematics (AREA)
- Mathematical Analysis (AREA)
- Mathematical Optimization (AREA)
- Pure & Applied Mathematics (AREA)
- Computer Networks & Wireless Communication (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Processing Or Creating Images (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
Description
- This application claims priority to U.S. Provisional Patent Application No. 62/798,951, filed Jan. 30, 2019, which is incorporated by reference herein in its entirety.
- Conventional monitoring systems include one or more cameras and/or other sensors that receive individual data feeds and present the information to a user individually from the segregated feeds of each camera or sensor. For example, a user may capture one or more video images of an object. The data feeds may be recorded for play back at a later time or may be displayed to a user in real time. Conventional systems may include user interface dashboards for simultaneously or selectively displaying one or more of the data feeds from the one or more cameras or sensors. However, a user typically has to select which feed they are interested in and manipulate the playback. Conventional systems display the playback on conventional two dimensional screens. Information or alarms may be set or based on the individual sensor to draw attention to a critical condition. However, the information is presented in isolation without a comprehensive correlation to the original physical object being monitored or the physical environment.
- To improve visualization of an area, a simple “lots of cameras with a really fast network” solution may be considered. However, such a solution will not suffice to cover an entire desired area while simultaneously delivering uncompressed video feeds with the desired resolution. To achieve the desired resolution, a large number of cameras would be required, which would overload practical system bandwidth limitations.
- In addition, the conventional feedback on conventional two dimensional displays does not provide the same information as the three dimensional environment provides. Details can be lost when viewing a three dimensional environment on a two dimensional screen.
- Exemplary embodiments of the monitoring system described herein may provide comprehensive, three-dimensional (3D) visualization. Exemplary embodiments described herein include a virtual presence system.
- Exemplary embodiments may include a presence system and method of providing visualization that displays and permits virtual interaction with three-dimensional (3-D) data sets. Exemplary embodiments permit visualization through Virtual Reality (VR) and Augmented Reality (AR) solutions while preserving temporal and spatial registration. Although display options are described herein in terms of AR/VR, other display options are also included herein, including, without limitation, flat screen approximations of the three dimensional rendering viewable on a flat screen or in augmented or virtual reality.
- Exemplary embodiments may include devices for receiving data including, for example, imaging systems, temperature sensing, optical imaging in various wavelengths, electrical sensors, mechanical sensors, other data sources, and combinations thereof.
- Exemplary embodiments include a system of receiving data about an object of interest. Exemplary embodiments are configured to superimpose the received data onto a three dimensional virtual object, where the three dimensional virtual object is a representation of the physical object of interest. In an exemplary embodiment, the system is configured to receive information regarding the physical object. The system components may be positioned in a known location and orientation relative to the physical object such that data received from the system components may be overlaid onto the virtual representation of physical test object. Exemplary embodiments of the system may be used to render, for example, collective video feeds into a realistic virtual reality environment.
- Exemplary embodiments of the system may have any combination of segments, including, for example, a viewer system segment, a sensor node segment, and a network segment. The viewer system segment may include the human to machine interface, such as a display system. The sensor node segment may include any combination of data collection nodes, system master timing and/or synchronization, and processing and/or storage tasks local to the system. The network segment may include physical data traffic infrastructure including, for example, switches, routers, cabling, etc. In an exemplary embodiment, the system may include a process piece in which the system is calibrated and initialized. Calibration may include setting and configuring sensor nodes and/or physical mapping of the sensor nodes to the facility hardware, data feeds, and observed object. Exemplary embodiments of the process piece may align sensor nodes such that the baseline three dimensional representation (or other visualization from the viewer system segment) of the observed object is aligned.
-
FIGS. 1A-1B illustrate exemplary environments and objects under observation that may benefit from embodiments described herein. -
FIG. 2 illustrates an exemplary method according to embodiments described herein. -
FIG. 3 illustrates an exemplary high level system diagram of a test cell presence system according to embodiments described herein. -
FIG. 4 illustrates an exemplary component system diagram of a test cell presence system according to embodiments described herein. -
FIGS. 5A-5D illustrate exemplary visualizations from the test cell presence system according to embodiments described herein. -
FIG. 6 illustrates an exemplary test environment to illustrate the methods of visualization according to embodiments described herein. -
FIGS. 7A-7C illustrate exemplary data feeds from the test environment of FIG. 6 . -
FIG. 8 illustrates an exemplary virtual representation of the test environment of FIG. 6 for visualization in three dimensions according to embodiments described herein. -
FIG. 9 illustrates an exemplary system architecture according to embodiments described herein. -
FIGS. 10A-10C illustrate exemplary dimensional mesh virtual representations to illustrate exemplary embodiments of systems and methods for generating a virtual object. - In the following description of preferred embodiments, reference is made to the accompanying drawings which form a part hereof, and in which it is shown by way of illustration specific embodiments in which the invention can be practiced. It is to be understood that other embodiments can be used and structural changes can be made without departing from the scope of the embodiments of this invention.
- Exemplary embodiments may include Virtual Reality (VR) and Augmented Reality (AR) solutions that display and interact with three dimensional data sets while preserving temporal and spatial registration. Although display options are described herein in terms of virtual reality, other display options are also included herein, including, without limitation, flat screen approximations of the three dimensional rendering viewable in augmented reality. Exemplary embodiments may include devices for receiving data including, for example, imaging systems, temperature sensing, optical imaging in various wavelengths, electrical sensor, mechanical sensors, and combinations thereof. Exemplary embodiments described herein may be hardware-agnostic and not tied to a specific VR or AR product and/or brand, allowing the customer to leverage appropriate VR/AR technology evolutions as they materialize.
- Exemplary embodiments include a system of receiving data about an object (including an environment or multiple objects) under observation. Exemplary embodiments are configured to superimpose the received data onto a three dimensional virtual object, where the three dimensional virtual object is a representation of the physical object.
-
FIGS. 1A-1B illustrate exemplary applications in which embodiments of the system and method for visualization described herein may be used. FIG. 1A illustrates a medical procedure in which a medical professional is performing the medical procedure on a patient. The procedure may be captured through one or more cameras and replayed or observed in real time. Such recording or display may be used for training purposes or for maintaining a record of the procedure or other purpose. Conventionally, the captured visual data feeds would be replayed in isolation, such as through display on one or more two dimensional monitors. However, such two dimensional segregated presentation of the information may not provide the same experience for a viewer as the opportunity to observe the actual procedure in three dimensions. Information about relative positions may be lost on a viewer merely watching individual feeds from one or more camera feeds. FIG. 1B illustrates an exemplary object that may be observed during operation, such as in a facility or industrial application. One or more cameras may be used to monitor, record, or observe the object, or combinations thereof, during use. The system may capture, display, or record the information for observation in real time or at a later time. Such applications may permit remote viewing or monitoring, facility monitoring, infrastructure monitoring, quality assurance, fault detection, forensic assessment for recovery or isolation during a fault occurrence, or in damage assessments after a fault occurrence. - Conventionally, each feed may be observed and/or recorded individually. A user thereafter observes the various individual feeds. Although one or more feeds may be visually present and visible to a user (such as a user displaying two separate video feeds simultaneously), there is conventionally not a convenient way to integrate the information for a better or complete understanding of the object under observation.
- Exemplary embodiments described herein provide a system and methods for providing an integrated view of an object under observation including information from one or more sources.
FIG. 2 illustrates a flow diagram for methods of visualizing data of an object under observation by overlaying received data onto a virtual object corresponding to the physical object. The exemplary method includes receiving information about the physical object, providing a virtual object corresponding to the physical object, receiving information from one or more sources, and overlaying the received information onto the virtual object to provide an integrated view of the test object. - As represented at
step 202 of FIG. 2 , an exemplary method according to embodiments described herein includes providing a physical environment including an object for observation. The physical object may be any physical object, group of objects, or environment for observation. Observation is intended to be inclusive of any objective including visual observation as well as specific monitoring, physical testing (such as run time monitoring/testing or environmental monitoring/testing). Run time and environmental testing may include operating an object in different environments, including dynamic (changing) environments of temperature, pressure, humidity, vibration, acceleration, movement, etc. Observation may also include any observable attribute of an object, not necessarily limited to visual observations. For example, observations may be through sensed information, such as temperature, speed, object input (such as power, current, etc.), object output (such as exhaust, power, current, light, heat, etc.), and any combination thereof. - As represented at
step 204, the exemplary method includes providing a presence system according to embodiments described herein. The presence system may include one or more data sources to observe the physical object ofstep 202. As described above, the observations may be through any combination of attributes. In an exemplary embodiment, the test cell presence system comprises one or more cameras. The exemplary cameras may be in one or more bandwidths, such as for visual observation in different spectrums, including without limitation, visual, infrared (IR), ultra violet (UV), or other frequency such as for night vision, heat detection, etc. The one or more data sources may be any combination of sensors. In an exemplary embodiment, the sensor may be, for example, IR, vibration, UV, visual, audial, temperature, speed, current, composition, etc. - At
step 206, the method includes providing a virtual representation of the physical object. In an exemplary embodiment, to create the overlay of the data onto the virtual object, exemplary embodiments may include creating an accurate three-dimensional representation of the physical setup including the object for observation and/or test hardware components. Exemplary embodiments may use modeling or other rendering to create a virtual representation of an exemplary physical environment including the object under observation. In an exemplary embodiment, test ready computer aided design (CAD) models may be used as a basis for the virtual object. Any method for creating a virtual representation of the physical environment and/or test object is within the scope of the instant application. For example, methods for generating a virtual representation from a physical object or environment may include laser scan, photometric scan, or other detector, system, or method of generating a three-dimensional rendering. Exemplary methods to create accurate three dimensional renders of the object and/or test cell hardware and/or environment may include any combination of steps, including, without limitation, CAD modeling of the object and/or component parts, object detection and rendering through one or more sensors, image recognition, and combinations thereof. - At
step 208, the physical environment including the object for observation (test object) may be mapped to the virtual object. The system may therefore be calibrated and/or initialized such that the physical mapping of the facility hardware, data feeds, and the observed object correspond to and properly align when overlaid onto the virtual representation. In this step, the system components may be positioned in a known location and orientation relative to the physical object such that data received from the system components may be overlaid onto the virtual representation of the physical object. Other calibration systems and methods may also be used. For example, manual alignment may be used to align the visual feedback to the overlaid virtual object. The manual alignment may be performed in physical positioning of the sensors and camera, in electronic or software manipulation of the alignment of the overlay to the virtual objects, and combinations thereof. In an exemplary embodiment, the system may be automated to detect a position of the sensors and determine a corresponding alignment for the sensor feed for overlaying on the virtual representation. For example, image recognition techniques may be used to identify a position on a camera feed to correspond with a position on the virtual representation. In an exemplary embodiment, the system may integrate one or more sensors into a data feed such that the data feed is in a predetermined location relative to a sensor for determining its position relative to the test object or other known environmental position. The data feed may therefore be able to self-locate and its data feed overlaid on the virtual object automatically. Exemplary embodiments may include combinations of automatic and manual calibrations. For example, the system may be manually calibrated to a set up orientation. However, during a test procedure or observation, sensors may be permitted to move, rotate, or otherwise reposition. The repositioning of the sensors may be performed through command signals to mechanical/electrical components such that the repositioning is by a known amount. The system may thereafter automatically recalibrate based on the known changes to the system configuration. The system may also initially automatically calibrate, but may permit manual adjustments to improve or correct automatic determinations. - As represented by
step 210, the method includes receiving information regarding the physical environment, including, for example, the physical object under observation. The system may be configured to receive information from any of the described data sources or other source. Information may come from data of the one or more data sources, including cameras, sensors, etc. The information may come from sensed data, analyzed data, received data, data input, etc. - At
step 212, the method may include manipulating the received data in some way. The system may be configured to aggregate the data sources for representation on the virtual object. The system may aggregate the data sources by aligning the data sources. For example, the data may be aggregated by synchronizing the feeds in time. The data may be aggregated by aligning the data relative to a corresponding relative physical location. For example, data may be overlaid, duplicated, filtered, and combinations thereof for portions of data sources that overlap. In an exemplary embodiment, one or more data sources may provide a panoramic view of a test object, but may include overlapping areas between data sources. Aggregating the information may include aligning the feeds, and filtering overlapping data. Filtering may be by averaging information, removing information, etc. - Exemplary embodiments may also include the addition of dynamic data sources. For example, a user input through the user interface may generate data that can be appended to a data source or data stream or visual representation or recreation. For example, a user may look at the virtual representation using the user interface as described herein. The user may provide an input to the system, such as through an electronic controller (for example, a button push or movement queue). The user input may provide a tag or other input to the system that can be stored with the data for recreation or review in real time or replay. The tag may permit a user to enter in additional information, such as notes, or observation queues, or may simply identify points of observation to permit searching, training, record keeping, or other data manipulation at a later time.
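- The tagging behavior described above can be pictured with a small data structure that stores user annotations alongside time-stamped samples. The following is a minimal sketch only, not the implementation of the disclosure; the class and field names (SensorSample, TaggedStream, etc.) are invented for illustration.
```python
# Minimal sketch of attaching user-generated tags to a time-stamped data stream.
# All class and field names here are illustrative, not taken from the patent.
from dataclasses import dataclass, field
from typing import List

@dataclass
class SensorSample:
    timestamp: float          # seconds on the system master clock
    source_id: str            # e.g. "camera_C1" or "temp_S3"
    value: object             # frame, temperature reading, etc.

@dataclass
class UserTag:
    timestamp: float          # when the user pressed the controller button
    note: str                 # free-text note or observation cue
    author: str = "viewer"

@dataclass
class TaggedStream:
    samples: List[SensorSample] = field(default_factory=list)
    tags: List[UserTag] = field(default_factory=list)

    def add_tag(self, timestamp: float, note: str, author: str = "viewer") -> None:
        self.tags.append(UserTag(timestamp, note, author))

    def tags_near(self, t: float, window: float = 1.0) -> List[UserTag]:
        """Return tags within +/- window seconds of time t, e.g. during replay."""
        return [tag for tag in self.tags if abs(tag.timestamp - t) <= window]

# Example: tag an observation during a live session, then look it up on replay.
stream = TaggedStream()
stream.samples.append(SensorSample(12.50, "temp_S3", 71.2))
stream.add_tag(12.52, "temperature spike near inlet joint")
print(stream.tags_near(12.5))
```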
- The system may perform other data analysis or synthesis. For example, the system may be configured to reduce a fidelity of one or more data sources to conserve transmission bandwidth. Fidelity may be reduced based on level of observation. For example, the lower fidelity (less data) may correspond to more distant points of view or larger areas of observation, while a higher fidelity (more data) may be provided for more specific areas of observation. The system may be configured to identify or receive areas of interest in which higher areas of fidelity are desired and/or lower areas of interest, either of which may indicate the converse to the system. The fidelity may also be set based on received information, historical information, rates of change, etc. If the received information is within normal tolerances or a set tolerance, or within a given rate of change relative to a historical value, the system may reduce the fidelity of the received information. If the received information is changing, is close to, within, or outside of a predefined or determined range of observation, or meets other criteria, the system may be configured to receive or capture information at a higher fidelity. Fidelity may be, for example, a sampling rate of a given sensor or density of information such as in higher resolution. Exemplary embodiments may also perform analysis of one or more data sources or feeds for event detection. The system may be configured to adjust a fidelity of information based on the detection of an adverse or known event or other system criteria.
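- One way to realize the fidelity adjustment described above is to derive a sampling rate from how far a reading sits outside its nominal range and how quickly it is changing. The sketch below is illustrative only; the thresholds and rates are assumed example values, not values from the disclosure.
```python
# Illustrative fidelity controller: pick a sampling rate (Hz) for a sensor based on
# how far the latest reading is from its nominal range and how fast it is changing.
def choose_sampling_rate(value: float,
                         previous_value: float,
                         nominal_low: float,
                         nominal_high: float,
                         base_rate_hz: float = 1.0,
                         high_rate_hz: float = 30.0) -> float:
    out_of_range = value < nominal_low or value > nominal_high
    rate_of_change = abs(value - previous_value)
    span = max(nominal_high - nominal_low, 1e-9)

    if out_of_range:
        return high_rate_hz                      # capture at full fidelity
    if rate_of_change > 0.05 * span:
        return 0.5 * high_rate_hz                # changing quickly: raise fidelity
    return base_rate_hz                          # quiet and in range: save bandwidth

# Example: a temperature channel with a nominal band of 60-80 degrees.
print(choose_sampling_rate(72.0, 71.8, 60.0, 80.0))   # 1.0  (steady, in range)
print(choose_sampling_rate(83.0, 79.0, 60.0, 80.0))   # 30.0 (out of range)
```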
- At
step 214, the method may include storing the information. The system may be configured to store any combination of information. For example, the system may store the raw data feeds from the one or more data sources. The system may store the aggregated data sources. The system may store any analyzed, synthesized, or any combination of manipulated data. The system may also store the visualization of step 216 . - Exemplary embodiments of the system may be used to render, for example, collective video feeds into a realistic virtual reality environment. The method, at
step 216, may include rendering information onto the virtual representation of the physical object. The visualization may be through any digital interface, such as a monitor, screen, virtual reality display, augmented reality display, etc. The visualization may be through augmented reality and/or virtual reality or other three dimensional digital display (referred to collectively herein as digital reality). In this instance, the virtual representation of the physical object may be rendered and displayed in three dimensions. The information corresponding to the physical environment may be overlaid onto the virtual representation such that the received information is depicted visually directly over, onto, or proximate the virtual object. The user may therefore receive an approximation of the physical object during the observation in virtual space as if observing directly in physical space. The representation and/or overlay may alter the visual of the representation for the viewer such that it is not the same as a direct observation of a physical object. This may be, for example when a temperature or camera detecting in a non-visual spectrum is used and overlaid such that the virtually rendered object with information overlaid thereon may be represented in color corresponding to temperature, similar to a three dimensional heat map. - Exemplary embodiments described herein include systems and methods for providing a virtual presence system in which an object may be observed. The observation may include additional information beyond (or in addition to) visual inspection, such as through different frequencies, temperature, or other sensor information, and/or may include remote inspection by a viewer removed from the test location or facility.
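- The "three dimensional heat map" idea mentioned above can be pictured by mapping a scalar reading at each mesh vertex to a display color. A minimal sketch, assuming a simple blue-to-red ramp; the vertex list and temperature values are made-up inputs, not data from the disclosure.
```python
# Sketch: color mesh vertices by temperature to approximate a 3D heat map overlay.
# The mesh vertices and readings are placeholder data; the color ramp is a simple
# linear blue-to-red blend chosen for illustration.
def temperature_to_rgb(t: float, t_min: float, t_max: float) -> tuple:
    a = min(max((t - t_min) / (t_max - t_min), 0.0), 1.0)   # normalize to [0, 1]
    return (int(255 * a), 0, int(255 * (1.0 - a)))           # blue (cold) -> red (hot)

vertices = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (1.0, 1.0, 0.0), (0.0, 1.0, 0.0)]
vertex_temps = [65.0, 71.5, 88.0, 74.0]                       # degrees, per vertex

vertex_colors = [temperature_to_rgb(t, 60.0, 90.0) for t in vertex_temps]
for v, c in zip(vertices, vertex_colors):
    print(v, c)
```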
-
FIG. 3 illustrates an exemplary block representation of a virtual presence system according to embodiments described herein. Exemplary embodiments of a virtual presence system 300 may include any combination of segments, including, without limitation, a view system segment 304, a sensor node segment 302, and a network segment 306. The method may also include a processing segment 302A. Exemplary embodiments are described herein in terms of different segments for example and explanation only. The system does not require any specific integration or segregation of segments. For example, any combination of components may be used as would be apparent to a person of skill in the art. - In an exemplary embodiment, the view system segment may include a user interface for displaying the results described herein. The
view system segment 304 may include any combination of displays, includinginteraction stations 312 that permit user input and machine output including any digital display (augmented reality, virtual reality, 2-D screen, hologram, etc.). The user interface may be through a display or human machine interface. An exemplary embodiment of the display includes a virtual reality or augmented reality display/user interface. Other user interfaces may include digital displays such as 2-D screens, projectors, holograms, or other visual display system. Exemplary embodiments of the system are configured to display a virtual rendering of the object under observation with or without an environment around the object. The object may be the environment itself, and does not require a specific component for observation. The system is configured to display virtual representations of information about the physical environment including the physical object overlaid onto, positioned adjacent, or otherwise in relation to the virtual rendering of the object. In an exemplary embodiment, the representations of information is a camera feed conformed about the virtual rendering of the object such that the display of the representation of information with the virtual object is a recreated three dimensional view corresponding to the physical object under observation as seen by one or more sensors, including one or more cameras. Other information may be overlaid or displayed on the virtual rendering of the object, such as, for example, color coded areas, call outs, text, or other display of information in relation to the virtual rendering of the test object corresponding to the information of the physical object. - The
sensor node segment 302 of thesystem 300 may include any combination of sensors, controls, processing, orother components 308 for collecting the information for display. An exemplary embodiment of the sensor node segment is configured to receive data from the physical environment and/or physical object. The nodes may include any sensor, such as a camera, thermal detector, etc. The sensor node segment may also include components for system master timing and/or synchronization, one or more processors, and one or more memory for storing tasks and/or data associated with the system, and/or controlling the one or more sensors or other sensor node segment components. In an exemplary embodiment, the sensor node segment or one or more components of the sensor node segment may be positioned within an observation environment boundary. The observation environment boundary may segregate the observation environment from a remainder of the environment and/or one or more users. The observation environment boundary may be used to contain an environment, such that temperature, pressure, humidity, and other environment factors may be controls, as well as contain chemicals, exhaust, heat, or other hazardous or unhealthy conditions from human observers. - In an exemplary embodiment of methods using embodiments of the virtual presence system, the
system 300 may be calibrated in a processing segment 302A. For example, the processing segment 302A may include the calibration of sensor nodes from the sensor node segment 302, including illumination and camera performance parameters, and physical mapping of sensor nodes to the physical object and/or facility hardware and data feeds. The alignment of sensor nodes baselines the 3D visualization and may complete the system initialization. As described herein, the calibration of the virtual presence system may be manual, automated, or a combination thereof. - The
network segment 306 may include one or more components such as network hardware, timing, communication, etc. 314. An exemplary embodiment of the network segment includes methods and components for communication between different components of the system and/or to or from the test environment. For example, the physical data traffic infrastructure including switches, routers, and cabling that connects the system components. -
FIG. 4 illustrates an exemplary test system architecture according to embodiments described herein for the virtual presence system. - The
virtual presence system 400 includes a location for thephysical object 402. Thephysical object 402 is any physical object for observation and/or testing in the environment. The observation environment is defined by the observation environment boundary. The observation environment boundary may be a physical boundary or separation or may simply be imposed by the field of view or detection of the one or more sensors. As described herein, the test environment boundary permits the delineation, separation, and/or control of the observation environment including the physical object under observation. The observation environment boundary may be sealed, such as to control pressure environments, may be sealed and/or vented to contain hazardous materials, or may include other structures, components, and features as the environmental needs dictate for performing the desired observation of the physical object. - Exemplary embodiments include
sensors 406 and control systems 408 within the observation environment for receiving and transmitting data about the physical object 402. The sensors may be any combination of data receiving inputs, such as different video-source sensors 404 (e.g. cameras) utilized to provide video coverage across different wavebands. Any combination of sensor types, quantities, qualities, locations, etc. may be used within the scope of the present disclosure. Different sensors and cameras are illustrated in FIG. 4 as cameras C1-C4 and sensors S1-S4. As used herein a camera is a type of sensor. Sensor hardware may be based upon a specific test environment and designed or configured for specific imaging requirements and may vary by installation or test. Exemplary control systems 408 may be ruggedized depending on the local test unit environment and hardware used for the command/control unit. The system may provide other components based on the test environment, such as, for example, shock isolation. The control system 408 may manage video traffic and provide accurate timing across sensor nodes. The control system 408 may be positioned based upon cable lengths and environmental considerations. The control system 408 may include memory to provide local data storage. - Exemplary embodiments of the test
cell presence system 400 may include a data aggregation hub 410. The data aggregation hub may include one or more processors and one or more memories and other components for managing the synchronization, video routing, command traffic, or other features of the network segment described herein. The aggregation hub 410 may receive the data feeds from the sensors within the test environment. The data aggregation hub 410 may also be configured to perform any of the data aggregation, analysis, filtering, synthesizing, or other modification of the raw data from the sensors in the test environment. The data aggregation hub may be proximate the test environment or may be remote therefrom. - Exemplary embodiments of the system may include a viewer system segment including user visual displays. Any
2-D screen 412 or user display device may also be used. Alternatively, or in addition thereto, any digital (either virtual, augmented, or holographic) reality system 414 may be used. In an exemplary embodiment, the digital reality display may use “inside-out” tracking with all tracking hardware present on a headset. Other tracking and control inputs may also be used. For example, a controller, such as a handheld remote, may be used. The exemplary tracking and control components may be used to alter the view of the digital reality by changing perspective, zooming, changing display information/inputs, or combinations thereof. Exemplary embodiments may reduce the connections needed between the headset and the rest of the system. -
FIG. 4 illustrates exemplary virtual representations of a physical object as viewed through a digital display and virtual reality displays. -
FIGS. 5A-5D illustrate exemplary display options in which information is displayed to a user in combination with the virtual representation of the physical object. In an exemplary embodiment, non-imaging data may be provided with the virtual representation of the test object on a display as a pop up. Exemplary embodiments of the system and method are configured to receive different sources and types of data. The received data may not include visual or imaging data that can be grafted onto the shape or virtual model of the physical object for a direct overlay of the data onto the virtual representation of the physical object. However, this non-conforming information may be displayed in other ways. As illustrated in FIGS. 5A-5D , the information may be displayed on a pop up or information display window displaying the data proximate to or with a virtual object indicating the source of the information. Other display options may also be used, such as providing other virtual object overlays. For example, color coding or a symbol corresponding to or representing the displayed data may be used as an overlay of the virtual representation of the physical test object. As illustrated in FIG. 5A , if a temperature sensor is determined to be out of range, the location of the temperature sensor on the virtual representation of the physical object may change color or a symbol (illustrated as a star in FIG. 5A ) may be used to draw attention to that location of the virtual representation of the physical object. Different temperature color codes may be used to correspond to or indicate different things, such as in range, out of range, high, low, or approximate temperature range, etc. Also as illustrated in FIG. 5A , text information or other information from a data source may be provided as an overlay positioned in proximity to the virtual representation of the test object corresponding to the source of data represented in the overlay. As illustrated, the temperature associated with the symbol displayed on the virtual representation is displayed to a user.
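- A minimal sketch of the colored or symbolic status indicators described above: classify a non-imaging reading against its limits and choose an overlay symbol, color, and label for the sensor's location on the virtual model. The specific colors, symbols, and limits below are illustrative assumptions, not values from the disclosure.
```python
# Sketch: turn a non-imaging sensor reading into an overlay indicator
# (color, symbol, label) positioned at the sensor's location on the virtual model.
# Status names, colors, and the star symbol are illustrative choices.
def status_indicator(name: str, value: float, low: float, high: float) -> dict:
    if value < low:
        status, color = "LOW", (0, 0, 255)        # blue
    elif value > high:
        status, color = "HIGH", (255, 0, 0)       # red
    else:
        status, color = "IN RANGE", (0, 255, 0)   # green
    return {
        "symbol": "*" if status != "IN RANGE" else "o",
        "color": color,
        "label": f"{name}: {value:.1f} ({status})",
    }

print(status_indicator("Temp S3", 93.4, low=60.0, high=90.0))
# {'symbol': '*', 'color': (255, 0, 0), 'label': 'Temp S3: 93.4 (HIGH)'}
```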
FIGS. 5A-5D illustrate exemplary demonstrations of a rendered sample virtual environment to demonstrate the notional system user experience. Temperature sensors and other non-imaging data sources could display their status via colored or symbolic indicators on the VR model (FIG. 5A ). Detailed information may appear when the data feed node is “looked at” by the user. Navigation in the virtual environment may control, engage, interact with, and/or view different portions of the system. For example, the system may be configured to detect head movements, which may be used to control the short distance travel and precision positioning of the virtual display environment. Exemplary embodiments may also include hand controllers or other inputs that may be used to trigger long distance movement within the virtual display environment. For example, the system may include a hand held controller that may include a joy stick, buttons, toggle, or other controller(s). In an exemplary embodiment, the user may select to “move to” a given target via the thumb-stick command or other input. Left, or right movement of the thumb-stick may be used to adjust the view's rotation, while reverse movements may be activated by pushing backwards on the thumb-stick. Exemplary embodiments, may suspend or otherwise manipulate the display of the virtual environment, such as through fade in/out, blinking, screen freeze, or other transition to minimize user disorientation during teleporting or rotation from one view or viewing area of the virtual environment to another. As a navigation aid, the system may be configured to receive or define locations for rapid repositioning. The hand controller may superimpose representative user interface controls and information display as illustrated herein. The interface (as illustrated inFIG. 5D ) may be normally hidden within the virtual display environment and may be called up via an input, such as a menu button or head motion input, and then operated with a controller, input, or gesture recognition. - Exemplary embodiments described herein include different implementations for the system, ranging from a simple distribution of a small number of cameras that are approximately located in a less-than-detailed three dimensional model to a high-fidelity rendering of the physical object with live video feeds from numerous precisely aligned and calibrated cameras draped seamlessly onto an accurate three dimensional model of the physical object.
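- The navigation scheme described above (thumb-stick rotation, "move to" teleporting to saved targets, and a brief fade to limit disorientation) might be organized roughly as sketched below. This is a hedged illustration only; the class and method names are invented and are not part of the disclosed system.
```python
# Rough sketch of virtual-environment navigation: thumb-stick rotation, teleport
# to a named target with a short fade, and reverse movement. Names are invented.
class ViewNavigator:
    def __init__(self):
        self.position = [0.0, 0.0, 0.0]
        self.yaw_degrees = 0.0
        self.saved_targets = {"test_object": [2.0, 0.0, 1.0]}   # rapid-reposition aids

    def rotate(self, thumbstick_x: float, degrees_per_step: float = 15.0) -> None:
        """Left/right thumb-stick input adjusts the view's rotation."""
        self.yaw_degrees = (self.yaw_degrees + thumbstick_x * degrees_per_step) % 360.0

    def step_back(self, distance: float = 0.5) -> None:
        """Backward push on the thumb-stick moves the viewpoint in reverse."""
        self.position[2] -= distance

    def teleport(self, target_name: str) -> None:
        """'Move to' a saved target; fade out/in to minimize disorientation."""
        self._fade(out=True)
        self.position = list(self.saved_targets[target_name])
        self._fade(out=False)

    def _fade(self, out: bool) -> None:
        print("fading out..." if out else "fading in...")

nav = ViewNavigator()
nav.rotate(+1.0)
nav.teleport("test_object")
print(nav.position, nav.yaw_degrees)
```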
- Exemplary embodiments permit the operator(s) to set up fixed “virtual” display feeds that deliver information to a standard two dimensional display. This enables other personnel to view selected imagery feeds without virtual reality. The system may be used to render the user's virtual reality point of view to a two dimensional display, and/or permit the user of the two dimensional display to rotate or navigate the virtual object through the two dimensional interface. The two-dimensional display may also provide information about the three dimensional user perspective. As seen in
FIG. 5C , an exemplary virtual representation of the test object may be displayed with the perspective position of the viewer through a virtual reality or other three dimensional display system is indicated to represent the focus, perspective, and view of the three dimensional viewer to the two dimensional viewer. As illustrated, the headset/lenses 502 of the three dimensional viewer are represented on the virtual representation of the physical environment to indicate position and direction of the three dimensional viewer. - Exemplary embodiments may permit multiple users to engage with the system simultaneously through any combination of user interface alternatives described herein. For example, one or more users may experience the digital reality as well as one or more other users may experience through two-dimensional displays or even single dimension data feeds. The system may be configured to permit users to control and interact through the user interface either independently, such that each user can manipulate their personal view and receiving corresponding data feeds, or collaboratively such that views may be shared or manipulated collectively, or any combination thereof.
- Exemplary embodiments of the system described herein may include storage for retaining raw data feeds, analyzed data feeds, visual feeds, and any combination thereof. For example, the system may capture, store, and replay the user interaction with the system during a use session such that the visual experience of a user may be captured and replayed. The system may also record raw feeds such as from sensors such that specific information may be replayed as desired. The system may also be configured to further analyze, manipulate, or otherwise handle any of the retained data in order to replay any combination of information or generate new information or displays. In an exemplary embodiment, the virtual representation and/or one or more data feeds may be stored such that a user may recreate the three dimensional display on demand. The user may thereafter interact with the system, such as through movement detection or other user input, to manipulate the user display dynamically during the replay session.
- In an exemplary embodiment, the system may be configured to record one or more data feeds from one or more sensors. For example, referring back to
FIG. 1A , a medical procedure may be captured and recorded from one or more cameras. The system may also include one or more systems or methods for generating a virtual dimensional mesh in which to overlay the camera feeds. The system may permit remote observation and/or replay of the procedure through a three-dimensional display system. The observer, through the system described herein, may interact with the display such that the medical procedure can be observed from different directions, replayed, paused, zoomed in or out, or otherwise manipulated. One or more users may simultaneously or separately replay or interact with the system such that the display experience can be shared or can remain independent and separate. The system and methods described herein may, for example, be used for training after a procedure is complete. The training session may use recordings of an actual procedure, but permit a trainer to pause the procedure or provide specific perspective useful to the training session without interfering with the procedure itself. As another example, referring toFIG. 1B , an industrial process or operation may be observed and/or monitored. The remote user may be separate from the environment of the object under observation for any reason, such as health concerns, physical location separation, etc. The system may permit the three dimensional observation of the object to retain perspective of the object under observation that may be lost if observed on a conventional two dimensional display. - The system may also be configured to take in data feeds from other sources. Referring back to
FIG. 1A , the inserted medical device may have a tracker or other location detecting sensor at an end of the device. The system may be configured to provide a three dimensional image of the patient undergoing the procedure, and may provide an additional virtual object representation in the three dimensional display representing the location of the medical device as determined by the location detecting sensor and may position the virtual object representation relative to the virtual patient based on the relative position of the location the physical device to the physical patient. The location of an object is provided as merely exemplary, any additional information may be received and displayed in the system. Referring back toFIG. 1B , the system may permit a user to observe the object and overlay additional information, such as a visual feed, temperature, pressure, etc. data. The additional information may be represented as text overlaid in a position approximate or correlated to the location on the virtual object. The additional information may also be overlaid on the dimensional mesh or virtual representation, such as by a color coding or symbolic scheme. -
FIGS. 6-8 illustrates an exemplary system reconstruction to illustrate the system and concepts described herein. Abox 602 is chosen as a test object for observation. The box is virtually modeled and a three dimensional mesh model is used to render the video feeds from three camera sources C1, C2, C3 onto the virtual representation of the physical object. As illustrated inFIG. 6 , the test environment includes thetarget object 602 under observation and three cameras C1, C2, C3. As illustrated, a bust figuring is used to illustrate an imaging obstruction in one camera feed, C1. Therefore, since the system has no knowledge that the figure is an obstruction, the system renders the bust onto the model.FIG. 7A illustrates the image received from camera C1;FIG. 7B illustrates the camera feed from camera C2; andFIG. 7C illustrates the feed from camera C3. As illustrated, the bust figure appears in the image of camera C1 in front of thetarget object 602.FIG. 8 illustrates the virtual representation of the physical object with the information from the camera feeds superimposed onto the virtual model. As illustrated, the image of the bust is integrated onto the side of the virtual representation of the cube as the system is unaware that the feed is obstructed and does not correspond to the model representing the physical object. - In an exemplary embodiment, the system may include depth sensors as one or more components of the virtual presence system. The depth sensors may be used to generate the three dimensional mesh or structure for modeling the virtual representation of the physical environment. An exemplary, therefore, may include a system and method of providing or receiving depth sensor outputs for use in an exemplary embodiment to create a three dimensional mesh for use in the virtual object overlay. In an exemplary embodiment, a three dimensional rendering method may include the user of a depth sensor, either in combination or separate from the camera or video feed. In an exemplary embodiment, a combined color camera depth sensor is used. Exemplary embodiments can be used to create a three dimensional mesh for the perspective of the camera.
-
FIGS. 10A-10C illustrate exemplary renderings of a virtual object with image overlay based on a dimensional mesh created from one or more depth sensors as described herein.FIG. 10A illustrates the front, andFIG. 10B illustrates a side, andFIG. 10C illustrates a top view of an exemplary depth sensor output for use in an exemplary embodiment to create a three dimensional mesh for use in the virtual object overlay. In an exemplary embodiment, a three dimensional rendering method may include the user of a depth sensor, either in combination or separate from the camera or video feed. In an exemplary embodiment, a combined color camera depth sensor is used. Exemplary embodiments can be used to create a three dimensional mesh for the perspective of the camera. - In an exemplary embodiment, the virtual object or dimensional mesh for overlaying information or rendering a video display for three dimensional display may be dynamic such that the dimensional mesh may change and correspond to the physical environment in real time or semi-real time. To reduce processing or band width or improve fidelity, the system may be configured to update the dimensional mesh for a portion of the virtual representation while maintaining other portions static. For example, if the observation is on a surgery or industrial operation in which only a portion of the object of observation is moving or changing, only that portion of the virtual representation needs to be updated. Accordingly, the system may be configured to automatically detect changes and update portions of the virtual representation accordingly and/or may be programed to update the virtual representation for an identified region of the virtual representation.
- Exemplary embodiments may be configured to resolve small objects (for example, ˜0.05 inches or less). Exemplary embodiments of the system and methods described herein may allow the user to “walk around” in digital reality and monitor critical joints, items and connection points. The system may deliver multi-spectral sensing capability (visible and infrared wavebands, as examples) with continuous, real-time 3D video feeds. The system architecture may support integration of active viewing of other data sources (temperatures, pressures, data feeds, etc.). As a system, exemplary embodiments permit faster visualization and a more comprehensive understanding of the operational environment, helping detect minor issues before they grow into major problems.
- Exemplary embodiments may include system architectures that may consider both the large amounts of real-time data required for rendering the test cell presence into virtual or augmented reality and the practical limitations of today's state-of-the-art computers. Exemplary embodiments of a virtual presence system implementation may include sensible camera selection, appropriate network design, intelligent bandwidth management, and practical considerations about the physical environment coverage requirements. Large-scale, high-resolution viewing of the physical environment unit may include a form of video compression, or a method for video feed switching implemented as a “Level of Detail” viewing capability. “Level of Detail” may automatically (or manually) reduce the resolution of the camera field into digital reality or for display depending on the virtual distance between the viewer and camera and actual resolution of the display. “Level of Detail” may adjust the resolution or other fidelity (sampling rate, etc.) displayed in digital reality or other display methods depending on a virtual distance between a virtual viewing perspective and the virtual representation of the test object. For example, if a user through the digital reality interface moves closer to the virtual representation of the test object, the fidelity or resolution of the display may increase, while the fidelity or resolution may be reduced as the viewer digitally moves further away from the virtual representation of the test object. These methods may be incorporated to preserve transmission bandwidth.
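- The "Level of Detail" switching described above can be approximated by choosing a stream resolution from the virtual distance between the viewer and the object, capped by what the display can actually show. The distance bands and resolution ladder below are assumed example values, not parameters from the disclosure.
```python
# Sketch: pick a video feed resolution from the viewer's virtual distance to the
# object, never exceeding the display's own resolution.
def level_of_detail(virtual_distance_m: float, display_max_height_px: int = 1440) -> int:
    ladder = [(2.0, 2160), (5.0, 1080), (10.0, 720), (float("inf"), 480)]
    for max_distance, height_px in ladder:
        if virtual_distance_m <= max_distance:
            return min(height_px, display_max_height_px)   # cap at display resolution
    return 480

print(level_of_detail(1.2))    # 1440 -> close up, limited by the display
print(level_of_detail(7.5))    # 720  -> mid distance
print(level_of_detail(25.0))   # 480  -> far away, save bandwidth
```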
- Exemplary embodiments may use any combination of hundreds of potential cameras. Any combination of cameras, sensors, and data sources may be used in any combination. Therefore, there may be a single camera or any number of multiple cameras, sensors, or other data feeds or sources. The cameras vary by waveband, image type, focal plane size, pixel pitch, frame rate, data output format, interface, environmental performance range, and other parameters. In an exemplary embodiment, the system includes hard-mounted sensors with fixed focal length lenses. The fixed location and focal length of the cameras may provide for easier calibration and mesh overlay of the received data on the rendered virtual object. The system may also use variable locations and/or focal lengths in which the system may be manually or automatically recalibrated according to embodiments described herein. In an exemplary embodiment, highly stabilized and steerable custom imaging systems may be used that provide accurate and repeatable positioning.
- Calibration of exemplary embodiments described herein may include white balance and other performance parameters. Calibration may include the physical three dimensional mapping of the physical system components, physical object under observation, and the relative alignment of the sensor nodes to the three dimensional map. A calibration process permits sensor node alignment and permits the proper generation of three dimensional imagery from the two dimensional video feeds. Calibration may be used to establish various intrinsic and extrinsic parameters of the respective sensor nodes and may record them as part of an initialization process. Intrinsic parameters (lens focal length, camera pixel pitch, etc.) typically remain fixed throughout the lifetime of the sensor node, while extrinsic parameters such as sensor node position and orientation may vary due to operational needs. The use of fixed system reference points and rigid mounting techniques helps minimize recalibration burdens.
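- Mapping two dimensional video onto the three dimensional model relies on the intrinsic and extrinsic parameters recorded at calibration: a model point is transformed into the camera frame and projected through the pinhole model to find the pixel whose color drapes onto it. A minimal numpy sketch follows; all numeric parameter values are assumptions for illustration.
```python
# Sketch: project a 3D model vertex into a calibrated camera image.
# K holds intrinsic parameters (focal lengths, principal point); R and t are the
# extrinsic pose of the camera. All numeric values are assumed examples.
import numpy as np

K = np.array([[800.0,   0.0, 640.0],
              [  0.0, 800.0, 360.0],
              [  0.0,   0.0,   1.0]])          # intrinsics (pixels)
R = np.eye(3)                                   # camera orientation (extrinsic)
t = np.array([0.0, 0.0, 2.0])                   # camera translation (meters)

def project(vertex_world: np.ndarray) -> tuple:
    p_cam = R @ vertex_world + t                # world -> camera frame
    uvw = K @ p_cam                             # camera frame -> homogeneous pixels
    return (uvw[0] / uvw[2], uvw[1] / uvw[2])   # pixel coordinates (u, v)

# The pixel found here supplies the color draped onto this vertex of the mesh.
print(project(np.array([0.1, -0.05, 0.5])))
```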
- In an exemplary embodiment, the system may also include dynamic or controllable intrinsic or extrinsic parameters, such as camera focal length, position, orientation, etc. The system may be configured to detect a change in a dynamic parameter and recalibrate the system accordingly. The recalibration may be automatic, manual, or a combination thereof. The system may also include one or more identification sensors to assist in calibration. For example, the system may detect or determine a location, or use visual or other data recognition, to relate a data stream to the virtual representation to permit calibration and data overlay to the virtual representation.
- Exemplary embodiments of the system including a multi-camera system can benefit from using a master timing device and master clock. An exemplary system architecture is illustrated in FIG. 9 . The system may include any combination of switch controllers 902 coupled to any combination of cameras C1-Cn. One embodiment may use IEEE 1588 compliant cameras to simplify system level timing synchronization and to synchronize all computers, cameras, and networking equipment in a system. Other standards, protocols, components, methods, and combinations thereof may also be used for timing, synchronization, or amalgamating data. A master clock can aid in determining and fixing any sources of latency that may occur. The system may include any combination of data aggregation hubs 912 or other analytics components as described herein. The system may include any combination of digital displays 904 for rendering the virtual object in conjunction with information from the one or more data sources. The system may be integrated into a conventional or previous system architecture 910 and protected through a firewall 908 and have access to the system network or internet 906 .
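- With all cameras disciplined to a common master clock (for example via a precision time protocol), frames from different cameras can be grouped by nearest timestamp so the renderer always drapes a time-consistent set of images onto the model. The tolerance and timestamps below are assumed example values.
```python
# Sketch: group frames from multiple cameras by nearest master-clock timestamp.
# The 5 ms tolerance and the timestamps are assumed example values.
def align_frames(reference_time: float, feeds: dict, tolerance_s: float = 0.005) -> dict:
    aligned = {}
    for camera_id, timestamps in feeds.items():
        nearest = min(timestamps, key=lambda ts: abs(ts - reference_time))
        if abs(nearest - reference_time) <= tolerance_s:
            aligned[camera_id] = nearest          # frame considered synchronized
    return aligned

feeds = {
    "C1": [10.000, 10.033, 10.066],
    "C2": [10.001, 10.034, 10.067],
    "C3": [10.020, 10.053, 10.086],               # this camera drifted out of tolerance
}
print(align_frames(10.033, feeds))                 # {'C1': 10.033, 'C2': 10.034}
```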
FIG. 9 . The system may include any combination ofswitch controllers 902 coupled to any combination of cameras C1-Cn. One embodiment may use IEEE 1558 compliant cameras to simplify system level timing synchronization and synchronizing all computers, cameras, and networking equipment in a system. Other standards, protocols, components, methods, and combinations thereof may also be used for timing, synchronization, or amalgamating data. A master clock can aid in determining and fixing any sources of latency that may occur. The system may include any combination ofdata aggregation hub 912 or other analytics components as described herein. The system may include any combination ofdigital displays 904 for rendering the virtual object in conjunction with information from the one or more data sources. The system may be integrated into a conventional orprevious system architecture 910 and protected through afire wall 908 and have access to the system network orinternet 906. - An exemplary network may include a physical topology that supports future increases in camera count and capability. The transfer of video from cameras to optional local data storage nodes may also be used to minimize sharing of links and allows direct calculation of the bandwidth and storage capacity requirements. Long-haul links, such as those between the test area and display area, may use fiber. Remaining links may be copper, unless greater resolution is required.
- Exemplary embodiments of system components of the system network may include managed equipment, in that they have a dedicated management interface and maintain operational statistics. This visibility into network behavior may be used to verify the configurations and expected results against real system operations.
- Exemplary embodiments seek to minimize unmonitored choke points in which excessive flows of video data converge. The network structure may be used to increase flexibility in resource allocation and can expand to incorporate additional cameras and storage nodes on an as-needed basis.
- Exemplary embodiments may use various configurations of video compression or various compression techniques such as H.264, H.265, JPG, and JPG2000. These compression methods could reduce the amount of bandwidth required by the network but introduce latency and require some form of processing power and may also reduce overall image fidelity.
- Exemplary embodiments of the system described herein can be based in software and/or hardware. While some specific embodiments of the invention have been shown, the invention is not to be limited to these embodiments. For example, most functions performed by electronic hardware components may be duplicated by software emulation. Thus, a software program written to accomplish those same functions may emulate the functionality of the hardware components in input-output circuitry. The invention is to be understood as not limited by the specific embodiments described herein, but only by the scope of the appended claims.
- Although embodiments of this invention have been described with reference to the accompanying drawings, it is to be noted that various changes and modifications will become apparent to those skilled in the art. Such changes and modifications are to be understood as being included within the scope of embodiments of this invention as defined by the appended claims.
Claims (14)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US16/777,809 US20200242280A1 (en) | 2019-01-30 | 2020-01-30 | System and methods of visualizing an environment |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201962798951P | 2019-01-30 | 2019-01-30 | |
US16/777,809 US20200242280A1 (en) | 2019-01-30 | 2020-01-30 | System and methods of visualizing an environment |
Publications (1)
Publication Number | Publication Date |
---|---|
US20200242280A1 true US20200242280A1 (en) | 2020-07-30 |
Family
ID=71731426
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US16/777,805 Active 2040-10-01 US11681834B2 (en) | 2019-01-30 | 2020-01-30 | Test cell presence system and methods of visualizing a test environment |
US16/777,809 Abandoned US20200242280A1 (en) | 2019-01-30 | 2020-01-30 | System and methods of visualizing an environment |
Family Applications Before (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US16/777,805 Active 2040-10-01 US11681834B2 (en) | 2019-01-30 | 2020-01-30 | Test cell presence system and methods of visualizing a test environment |
Country Status (1)
Country | Link |
---|---|
US (2) | US11681834B2 (en) |
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113674430A (en) * | 2021-08-24 | 2021-11-19 | 上海电气集团股份有限公司 | Virtual model positioning and registering method and device, augmented reality equipment and storage medium |
US11681415B2 (en) * | 2018-10-31 | 2023-06-20 | Apple Inc. | Near-viewing notification techniques |
US11750794B2 (en) | 2015-03-24 | 2023-09-05 | Augmedics Ltd. | Combining video-based and optic-based augmented reality in a near eye display |
US11801115B2 (en) | 2019-12-22 | 2023-10-31 | Augmedics Ltd. | Mirroring in image guided surgery |
US11896445B2 (en) | 2021-07-07 | 2024-02-13 | Augmedics Ltd. | Iliac pin and adapter |
US11974887B2 (en) | 2018-05-02 | 2024-05-07 | Augmedics Ltd. | Registration marker for an augmented reality system |
US11980506B2 (en) | 2019-07-29 | 2024-05-14 | Augmedics Ltd. | Fiducial marker |
US11980429B2 (en) | 2018-11-26 | 2024-05-14 | Augmedics Ltd. | Tracking methods for image-guided surgery |
US12044856B2 (en) | 2022-09-13 | 2024-07-23 | Augmedics Ltd. | Configurable augmented reality eyewear for image-guided medical intervention |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11836965B2 (en) * | 2020-08-12 | 2023-12-05 | Niantic, Inc. | Determining visual overlap of images by using box embeddings |
US20240062476A1 (en) * | 2022-08-22 | 2024-02-22 | Hewlett-Packard Development Company, L.P. | Augmented reality presentations of information from quantification instruments |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20100315416A1 (en) * | 2007-12-10 | 2010-12-16 | Abb Research Ltd. | Computer implemented method and system for remote inspection of an industrial process |
Family Cites Families (25)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7737965B2 (en) | 2005-06-09 | 2010-06-15 | Honeywell International Inc. | Handheld synthetic vision device |
US20080310707A1 (en) | 2007-06-15 | 2008-12-18 | Microsoft Corporation | Virtual reality enhancement using real world data |
KR101085390B1 (en) | 2008-04-30 | 2011-11-21 | 주식회사 코아로직 | Image presenting method and apparatus for 3D navigation, and mobile apparatus comprising the same apparatus |
US8963943B2 (en) | 2009-12-18 | 2015-02-24 | Electronics And Telecommunications Research Institute | Three-dimensional urban modeling apparatus and method |
US20110169832A1 (en) * | 2010-01-11 | 2011-07-14 | Roy-G-Biv Corporation | 3D Motion Interface Systems and Methods |
US8880341B2 (en) | 2010-08-30 | 2014-11-04 | Alpine Electronics, Inc. | Method and apparatus for displaying three-dimensional terrain and route guidance |
US8884984B2 (en) | 2010-10-15 | 2014-11-11 | Microsoft Corporation | Fusing virtual content into real content |
US8924475B2 (en) * | 2011-01-05 | 2014-12-30 | Kazuo Morishita | Emergency response center |
US20120264510A1 (en) | 2011-04-12 | 2012-10-18 | Microsoft Corporation | Integrated virtual environment |
EP2518444B1 (en) | 2011-04-29 | 2014-06-11 | Harman Becker Automotive Systems GmbH | Navigation device and method of determining a height coordinate |
US9286712B2 (en) | 2013-03-15 | 2016-03-15 | Google Inc. | System and method for approximating cartographic projections by linear transformation |
US20150206343A1 (en) | 2014-01-17 | 2015-07-23 | Nokia Corporation | Method and apparatus for evaluating environmental structures for in-situ content augmentation |
JP6376807B2 (en) | 2014-04-02 | 2018-08-22 | キヤノン株式会社 | Display device, display control method, and program |
EP3206184A1 (en) * | 2016-02-11 | 2017-08-16 | NXP USA, Inc. | Apparatus, method and system for adjusting predefined calibration data for generating a perspective view |
US10665019B2 (en) | 2016-03-24 | 2020-05-26 | Qualcomm Incorporated | Spatial relationships for integration of visual images of physical environment into virtual reality |
CN109690634A (en) | 2016-09-23 | 2019-04-26 | 苹果公司 | Augmented reality display |
CA2957977C (en) | 2017-02-15 | 2019-03-26 | Synaptive Medical (Barbados) Inc. | Sensored surgical tool and surgical intraoperative tracking and imaging system incorporating same |
US10417810B2 (en) | 2017-05-31 | 2019-09-17 | Verizon Patent And Licensing Inc. | Methods and systems for rendering virtual reality content based on two-dimensional (“2D”) captured imagery of a three-dimensional (“3D”) scene |
US10841537B2 (en) | 2017-06-09 | 2020-11-17 | Pcms Holdings, Inc. | Spatially faithful telepresence supporting varying geometries and moving users |
US10497177B1 (en) | 2017-09-19 | 2019-12-03 | Bentley Systems, Incorporated | Tool for onsite augmentation of reality meshes |
EP3759693A4 (en) | 2018-02-27 | 2021-11-24 | Magic Leap, Inc. | Matching meshes for virtual avatars |
US10984586B2 (en) | 2018-07-27 | 2021-04-20 | Microsoft Technology Licensing, Llc | Spatial mapping fusion from diverse sensing sources |
US10271040B1 (en) | 2018-08-09 | 2019-04-23 | Alive 3D | Dynamic angle viewing system |
JP7534311B2 (en) | 2018-10-09 | 2024-08-14 | レソナイ インコーポレイテッド | Systems and methods for 3D scene augmentation and reconstruction - Patents.com |
WO2020099251A1 (en) | 2018-11-15 | 2020-05-22 | Koninklijke Philips N.V. | Systematic positioning of virtual objects for mixed reality |
- 2020
- 2020-01-30 US US16/777,805 patent/US11681834B2/en active Active
- 2020-01-30 US US16/777,809 patent/US20200242280A1/en not_active Abandoned
Patent Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20100315416A1 (en) * | 2007-12-10 | 2010-12-16 | Abb Research Ltd. | Computer implemented method and system for remote inspection of an industrial process |
Cited By (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11750794B2 (en) | 2015-03-24 | 2023-09-05 | Augmedics Ltd. | Combining video-based and optic-based augmented reality in a near eye display |
US12069233B2 (en) | 2015-03-24 | 2024-08-20 | Augmedics Ltd. | Head-mounted augmented reality near eye display device |
US12063345B2 (en) | 2015-03-24 | 2024-08-13 | Augmedics Ltd. | Systems for facilitating augmented reality-assisted medical procedures |
US11980507B2 (en) | 2018-05-02 | 2024-05-14 | Augmedics Ltd. | Registration of a fiducial marker for an augmented reality system |
US11974887B2 (en) | 2018-05-02 | 2024-05-07 | Augmedics Ltd. | Registration marker for an augmented reality system |
US11980508B2 (en) | 2018-05-02 | 2024-05-14 | Augmedics Ltd. | Registration of a fiducial marker for an augmented reality system |
US11681415B2 (en) * | 2018-10-31 | 2023-06-20 | Apple Inc. | Near-viewing notification techniques |
US11980429B2 (en) | 2018-11-26 | 2024-05-14 | Augmedics Ltd. | Tracking methods for image-guided surgery |
US11980506B2 (en) | 2019-07-29 | 2024-05-14 | Augmedics Ltd. | Fiducial marker |
US11801115B2 (en) | 2019-12-22 | 2023-10-31 | Augmedics Ltd. | Mirroring in image guided surgery |
US12076196B2 (en) | 2019-12-22 | 2024-09-03 | Augmedics Ltd. | Mirroring in image guided surgery |
US11896445B2 (en) | 2021-07-07 | 2024-02-13 | Augmedics Ltd. | Iliac pin and adapter |
CN113674430A (en) * | 2021-08-24 | 2021-11-19 | 上海电气集团股份有限公司 | Virtual model positioning and registering method and device, augmented reality equipment and storage medium |
US12044856B2 (en) | 2022-09-13 | 2024-07-23 | Augmedics Ltd. | Configurable augmented reality eyewear for image-guided medical intervention |
US12044858B2 (en) | 2022-09-13 | 2024-07-23 | Augmedics Ltd. | Adjustable augmented reality eyewear for image-guided medical intervention |
Also Published As
Publication number | Publication date |
---|---|
US11681834B2 (en) | 2023-06-20 |
US20200358833A1 (en) | 2020-11-12 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20200242280A1 (en) | System and methods of visualizing an environment | |
US11301199B2 (en) | Multi-viewpoint switched shooting system and method | |
JP2020514900A5 (en) | ||
RU2738220C1 (en) | Display control device, display control method and storage medium | |
RU2019131371A (en) | MIXED REALITY VIEWER SYSTEM AND METHOD FOR IT | |
KR101695249B1 (en) | Method and system for presenting security image | |
JP4700476B2 (en) | Multi-view video composition device and multi-view video composition system | |
CN106657906A (en) | Information equipment monitoring system with function of self-adaptive scenario virtual reality | |
KR20070041492A (en) | Method and system for performing video flashlight | |
CN104883557A (en) | Real time holographic projection method, device and system | |
CN109636763B (en) | Intelligent compound eye monitoring system | |
JP2010504711A (en) | Video surveillance system and method for tracking moving objects in a geospatial model | |
US20220351751A1 (en) | Camera tracking system for live compositing | |
CN108594999A (en) | Control method and device for panoramic picture display systems | |
EP3882866B1 (en) | Information processing system, information processing method, and program | |
AU2019271924B2 (en) | System and method for adjusting an image for a vehicle mounted camera | |
CN117278734B (en) | Rocket launching immersive viewing system | |
WO2021223667A1 (en) | System and method for video processing using a virtual reality device | |
US20240054739A1 (en) | Information processing apparatus, information processing method, and storage medium | |
JP2000152216A (en) | Video output system | |
WO2022205026A1 (en) | Product display method and apparatus, and electronic device | |
KR20050113488A (en) | Disital video recorder monitoring system and monitoring method | |
CN117669976A (en) | Three-dimensional visualization system device and method for railway traction substation | |
CN118264783A (en) | Fusion method, device, equipment and storage medium of two-dimensional video and three-dimensional scene | |
JP2023070220A (en) | Camera operation simulation device and program thereof, and camera image generation device and program thereof |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
STPP | Information on status: patent application and granting procedure in general |
Free format text: NON FINAL ACTION MAILED |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: FINAL REJECTION MAILED |
|
AS | Assignment |
Owner name: AUGMNTR, INC., CALIFORNIA Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:PAVLOFF, ALEXANDER GEORGE;REDD, BRYAN LAFAYETTE;REEL/FRAME:058915/0286 Effective date: 20200505 |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: NON FINAL ACTION MAILED |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: FINAL REJECTION MAILED |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: NON FINAL ACTION COUNTED, NOT YET MAILED |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: NON FINAL ACTION MAILED |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: FINAL REJECTION MAILED |