NZ715526B2 - System and method for role negotiation in multi-reality environments
- Publication number
- NZ715526B2
- Authority
- NZ
- New Zealand
Abstract
Provided herein are methods and systems for role negotiation with multiple sources. A method for role negotiation can comprise rendering a common field of interest that reflects a presence of a plurality of elements, wherein at least one of the elements is a remote element located remotely from another of the elements. A plurality of role designations can be received, each role designation associated with one of a plurality of devices, wherein at least one of the plurality of devices is a remote device located remotely from another of the plurality of devices. The common field of interest can be updated based upon the plurality of role designations, wherein each of the plurality of role designations defines an interactive functionality associated with the respective device of the plurality of devices. The ingress and/or egress of information to the one or more of the plurality of devices is managed based upon one or more of the plurality of role designations, wherein each of the plurality of role designations defines a transmission rule for managing ingress and egress of information associated with the respective device of the plurality of devices, wherein the transmission rule comprises one or more of a latency parameter and a bandwidth parameter, and wherein the transmission rule defines a portion of image processing to be offloaded to a proxy server.
Description
SYSTEM AND METHOD FOR ROLE NEGOTIATION IN MULTI-
REALITY ENVIRONMENTS
CROSS-REFERENCE TO RELATED PATENT APPLICATION
This application claims priority to U.S. Patent Application No.
13/929,080, filed June 27, 2013, herein incorporated by reference in its entirety.
BACKGROUND
Mixed reality systems enable physically separated users to engage in
multi-reality sessions in real-time. These systems may perform some degree of
image processing to enable such sessions. Users participating in multi-reality
sessions with other concurrent users may be operating in a particular role relative
to the other users. However, mixed reality systems do not provide a sufficient
means to negotiate changes to a user's role in real-time.
SUMMARY
It is to be understood that both the following general description and the
following detailed description are exemplary and explanatory only and are not
restrictive, as claimed. Disclosed are systems and methods for role switching in
multi-reality environments. In an aspect, the systems and methods of the present
disclosure can comprise designating a role for one or more users. As such,
processing of one or more images or video can be configured for the designated
role. In another aspect, the systems and methods of the present disclosure can
comprise negotiating one or more roles for one or more users. Interactive
functionality can be associated with the one or more roles and can be selectively
assigned to one or more users to provide an improved user experience.
Methods are described for role negotiation. One method can comprise
rendering a common field of interest that reflects a presence of a plurality of
elements, wherein at least one of the elements is a remote element located
remotely from another of the elements. A plurality of role designations can be
received. Each role designation can be associated with one of a plurality of
devices, wherein at least one of the plurality of devices is a remote device
located remotely from another of the plurality of devices. The common field of
interest can be updated based upon the plurality of role designations, wherein
each of the plurality of role designations defines an interactive functionality
associated with the respective device of the plurality of devices.
Another method can comprise rendering a common field of interest that
reflects a presence of a plurality of elements, wherein at least one of the elements
is a remote element located remotely from another of the elements. A plurality of
role designations can be received, each role designation associated with one of a
plurality of devices. At least one of the plurality of devices is a remote device
located remotely from another of the plurality of devices. One or more of the
ingress and egress of information can be managed to one or more of the plurality
of devices based upon one or more of the plurality of role designations, wherein
each of the plurality of role designations defines a transmission rule for
managing ingress and egress of information associated with the respective
device of the plurality of devices.
A system for image processing is described. The system can comprise a
display configured for displaying a common field of interest, a sensor configured
for obtaining image data, a processor in signal communication with the display
and the sensor. The processor can be configured to render a common field of
interest that reflects a presence of a plurality of elements, wherein at least one of
the elements is a remote element located remotely from another of the elements.
The processor can be configured to receive a role designation. The processor can
be configured to manage one or more of the ingress and egress of information
based upon the role designation. The processor can be configured to update the
common field of interest based upon the one or more of the ingress and egress of
information and output the updated common field of interest to the display.
Additional advantages will be set forth in part in the description which
follows or may be learned by practice. The advantages will be realized and
attained by means of the elements and combinations particularly pointed out in
the appended inventive concepts. It is to be understood that both the foregoing
general description and the following detailed description are exemplary and
explanatory only and are not to be considered restrictive.
BRIEF DESCRIPTION OF THE DRAWINGS
The accompanying drawings, which are incorporated in and constitute a
part of this specification, illustrate embodiments and together with the
description, serve to explain the principles of the methods and systems provided:
Figure 1 illustrates virtual interactive presence;
Figure 2A illustrates virtual interactive presence;
Figure 2B illustrates a local expert assisting a remote user;
Figure 3 illustrates an exemplary system architecture;
Figure 4A illustrates an exemplary process performed by the system of Figure 3;
Figure 4B illustrates an exemplary virtual interactive presence schematic;
Figure 4C illustrates an exemplary user interface;
Figure 4D illustrates an exemplary user interface;
Figure 4E illustrates an exemplary user interface;
Figure 4F illustrates an exemplary user interface;
Figure 4G illustrates an exemplary user interface;
Figure 5A illustrates an exemplary system architecture;
Figure 5B illustrates an exemplary method;
Figure 5C illustrates an exemplary system architecture;
Figure 5D illustrates an exemplary system architecture;
Figure 5E illustrates an exemplary system architecture;
Figure 6A illustrates an exemplary method;
Figure 6B illustrates an exemplary method;
Figure 6C illustrates an exemplary method;
Figure 7A illustrates an exemplary virtual presence system;
Figure 7B illustrates exemplary processes performed within a graphics server;
Figure 7C illustrates exemplary processes performed within a network server;
Figure 8 illustrates a side view of an exemplary VIP display;
Figure 9 illustrates a user’s view of an exemplary VIP display;
Figure 10 illustrates a user’s view of an exemplary VIP display;
Figure 11 illustrates an exemplary method;
Figure 12 illustrates another exemplary method;
Figure 13 illustrates virtual presence in a remote surgical environment;
Figure 14 illustrates merging of medical imaging with an operative field; and
Figure 15 illustrates an exemplary operational environment.
DETAILED DESCRIPTION
Before the present methods and systems are disclosed and described, it is
to be understood that the methods and systems are not limited to specific
synthetic methods, specific components, or to particular compositions, as such
may, of course, vary. It is also to be understood that the terminology used herein
is for the purpose of describing particular embodiments only and is not intended
to be limiting.
As used in the specification and the appended inventive concepts, the
singular forms “a,” “an,” and “the” include plural referents unless the context
clearly dictates otherwise.
Ranges may be expressed herein as from “about” one particular value,
and/or to “about” another particular value. When such a range is expressed,
another embodiment includes from the one particular value and/or to the other
particular value. Similarly, when values are expressed as approximations, by use
of the antecedent “about,” it will be understood that the particular value forms
another embodiment. It will be further understood that the endpoints of each of
the ranges are significant both in relation to the other endpoint, and
independently of the other endpoint.
“Optional” or “optionally” means that the subsequently described event
or circumstance may or may not occur, and that the description includes
instances where said event or circumstance occurs and instances where it does
not.
Throughout the description and claims of this specification, the word
“comprise” and variations of the word, such as “comprising” and “comprises,”
means “including but not limited to,” and is not intended to exclude, for
example, other additives, components, integers or steps. “Exemplary” means
“an example of” and is not intended to convey an indication of a preferred or
ideal embodiment.
Disclosed are components that can be used to perform the disclosed
methods and systems. These and other components are disclosed herein, and it is
understood that when combinations, subsets, interactions, groups, etc. of these
components are disclosed that while specific reference of each various individual
and collective combinations and permutation of these may not be explicitly
disclosed, each is specifically contemplated and described herein, for all
methods and systems. This applies to all aspects of this application including,
but not limited to, steps in disclosed methods. Thus, if there are a variety of
additional steps that can be performed it is understood that each of these
additional steps can be performed with any specific embodiment or combination
of embodiments of the disclosed methods.
The present methods and systems may be understood more readily by
reference to the following detailed description of preferred embodiments and the
Examples included therein and to the Figures and their previous and following
description.
Disclosed are methods and systems for role designation with multiple
video streams. The disclosed methods and systems can utilize virtual reality.
Virtual reality (VR) refers to a computer-based application which provides a
human-computer interface such that the computer and its devices create a
sensory environment which is dynamically controlled by the actions of the
individual, so that the environment appears “real” to the user. With VR, there is
communication between a computer system and a user. The computer creates a
sensory environment for the user to experience which may be, in one aspect,
multisensory (although this is not essential) and the computer creates a sense of
reality in response to user inputs.
In one exemplary aspect, the system disclosed can utilize at least two
types of VR, Immersive and Non-immersive. Immersive VR creates the illusion
that the user is actually in a different environment. In one aspect, the system
accomplishes this through the use of such devices as Head Mounted Displays
(HMDs), headphones, and input devices such as gloves or wands. In another
aspect, in order to enhance the realism of the experience, a plurality of Degrees of
Freedom (DOFs) are utilized, which the software can simulate. Generally, the
more the DOFs, the better the realism of the experience. Exemplary DOFs
include, without limitation: X, Y, Z, roll, pitch, and yaw.
Non-immersive VR creates an environment that is differentiable from the
user's surrounding environment. It does not give the illusion that the user is
transported to another world. Non-immersive VR works by creating a 3-
dimensional image and surround sound through the use of stereo projection
systems, computer monitors, and/or stereo speakers. Non-immersive VR can be
run from a personal computer without added hardware.
In one aspect, movement in Immersive VR can be realized by a system
through the use of optical, acoustical, magnetic, or mechanical hardware called
trackers. Preferably, the input devices have as many of these trackers as
possible, so that movement can be more accurately represented. For instance,
virtual gloves can have up to 3 trackers for each index, and more for the palm
and wrist, so that the user can grab and press objects. In one aspect, the trackers
can be equipped with positioning sensors that tell a computer which direction
the input is facing and how the input device is tilted in all directions. This gives
a sensor with six degrees of freedom.
Output devices bring the user to the virtual world. An example of an
output device that can be used in the present system includes, without limitation,
head mounted displays (HMD) in the form of glasses or goggles, which allow a
user to wear a display system on their head. One approach to the HMD is to use
a single Liquid Crystal Display (LCD), wide enough to cover both eyes.
Another approach is to have two separated displays, one for each eye. This
takes somewhat more computer power, since the images displayed are different.
Each display has a separate image rendered from the correct angle in the
environment. Eye-tracking can be combined with HMDs. This can allow, for
example, surgeons to move their eyes to the part of an image they want to
enhance.
Another example of an output device that can be used in an embodiment
of the present system is shuttered glasses. This device updates an image to each
eye every other frame, with the shutter closed on the other eye. Shuttered
glasses require a very high frame rate in order to keep the images from
flickering. This device is used for stereo monitors, and gives an accurate 3-d
representation of a 2-d object, but does not immerse the user in the virtual world.
Another output device that can be used in an embodiment of the present
system is a screen with multiple projectors. The screen can be either planar or
bent. A challenge when using multiple projectors on the same screen is that
there can be visible edges between the projections. This can be remedied by
using a soft-edge system wherein the projection goes more and more transparent
at the edges and the projections overlap. This produces an almost perfect
transition between the images. In order to achieve a desired 3D effect, shuttered
glasses can be used. Special glasses can be used that alternate between making
the glass either completely opaque or completely transparent. When the left eye
is opaque, the right one is transparent. This is synchronized to the projectors that
are projecting corresponding images on the screen.
In another aspect, a Cave Automatic Virtual Environment (CAVE) can
also be used in the present system. A CAVE can use mirrors in a cube-shaped
room to project stereo images onto the walls, giving the illusion that you are
standing in a virtual world. The world is constantly updated using trackers, and
the user is allowed to move around almost completely uninhibited.
Disclosed are methods and systems for role negotiation. Such methods
and systems can render a number of elements/participants virtually present into a
field of interest in a manner such that the users can interact for any given
purpose, such as the delivery of remote expertise. A field of interest can
comprise varying amounts of “real” and “virtual” elements, depending on a point
of view. Elements can include any “real” or “virtual” object, subject,
participant, or image representation. Various components of the disclosed
methods and systems are illustrated in Figure 1.
A common field of interest 101 can be a field within which elements are
physically and/or virtually present. Point of Reality (or Point of View) can refer
to the vantage of the element/participant that is experiencing the common field
of interest. In Figure 1, exemplary points of reality, or points of view, are shown
at 102 and 103, representing displays. The common field of interest 101 can
appear similar from both vantages, or points of view, but each comprises
differing combinations of local (physical) and remote (virtual)
elements/participants.
Local elements can be elements and/or participants which are physically
present in the common field of interest. In Figure 1, element A 105 is a local
element for field A 104 and is physically present in field A 104. Element B 107
is a local element for field B 106 and is physically present in field B 106. It is
understood that virtual elements (not shown) can be inserted or overlaid in field
A 104 and/or field B 106, as desired.
Remote elements can be elements and/or participants that are not
physically present in the common field of interest. They are experienced as
“virtually present” from any other local vantage point. As shown in Figure 1,
element B 107 is a remote element to field A 104 and is virtually present in field
A 104. Element A 105 is a remote element in field B 106 and is virtually present
in field B 106.
Methods for rendering a virtual interactive presence by combining local
and remote elements and/or participants can comprise one or more of the
following steps. A common local field can be rendered in a manner that reflects
the presence of the field, elements and/or participants. As shown in Figure 2A,
Participant A can experience real elements in field A through a viewer. The
common local field can be rendered such that it is experienced remotely in a
manner that enables remote participants to experience it similarly to the local
persons. As shown in Figure 2A, this is illustrated by Participant A experiencing
element B as virtually present in field A.
Remote persons can insert themselves and/or interact with the virtual
field as rendered to them. For example, Participant A can insert hands,
instruments, etc. into field A and interact with the virtual element(s) B. Viewer
B can view a ‘virtual complement’ to this, with Viewer B’s real elements
interacting with Participant A’s virtual elements.
The common local field can be continuously updated such that the
presence of the remote participants can be rendered in real time. For example,
the remote scene can be the most up-to-date available with the time lag between
the remote capture and the local render kept as low as possible. Conversely, if
there is a need to introduce a timing difference, this can be accomplished as well.
The common local field can be scaled to a size and depth to meaningfully
match the local scene. And the common local field can be configurable, such
that remote elements can be made more or less transparent, removed entirely, or
otherwise altered to suit the needs of the local user.
Each field is captured by a digital camera. The resulting image is
physically distorted from its reality, based upon the physical characteristics of
the camera. A processor, therefore, receives and displays a “physically” distorted
version of the local reality. Likewise, a digital camera also captures the remote
field(s), but the incoming stream is relayed through a transmission device and
across a network. The processor, therefore, receives the remote stream that
contains both physical and transmission-based distortion. The processor must
then apply a series of transformations that removes the physical and
transmission-based distortion from the common local field.
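As an illustrative sketch of this correction step (the patent does not specify an implementation; the calibration values and function name below are assumptions), the physical (lens) distortion of a locally captured frame could be removed with a standard undistortion routine:

import cv2
import numpy as np

# Hypothetical intrinsics and distortion coefficients; in practice these
# values come from calibrating the physical camera.
CAMERA_MATRIX = np.array([[800.0, 0.0, 320.0],
                          [0.0, 800.0, 240.0],
                          [0.0, 0.0, 1.0]])
DIST_COEFFS = np.array([0.10, -0.05, 0.0, 0.0, 0.0])  # k1, k2, p1, p2, k3

def remove_physical_distortion(frame):
    # Undo the lens distortion introduced by the capture device.
    return cv2.undistort(frame, CAMERA_MATRIX, DIST_COEFFS)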
The local participants can experience the virtually present participants in
a manner that enables continuous interaction in the common local field. Figure 2B
illustrates a local expert assisting a remote user. The hands of the local expert
201 are slightly transparent and superimposed into the field that is viewed by the
remote user. The remote user can view the local expert’s hands, the remote
user’s hands and a puzzle located at the remote user’s location. The local expert
is assisting the remote user in assembling a puzzle.
Figure 3 illustrates an exemplary image processing system 300. As shown,
the system 300 can comprise a first display 302 and a second display 304
configured for displaying one or more of an image, a video, a composite
video/image, and a common field of interest, for example. However, it is
understood that any number of displays can be included in the system 300. In
certain aspects, the second display 304 can be disposed remotely from the first
display 302. As an example, each of the first display 302 and the second display
304 can be configured to render the common field of interest thereon. As a
further example, each of the first display 302 and the second display 304 can be
configured to render at least one of the local field and the remote field thereon.
In certain aspects, at least one of the first display 302 and the second display 304
can be a VIP display, as described in further detail herein. However, it is
understood that each of the first display 302 and the second display 304 can be
any type of display including a monoscopic display and a stereoscopic display,
for example. It is understood that any number of any type of display can be used.
A first sensor 306 can be in signal communication with at least the first
display 302 and can be configured for obtaining image data such as virtual
presence data, for example. In certain aspects, the first sensor 306 can be one or
more of a camera, an infrared sensor, a light sensor, a RADAR device, a
SONAR device, a depth scan sensor, and the like. It is understood that the first
sensor 306 can be any device or system capable of capturing/obtaining image
data representative of at least one of a “real” element and a “virtual” element.
A second sensor 308 can be in signal communication with at least the
second display 304 and can be configured for obtaining image data such as
virtual presence data, for example. In certain aspects, the second sensor 308 can
be one or more of a camera, an infrared sensor, a light sensor, a RADAR device,
a SONAR device, a computed tomography device, a magnetic resonance
imaging device, a depth scan sensor, and the like. It is understood that the
second sensor 308 can be any device or system capable of capturing/obtaining
image data representative of at least one of a “real” element and a “virtual”
element. It is further understood that any number of sensors can be used.
A plurality of processors 310, 312 can be in direct or indirect signal
communication with at least one of the first display 302, the second display 304,
the first sensor 306, and the second sensor 308. Each of the processors 310, 312
can be configured to render the image data collected by the sensors 306, 308
onto at least one of the displays 302, 304. It is understood that the processors
310, 312 can be configured to modify the image data and the resultant image for
transmission and display. It is further understood that any number of processors
can be used, including one. In certain aspects, the system 300 comprises the
processors 310, 312 in data communication with each other.
In certain aspects, each of the displays 302, 304 can comprise an
associated one of the processors 310, 312 for rendering images onto the displays
302, 304. Each of the processors 310, 312, or another system comprising a
processor, can communicate with each other through a network connection. For
example, remote sites can connect via the Internet or other network. Tasks can
be divided amongst each of the processors 310, 312. For example, one of the
processors 310, 312 can be configured as a graphics processor or graphics server
and can gather images from one of the sensors 306, 308 and/or a network server,
perform image composition tasks, and drive one or more of the displays 302,
304.
In an aspect, one or more of the processors 310, 312 can be configured to
render an image. As an example, one or more of the processors 310, 312 can be
configured to render a common field of interest that reflects a presence of a
plurality of elements based upon the image data obtained by at least one of the
sensors 306, 308. As a further example, at least one of the elements rendered in
the common field of interest can be a remote element physically located
remotely from another of the elements. The processors 310, 312 can also be
configured to render/output the common field of interest to at least one of the
displays 302, 304. As an example, the processors 310, 312 can render
interaction between a remote user and a local user in the common field of
interest. As a further example, the presence of the remote element can be
rendered in real time to the local user and the presence of a local element can be
rendered in real time to the remote user.
In an aspect, one or more roles can be designated (e.g., defined, selected,
received, generated) for one or more users. As an example, a role can be
designated as an abstraction that triggers a logical execution of one or more
related programs by a processor. Such programs can, as an example, trigger a
modification in the arrangement, organization, and/or presentation of a graphical
user interface as displayed to a user. As another example, such programs can
affect the processing of images and/or video and/or audio by modifying the
processing and/or post-processing of one or more images and/or video streams
and/or audio streams. As a further example, these programs can affect the
rendering of images, video, and/or audio presented to one or more users.
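One way to picture this abstraction is a registry mapping each role designation to the programs it triggers. The following is a minimal sketch; the role names and handler functions are illustrative, not taken from the patent:

def configure_operator_pipeline():
    print("arranging operator software pipeline")

def configure_helper_pipeline():
    print("arranging helper software pipeline")

# Each role designation triggers logical execution of its related programs.
ROLE_PROGRAMS = {
    "operator": [configure_operator_pipeline],
    "helper": [configure_helper_pipeline],
}

def designate_role(role):
    for program in ROLE_PROGRAMS[role]:
        program()

designate_role("helper")  # prints "arranging helper software pipeline"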
In an aspect, processing of images can be implemented via a local
processor prior to transmission to a remote processor. For example, image
compositing can occur at a local processor prior to transmission. As such,
images and/or video received by a remote processor do not require
compositing at the remote processor and can be accepted by a codec. As a
further example, role designation can be implemented as an implicit role
designation that occurs when a system performs image composition versus a
system that merely receives processed images. Software such as emulators,
clients, applications and the like can be used to facilitate the operation of a
particular role designation.
In an aspect, a helper role can comprise the manipulation of data flow
and data presentation. As an example, the helper role can comprise a particular
arrangement of a software pipeline, such as a series of processing elements for
which the output of one processing element is the input for the successive
processing element. As a further example, images captured by a local camera
and processed by a local processor can be merged with images captured by one
or more remote cameras and processed by one or more remote processors in a
manner that is associated with the helper role.
In an aspect, a particular texture processing can be associated with the
helper role. As an example, an image can be mapped to a screen according to a
specified coordinate transformation, wherein inputs are given as parameters to a
fragment or pixel shader. As a further example, one or more programmable
functions associated with the helper role can be used to modify the traits (e.g.,
color, z-depth, alpha value) of one or more pixels of a given image. As another
example, an image can have background subtraction, background removal,
object detection, or similar algorithms applied to it for selective display of a
region of the image.
In an aspect, for the helper role, one or more images captured by a local
camera and processed by a local processor can be rendered with an opacity of
less than one, giving them a degree of transparency. One or more images
captured by a remote camera and processed by a remote processor can be
rendered with an opacity of about one, thereby providing substantially complete
opacity. As an example, one or more images of a remote “scene of interest” (e.g.,
surgery, maintenance site) can be scaled to fit a full display screen. As another
example, one or more images of a remote user can be tiled or arranged on the
display screen (e.g., picture-in-picture). As a further example, one or more
images of a remote user’s environment can be tiled or arranged on the display
screen.
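A minimal compositing sketch consistent with this description (the function and the opacity value are assumptions, not the patent's implementation) alpha-blends a semi-transparent local layer over a fully opaque remote scene:

import numpy as np

def composite_helper_view(local_rgb, remote_rgb, local_opacity=0.4):
    # Local imagery at opacity < 1 blended over remote imagery at opacity ~1.
    local = local_rgb.astype(np.float32)
    remote = remote_rgb.astype(np.float32)
    blended = local_opacity * local + (1.0 - local_opacity) * remote
    return blended.astype(np.uint8)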
In an aspect, an operator role can comprise the manipulation of data flow
and data presentation. As an example, the operator role can comprise a particular
arrangement of a software pipeline, such as a series of processing elements for
which the output of one processing element is the input for the successive
processing element. As a further example, images captured by a local camera
and processed by a local processor can be merged with images captured by one
or more remote cameras and processed by one or more remote processors in a
manner that is associated with the operator role.
In an aspect, the operator role can comprise the manipulation of one or
more input hardware devices. As an example, one or more video capture cards
can be enabled to capture one or more images. As another example, a local
processor can write one or more images through said video capture card to a
circular or ring buffer. As another example, one or more images can be written
to a queue. As a further example, one or more images can be written to a stack.
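For instance, the circular (ring) buffer mentioned above could be sketched as follows; the class and method names are illustrative assumptions:

from collections import deque

class FrameRingBuffer:
    # Fixed-capacity circular buffer: the newest frame overwrites the oldest.
    def __init__(self, capacity=8):
        self._frames = deque(maxlen=capacity)

    def write(self, frame):
        self._frames.append(frame)

    def latest(self):
        return self._frames[-1] if self._frames else None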
In an aspect, a particular texture processing can be associated with the
operator role. As an example, an image can be mapped to a screen according to a
specified coordinate transformation, wherein inputs are given as parameters to a
fragment or pixel shader. As a further example, one or more programmable
functions associated with the operator role can be used to modify the traits (e.g.,
color, z-depth, alpha value) of one or more pixels of a given image. As another
example, an image can have background subtraction, background removal,
object detection, or similar algorithms applied to it for selective display of a
region of the image.
In an aspect, for the operator role, one or more images captured by a
local camera and processed by a local processor can be rendered with an opacity
of about one. As an example, one or more images of a local “scene of interest”
can be rendered with substantially full opacity. As a further example, one or
more images captured by a local camera and processed by a local processor can
be rendered with an opacity of less than one, providing a degree of transparency.
One or more images captured by a remote camera and processed by a remote
processor can be rendered with an opacity of less than one, thereby providing a
degree of transparency. As another example, one or more images of a remote
user can be tiled or arranged on the display screen (e.g., picture-in-picture). As
a further example, one or more images of a remote user’s environment can be
tiled or arranged on the display screen.
In an aspect, a heads up role can comprise the manipulation of data flow
and data presentation. As an example, the heads up role can comprise a
particular arrangement of a software pipeline, such as a series of processing
elements for which the output of one processing element is the input for the
successive processing element. As a further example, images captured by a local
camera and processed by a local processor can be merged with images captured
by one or more remote cameras and processed by one or more remote processors
in a manner that is associated with the heads up role.
In an aspect, the heads up role can comprise processing images captured
by one or more local cameras to facilitate display alongside images captured by
one or more remote cameras and processed by one or more remote processors.
As an example, the local and remote images can be tiled. As another example,
one or more local images can be rendered in a manner utilizing a majority of the
screen, with one or more remote images displayed in a relatively smaller
window. As a further example, one or more remote images can be rendered in a
manner utilizing a majority of the screen, with one or more local images
displayed in a relatively smaller window. Other roles and associated processing
can be used.
In an aspect, a subscriber role can comprise the manipulation of data flow
and data presentation. As an example, the subscriber role can comprise a
particular arrangement of a software pipeline, such as a series of processing
elements for which the output of one processing element is the input for the
successive processing element. As a further example, images captured by one or
more remote cameras and processed by one or more remote processors can be
merged and displayed to the subscriber in a manner that is associated with the
subscriber role.
In an aspect, the subscriber role can comprise processing images captured
by one or more remote cameras and processed by one or more remote
processors. As an example, while designated as the subscriber role, a local
processor can generally receive, but not send, video images. As a further
example, one or more remote images can be rendered in a manner utilizing a
majority of the screen. Other roles and associated processing can be used.
In an aspect, a sender role can comprise the manipulation of data flow
and data presentation. As an example, the sender role can comprise a particular
arrangement of a software pipeline, such as a series of processing elements for
which the output of one processing element is the input for the successive
processing element. As a further example, images captured by a local camera
and processed by a local processor can be sent to one or more remote processors
in a manner that is associated with the sender role.
In an aspect, the sender role can comprise sending images captured by
one or more local cameras and processed by one or more local processors. As an
example, a sender can generally send, but not receive, video images. Other roles
and associated processing can be used.
In an aspect, an administrator role can comprise selectively designating
which of a plurality of devices (e.g., processors) can take an active role in
processing content (e.g., images, audio, etc.) captured by one or more remote
cameras and which of a plurality of devices can take a passive role of receiving
processed content (e.g., images, audio, etc.). As an example, the device
designated in the administrator role can designate one or more devices as
administrators, operators, helpers, heads up, subscribers, or senders. In an aspect,
the images rendered in a common field of interest can be modified to illustrate
one or more of the designated roles of the devices in communication with the
common field of interest. As an example, the operator role can be shown with
near zero transparency, while the helper roles can be shown with varying levels
of transparency to distinguish between one or more helpers. As a further
example, subscribers can be designated as an icon to illustrate that a subscriber is
present, but not transmitting image data. As yet a further example, senders can
be shown with a distinct characteristic (e.g., color, opacity, outline, icon, etc.) so
that viewers of the common field of interest can recognize that a sender is
transmitting image information, but is not receiving and/or viewing the common
field.
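As an illustrative sketch (the style values are assumptions; the patent specifies only the distinctions, not concrete numbers), the per-role rendering described above could be driven by a simple style table:

# How each designated role is depicted in the common field of interest.
ROLE_RENDER_STYLE = {
    "operator": {"opacity": 1.0},           # near zero transparency
    "helper": {"opacity": 0.5},             # varied to distinguish helpers
    "subscriber": {"icon_only": True},      # present, but not transmitting
    "sender": {"outline_color": "orange"},  # distinct characteristic
}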
In an aspect, an active viewer role can comprise selectively manipulating
one or more aspects of a viewed common field. As an example, the device
designated in the active viewer role can adjust transparency/opacity of images
originating from one or more devices in one or more of a heads up,
administrator, operator, helper, or sender role. As another example, the
manipulations made by the device in the active viewer role would generally only
affect the presentation of the common field as viewed locally, and not propagate
to other devices. As a further example, active viewers can manipulate one or
more non-imaging aspects of a workspace containing the common field,
including presentation of meta-data, contextually relevant workspace data or
information, and the like. As another example, active viewers can manipulate
image data originating from outside the common field (e.g., MRI, X-ray,
infrared, and the like).
In an aspect, one or more devices in one or more roles can implement
additional features well known to those skilled in the art. These features include,
but are not limited to, text-based communication, audio recording, and
text-based video annotation.
In an aspect, processing of images can be implemented via a local
processor prior to transmission to a remote processor. For example, image
processing can occur at a local processor prior to transmission. As a further
example, role designation can be implemented as an implicit role designation
that occurs when a system performs image composition versus a system that
merely receives processed images.
Figure 4A illustrates an exemplary process 400 that can be performed with at
least one of the processors 310, 312. Other processors and/or computing devices
can be used to perform the process 400. In step 402, a request for a role change
or role designation can be transmitted/received. In an aspect, a button push on a
graphical user interface initiates an event which facilitates transmission of a
“role change request” from the graphical user interface of a first application to
the graphical user interface of one or more other applications. As an example,
the role change request can comprise a designation or specification of the role a
local user desires the local application to assume. As an example, the role change
request can be a communication comprising the desired role as a string literal.
As another example, the role change request could be a communication
comprising the desired role as an integer mapped to a string in a table of a
database.
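Both encodings mentioned above can be sketched as follows; the message layout and the role table are assumptions for illustration, with a dictionary standing in for the database table:

import json

ROLE_TABLE = {1: "operator", 2: "helper", 3: "heads up"}

def request_as_string_literal(role):
    return json.dumps({"type": "role_change_request", "role": role})

def request_as_integer(role_id):
    return json.dumps({"type": "role_change_request", "role_id": role_id})

def desired_role(message):
    # Resolve either encoding back to a role name.
    request = json.loads(message)
    return request.get("role") or ROLE_TABLE[request["role_id"]]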
In an aspect, triggering the transmission/receipt of a role change request
can be facilitated by one or more logical or physical events. As an example, an
event can be triggered via an input from a user. As further examples, an event
can be triggered via gesture recognition, speech recognition, the triggering of an
accelerometer, a swipe, a pinch, or a touch, or a combination thereof.
In an aspect, triggering the transmission/receipt of a role change request
can be facilitated by one or more processors performing monitoring/active
management of the common field. As an example, a processor could utilize
machine vision and/or image processing techniques to detect image
characteristics that suggest one or more users desire to take on one or more roles.
As another example, a processor could detect the amount of activity (motion) in
the common field, and dynamically allocate roles based on characteristics of the
activity. As a further example, a processor could force role changes at particular
points during a multi-reality session as defined by a preset or predefined
program or role schedule.
In an aspect, the role change request can be transmitted to a local client
program, which can be defined as a computer application or computer program
that communicates with a server to access a service. As an example, the local
client program can transmit the request to a server over a network. The local
client program can transmit the request to a server running on the same
computer, avoiding having to traverse a network. The server can be comprised as
computer hardware or computer software that provides access to a service for a
client program. As a further example, the server can transmit the role change
request to one or more client programs as specified by the request. In an aspect,
each client program can transmit the request to a remote graphical user interface
of a remote application.
In step 404, when the graphical user interface receives the request, one or
more conditional statements (e.g., IF-THEN statements) can be executed to
determine whether to accept the request. In an aspect, a sequence of conditional
statements can be executed by a local processor to determine whether to accept
the change role request.
In an aspect, the local processor can perform a check to determine
whether a role corresponding to the role specified in the change role request can
be located and/or generated. As an example, if no corresponding role is located,
a negative (i.e., FALSE) response can be transmitted to a local client, at step
406. As an example, the local client can transmit the response to a computing
device. The computing device can transmit the response to one or more remote
clients. The remote clients can present the response to corresponding graphical
user interfaces. As a further example, if a corresponding role is located, the role
(and the associated functionality) can be used to control the presentation of
images via one or more displays. In an aspect, an identifier such as a character or
string can be used to identify one or more roles, at step 408. As an example, the
identifier can be used to associate particular processing paths and/or presentation
configurations with one or more role designations. Accordingly, role
designations can be selected, processed, located, manipulated, and the like based
upon the identifier associated therewith.
In an aspect, a processor can perform a validation check to determine
whether the local application is already in the corresponding role. If true, the
processor can initiate the sending of an affirmative response (i.e., TRUE) from
the local graphical user interface of the local application to the remote graphical
user interface of one or more remote applications. If the corresponding role is
not equal to the local application’s current role, the current role of the local
application can be reassigned or designated to the corresponding role. As such,
the arrangement, organization, and/or presentation of a graphical user interface
as displayed to one or more users can be modified. As an example, a software
pipeline associated with the designated role can be configured.
In an aspect, pseudocode for updating a current role presented via an
interface can comprise:
IF (corresponding_role_found)
    role ← corresponding_role
    IF (role = current_role)
        SEND true
    ELSE
        current_role ← role
        SEND true
    ENDIF
ELSE
    SEND false
ENDIF
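A runnable rendering of this pseudocode (a sketch; the function signature and the known_roles collection are assumptions) could be:

def handle_role_change(requested_role, current_role, known_roles):
    # Returns (accepted, new_current_role), mirroring the pseudocode above.
    if requested_role not in known_roles:  # no corresponding role found
        return False, current_role
    if requested_role == current_role:     # already in the requested role
        return True, current_role
    return True, requested_role            # reassign to the corresponding role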
Figure 4B illustrates an exemplary schematic of a multi-reality session. In
an aspect, a first user 412 at a first location 414 having a field of interest 416 can
be in communication with a second user 418 at a second location 420. As an
example, the first user 412 can be in the operator role, while the second user 418
can be in the helper role. As such, the field of interest 416 can be shared between
the first user 412 and the second user 418. As a further example, the field of
interest 416 can be configured to present images in a first configuration
associated with the first user 412 designated as the operator role. Following a
role change, the first user 412 can assume the helper role, while the second user
418 can assume the operator role. As such, the field of interest 416 can be shared
between the first user 412 and the second user 418 and can be updated to present
images in a second configuration based on the second user 418 designated as the
operator role.
In an aspect, processing of images can be implemented via a local
processor prior to transmission to a remote processor. For example, image
compositing can occur at a local processor prior to transmission. As such,
images and/or video received by a remote processor do not require
compositing and can be accepted by a codec. As a further example, role
designation can be implemented as an implicit role designation that occurs when
a system performs image composition versus a system that merely receives
processed images.
Figure 4C illustrates an exemplary rendering of the field of interest 416
configured to illustrate the first user 412 designated in the operator role. In an
aspect, Figure 4C illustrates the field of interest 416 from the perspective of the
first user 412. In another aspect, one or more video capture cards can be enabled
to capture one or more images of a local field of interest (e.g., prosthetic knee).
The images of the field of interest 416 can be rendered with an opacity of
substantially one. Images of a hand 420 of the first user 412 and a hand 422 of
the second user 418 can be rendered with an opacity of less than one, while
images of a face 424 of the second user 418 can be tiled or arranged on the
display with an opacity of substantially one.
Figure 4D illustrates an exemplary rendering of the field of interest 416
configured to illustrate the second user 418 designated in the helper role. In an
aspect, Figure 4D illustrates the field of interest 416 from the perspective of the
second user 418. The images of the field of interest 416 can be rendered with an
opacity of substantially one. Images of the hand 420 of the first user 412 and the
hand 422 of the second user 418 can be rendered with an opacity of less than
one, while images of the face 426 of the first user 412 can be tiled or arranged on
the display with an opacity of substantially one.
In an aspect, a selection of a graphical button rendered on a touch-
sensitive display can be used to initiate a role change request. As an example, the
graphical button push can initiate the transmission of a role change request from
a graphical user interface rendered by a first processor to a graphical user
interface rendered by a second processor. As a further example, the graphical
button can specify a role that the second user desires to assume.
In an aspect, Figure 4E illustrates an exemplary rendering of a graphical
user interface 428 to one or more users. One or more graphical buttons 430 can
allow a user to make a role designation such as "Operator," "Helper," and/or
"Heads Up." A video window 432 can display one or more images. Additional
options may be exposed to the user, which can allow further manipulation of the
video image in a multi-reality session. As an example, a graphical "blend value"
slider 434 can allow a user to determine the relative proportions of real and
virtual images viewed. As another example, a graphical "exposure value" slider
436 can allow a user to modulate the exposure properties of a local or remote
camera. As a further example, a graphical "telestration" menu 438 can allow a
user to perform virtual drawings on top of a video image. A graphical
"preferences" button 440 and graphical "end session" button 442 can allow a
user to modulate other session parameters and end the current session with a
remote user, respectively.
Figure 4F illustrates an exemplary rendering of the field of interest 416
configured to illustrate the second user 418 designated in the operator role. In an
aspect, Figure 4F illustrates the field of interest 416 from the perspective of the
second user 418. In another aspect, one or more video capture cards can be
enabled to capture one or more images of a local field of interest (e.g., prosthetic
knee). The images of the field of interest 416 can be rendered with an opacity of
substantially one. Images of the hand 420 of the first user 412 and the hand 422
of the second user 418 can be rendered with an opacity of less than one, while
images of the face 426 of the first user 412 can be tiled or arranged on the
display with an opacity of substantially one.
Figure 4G illustrates an exemplary rendering of the field of interest 416
configured to illustrate the first user 412 designated in the helper role. In an
aspect, Figure 4G illustrates the field of interest 416 from the perspective of the
first user 412. The images of the field of interest 416 can be rendered with an
opacity of substantially one. Images of the hand 420 of the first user 412 and the
hand 422 of the second user 418 can be rendered with an opacity of less than
one, while images of the face 424 of the second user 418 can be tiled or arranged
on the display with an opacity of substantially one.
In an aspect, a role (and the associated functionality) can be used to
control the presentation of images via one or more displays. In an aspect, an
identifier such as a character or string can be used to identify one or more roles.
As an example, the identifier can be used to associate particular processing paths
and/or presentation configurations with one or more role designations. As such,
the arrangement, organization, and/or presentation of a graphical user interface
as displayed to one or more users can be modified. As an example, a software
pipeline associated with the designated role can be configured.
In an aspect, the methods disclosed herein can be implemented via one or
more system architectures. As shown in Figure 5A, the architecture can include,
but is not limited to, one or more user devices 501a, 501b, 501c, 501d, 501e, a
computing device 502 (e.g., a central server) and communication links 504 (e.g.,
connections) between one or more of the user devices and/or a user device and
the computing device 502. In an aspect, the user devices 501a, 501b, 501c,
501d, 501e can transmit and/or receive one or more images to/from other ones of
the user devices 501a, 501b, 501c, 501d, 501e as part of a multi-reality
environment session.
In an aspect, one or more of the user devices 501a, 501b, 501c, 501d,
501e can capture/process images from a local camera/processor. As an example,
these images could then be merged with one or more images captured/processed
by a remote camera/processor. In some aspects, it is desirable to modify certain
parameters such as latency or bandwidth in order to promote a higher quality
user experience for user devices in specified roles. Thus, the general purpose
architecture illustrated in Figure 5A can be modified in order to achieve these
ends. As an example, the architecture can utilize additional processing paths
(e.g., communication links 504) and/or computing devices (e.g., computing
device 502) to reduce bandwidth requirements for and/or latency experienced by
one or more user devices (e.g., user devices 501a, 501b, 501c, 501d, 501e). As a
further example, the architecture can be modified to offload processing
requirements from one or more user devices.
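A transmission rule of the kind described here and recited in the abstract might carry latency and bandwidth parameters plus an offload fraction; the field names and values below are illustrative assumptions:

from dataclasses import dataclass

@dataclass
class TransmissionRule:
    # Per-role rule governing ingress/egress of information for a device.
    max_latency_ms: float          # latency parameter
    max_bandwidth_kbps: float      # bandwidth parameter
    proxy_offload_fraction: float  # share of image processing offloaded to a proxy server

OPERATOR_RULE = TransmissionRule(50.0, 8000.0, 0.0)
SUBSCRIBER_RULE = TransmissionRule(250.0, 1500.0, 0.75)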
In an aspect, one or more roles can be designated (e.g., defined, selected,
received, generated) for one or more user devices 501a, 501b, 501c, 501d, 501e.
As an example, a role can be designated as an abstraction that triggers a logical
execution of one or more related programs by a processor associated with one or
more user devices 501a, 501b, 501c, 501d, 501e. Such programs can, as an
example, trigger a modification in the arrangement, organization, and/or
presentation of a graphical user interface as displayed to a user. As another
example, such programs can affect the processing of images and/or video by
modifying the processing and/or post-processing of one or more images. As a
further example, these programs can affect the rendering of images or video
presented to one or more users.
In an aspect, processing of images can be implemented locally to one of
the user devices 501a, 501b, 501c, 501d, 501e prior to transmission to a remote
processor (e.g., another user device 501a, 501b, 501c, 501d, 501e or computing
device 502). For example, image compositing can occur at a local processor
prior to transmission. As such, images and/or video received by a remote
processor do not require compositing and can be accepted by a codec. As a
further example, role designation can be implemented as an implicit role
designation that occurs when a system performs image composition versus a
system that merely receives processed images.
In an aspect, a user (e.g., user devices 501a, 501b, 501c, 501d, 501e) can
request to assume an active role such as the operator or helper by transmitting a
role change request. As an example, a button push on a graphical user interface can
initiate an event which facilitates the transmission of a role change request from
the graphical user interface of a first application to a central server (e.g.,
computing device 502). As a further example, the role change request can
comprise a designation or specification of the role a local user desires the local
application to assume. As an example, the role change request can be a
communication comprising the desired role as a string literal. As another
example, the role change request could be a communication comprising the
desired role as an integer mapped to a string in a table of a database.
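By way of a non-limiting illustration, the two encodings just described could be serialized as follows (a minimal sketch in Python; the field names, the make_* helpers, and the ROLE_IDS mapping are illustrative assumptions, not part of the disclosure):

    import json

    # Hypothetical stand-in for the database table mapping integers to roles.
    ROLE_IDS = {1: "operator", 2: "helper", 3: "subscriber", 4: "sender"}

    def make_role_change_request(device_id, desired_role):
        # Desired role carried as a string literal.
        return json.dumps({"type": "role_change_request",
                           "device_id": device_id,
                           "desired_role": desired_role})

    def make_role_change_request_by_id(device_id, role_id):
        # Desired role carried as an integer mapped to a string in a table.
        return json.dumps({"type": "role_change_request",
                           "device_id": device_id,
                           "desired_role_id": role_id})

    # Example: user device 501c requests the helper role.
    request_as_string = make_role_change_request("501c", "helper")
    request_as_integer = make_role_change_request_by_id("501c", 2)  # 2 -> "helper"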
In an aspect, triggering the transmission/receipt of a role change request
can be facilitated by one or more logical or physical events. As an example, an
event can be triggered via an input from a user. As further examples, an event
can be triggered via gesture recognition, speech recognition, the triggering of an
accelerometer, a swipe, a pinch, or a touch, or a combination thereof.
In an aspect, a role change request can be transmitted from a graphical
user interface to a local client program, which can be defined as a computer
application or computer program that communicates with a server to access a
service. As an example, the local client program can transmit the request to a
server over a network. The local client program can also transmit the request to a
server running on the same computer, avoiding having to traverse a network.
The server can comprise computer hardware or computer software that
provides access to a service for a client program. Once the request is received by
the server, the server can perform a role change check to determine whether the
role change request can be accepted. In an aspect, the server can transmit the
results of the role change check to one or more remote client programs. In an
aspect, each remote client program can transmit the result to an associated
graphical user interface of an application.
The figure illustrates an exemplary method of negotiating roles. In step
510, a computing device (e.g., computing device 502) can receive a
role change request. Upon receipt of the request, one or more conditional
statements (e.g., IF-THEN statements) can be executed to determine whether to
accept the request, at step 512. In an aspect, a sequence of conditional statements
can be executed by the computing device to determine whether to accept the
role change request.
In an aspect, the computing device can perform a role change check to
determine whether a role corresponding to the role specified in the role change
request can be located and/or accepted. The computing device can then return
an affirmative or negative response, at step 514. As an example, if no
corresponding role is located, a negative (i.e., FALSE) response can be returned.
As a further example, if a corresponding role is located, the processor can
perform an additional check to determine whether a current user participating in
the session holds the role specified by the role change request. In an aspect, if no
user is located who holds the requested role, an affirmative response can be
returned. In another aspect, if a user is located who holds the requested role,
another check can be executed to determine whether the located user has
specified a “hold” forbidding role change operations. As an example, if no hold
exists, the computing device can return an affirmative (i.e., TRUE) response. As
another example, if a hold does exist on the role, the computing device can
return a negative response.
In an aspect, pseudocode for updating a current role presented via an
interface can comprise:
    IF (corresponding_role_found)
        role ← corresponding_role
        IF (role IN roles)
            IF (current_user_with_role_has_hold)
                RETURN false
            ELSE
                RETURN true
            ENDIF
        ELSE
            RETURN true
        ENDIF
    ELSE
        RETURN false
    ENDIF
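A runnable Python rendering of this pseudocode may read as follows (a sketch only; the argument names known_roles, current_roles, and holds are illustrative and do not appear in the disclosure):

    def role_change_check(requested_role, known_roles, current_roles, holds):
        # known_roles:   roles the system recognizes, e.g. {"operator", "helper"}
        # current_roles: mapping of role name -> device currently holding it
        # holds:         set of roles whose holder has specified a "hold"
        if requested_role not in known_roles:
            return False                    # no corresponding role located
        if requested_role in current_roles:
            # A current user holds the requested role; honor any "hold".
            return requested_role not in holds
        return True                         # role exists and is unoccupied

    # Example: 501c requests "helper", currently held by 501a with no hold.
    accepted = role_change_check("helper",
                                 {"operator", "helper", "subscriber"},
                                 {"helper": "501a", "operator": "501b"},
                                 holds=set())
    assert accepted is True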
In an aspect, the value returned (e.g., an affirmative or negative response)
in step 514 can trigger a series of logical events executed by one or more
devices. As an example, if a negative response is returned, the computing device
can send a “denied” message to a remote client (e.g., the user device 501a, 501b,
501c, 501d, 501e requesting the role change). A remote client (e.g., user device
501a, 501b, 501c, 501d, 501e) can then transmit this message to an associated
graphical user interface of an application. In an aspect, the “denied” message can
instruct a graphical user interface to “do nothing” and thus maintain its current
state.
The figure illustrates an exemplary negotiation of a role change request. In
an aspect, user device 501a can be designated as a helper role and a user device
501b can be designated as an operator role. User devices 501c, 501d, 501e can
be designated as a subscriber role. Communication links 504a, 504b can be
established with a computing device 502 allowing, for example, unidirectional
transmission (e.g., operating under a unidirectional transmission rule governing
the transmission of one or more images, a video stream, or the like) in the
direction of the computing device 502. Communication links 504c, 504d, 504e
can be established with the computing device 502 allowing, for example,
unidirectional transmission in the direction of user devices 501c, 501d, 501e,
respectively. Communication link 504f can be established between the user
device 501a and the user device 501b allowing, for example, multi-directional
transmission of data such as one or more images in a video stream (e.g.,
operating under a multi-directional transmission rule governing the transmission
of one or more images, a video stream, or the like).
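The link topology just described could be captured in a simple data structure (a sketch; the Link record and the direction strings are assumptions made for illustration):

    from dataclasses import dataclass

    @dataclass
    class Link:
        name: str
        endpoints: tuple   # (source, destination); order matters when unidirectional
        rule: str          # "unidirectional" or "multi-directional"

    links = [
        Link("504a", ("501a", "502"), "unidirectional"),    # helper -> computing device
        Link("504b", ("501b", "502"), "unidirectional"),    # operator -> computing device
        Link("504c", ("502", "501c"), "unidirectional"),    # computing device -> subscriber
        Link("504d", ("502", "501d"), "unidirectional"),
        Link("504e", ("502", "501e"), "unidirectional"),
        Link("504f", ("501a", "501b"), "multi-directional"),  # helper <-> operator
    ]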
In an aspect, user device 501c can initiate a role change request. If the
requested role (e.g., helper) is identified as already being taken by a second user
device such as user device 501a in the session, a check can be executed by a
computing device 502 to determine whether the graphical user interface
associated with the user device 501a has specified a hold forbidding role change
operations. If no hold exists, a change to subscriber message can be sent by the
computing device 502 to a remote client associated with the user device 501a.
As a further example, the client can then transmit this message to an associated
local graphical user interface. The user device 501a can then be designated as a
subscriber role. As an example, the computing device 502 can modify the
communication link 504a and establish a modified communication link 504a'
allowing, for example, a unidirectional transmission of data in the direction of
user device 501a. As an example, the graphical user interface associated with the
user device 501a can then send a message to a local application instructing it to
terminate the sending of one or more images in a specified video stream to a
remote application associated with user device 501b designated as, e.g., operator.
In an aspect, if an affirmative response is returned in step 514,
computing device 502 can send an accept message to the associated client of
user device 501c requesting a role change to a specified role (e.g., helper). The
associated client of user device 501c can then transmit this message to a local
graphical user interface. The user device 501c can then be designated as the
requested role (e.g., helper). As an example, the computing device 502 can
modify communication link 504c and establish a modified communication link
504c' allowing, as an example, unidirectional transmission of data in the
direction of computing device 502. The local graphical user interface associated
with the user device 501c can then send a start send message to a local
application, which can then initiate the sending of one or more images in a
specified video stream to the computing device 502. Communication link 504f
can be removed and communication link 504g can be established between the
user device 501c and the user device 501b allowing, for example, multi-
directional transmission of data such as one or more images in a video stream.
The local graphical user interface associated with the user device 501c can then
send a start send message to a local application, which can then initiate the
sending of one or more images in a specified video stream to user device 501b
designated as, e.g., operator. As an example, the user device 501c, user device
501b, and computing device 502 can then optionally perform a spatial image
registration process.
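The server-side sequence of messages in the outcomes above could be sketched as follows (illustrative only; the send and modify_link operations stand in for whatever transport the system uses, and the message names mirror the quoted messages):

    def grant_role_change(server, requester, current_holder, role):
        # Demote the current holder, if any, before promoting the requester.
        if current_holder is not None:
            server.send(current_holder, {"type": "change_to_subscriber"})
            server.send(current_holder, {"type": "stop_send"})
        server.send(requester, {"type": "accepted", "role": role})
        # Re-point the requester's link toward the server and start its stream.
        server.modify_link(requester, direction="toward_server")
        server.send(requester, {"type": "start_send"})

    def deny_role_change(server, requester):
        # A "denied" message instructs the GUI to do nothing.
        server.send(requester, {"type": "denied"})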
In another aspect, user device 501c can initiate a role change request. If
the requested role (e.g., helper) is identified as not already being taken by
another user device in the session, an affirmative value can be returned by the
computing device 502. As an example, the computing device 502 can send an
accepted message to a client associated with the user device 501c requesting a
role change to a specified role. As a further example, the client can then transmit
this message to an associated local graphical user interface. The user device 501c
is now designated as being in the requested role (e.g., helper). The computing
device 502 can establish a communication link allowing, as an example,
unidirectional transmission, in the direction of the computing device 502, of, for
example, one or more images in a video stream from the user device 501c. As an
example, the graphical user interface associated with the user device 501c can
then send a “start send” message to an associated local application, instructing it
to initiate the sending of one or more images in a video stream to computing
device 502. As a further example, a communication link can be established
between the user device 501c and the other of the user devices designated as, e.g.,
operator or helper (if present) allowing, for example, multi-directional
transmission of data such as one or more images in a video stream between the
user device 501c and the other of the user devices designated as, e.g., operator or
helper (if present). The local graphical user interface associated with the user
device 501c can then send a “start send” message to an associated local
application, which can then initiate the sending of one or more images in a
specified video stream to the other of the user devices designated as, e.g., operator
or helper (if present). As an example, the user device 501c, computing device
502, and one or more other user devices in, for example, the other of the helper or
operator role, can then optionally perform a spatial image registration process.
The figure illustrates an exemplary negotiation of a role change request. In
an aspect, an intermediary device or proxy server 506 can perform processing,
thereby offloading processing requirements from computing device 502 and
decreasing latency experienced by one or more users. As an example, the
presence of a proxy server 506 can accommodate one or more users in, e.g., a
helper role. In an exemplary case, as shown in the figure, user devices 501d,
501e can be designated as being in a helper role and user device 501b can be
designated as being in an operator role. User devices 501a, 501c can be
designated as being in a subscriber role. The proxy server 506 can perform some
degree of image processing, combining one or more elements from one or more
images received from user devices 501d, 501e. Communication links 504d,
504b can be established with the computing device 502, allowing, as an
example, unidirectional transmission, in the direction of the computing device
502, of, for example, one or more images in a video stream.
Communication links 504a, 504c can be established with the computing device
502 allowing, as an example, unidirectional transmission, in the direction of user
devices 501a, 501c, respectively, of, for example, one or more images in a
video stream. Communication link 504c can be established between the user
device 501b and the proxy server 506 allowing, as an example, multi-
directional transmission of, for example, one or more images in a video stream.
Communication links 504f, 504g can be established with a proxy server 506,
allowing, as an example, multi-directional transmission of, for example, one
or more images in a video stream.
In another aspect, user device 501c can initiate a role change request. If
the requested role (e.g., helper) is identified as already being taken by user
device 501d in the session, a check can be executed by a computing device 502
processor to determine whether the graphical user interface associated with the
user device 501d has specified a “hold” forbidding role change operations. If no
hold exists, a “change to subscriber” message can be sent by the computing
device 502 to a remote client associated with the user device 501d. As a further
example, the client can then transmit this message to an associated local
graphical user interface. The user device 501d is now designated as being in the
subscriber role. As an example, the computing device 502 can establish
communication link 504c with the user device 501d allowing, as an example,
unidirectional transmission, in the direction of user device 501d, of, for example,
one or more images in a video stream. As another example, the graphical user
interface associated with the user device 501d can then send a message to the
local application associated with the second user to terminate the sending of one
or more video images to an application associated with a proxy server 506. As an
example, the computing device 502 can send a “stop send” message to the proxy
server 506 instructing it to terminate the sending of one or more video images to
an application associated with the user device 501d.
In an aspect, a computing device 502 can send an “accepted” message to
the associated client of user device 501c requesting a role change to a specified
role (e.g., helper). The associated client of user device 501c can then transmit
this message to a local graphical user interface. The user device 501c is now
designated as being in the requested role (e.g., helper). The computing device
502 can then send an “establish connection” message to the proxy server 506
instructing it to establish a communication link 504f with the user device 501c
allowing, as an example, multi-directional transmission of, for example, one
or more images in a video stream. As an example, the computing device 502 can
send a “start send” message to the user device 501c, instructing it to initiate the
sending of one or more images in a video stream to the proxy server 506. The
proxy server 506 can perform some degree of image processing, combining one
or more elements from one or more images received from user devices 501c, 501e. As
another example, the computing device 502 can send a “start send” message to
the proxy server 506, instructing it to initiate the sending of one or more images
in a video stream to the user device 501c newly designated as helper. The proxy
server 506 can initiate the sending of one or more images in a video stream to
one or more of the user devices 501e, 501c. As an example, the proxy server 506,
computing device 502, and user device 501b designated as, e.g., operator can then
optionally perform a spatial image registration process.
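As a sketch of the “some degree of image processing” performed by the proxy server, a plain alpha blend of same-sized frames is assumed below; the disclosure does not fix a particular combining operation:

    import numpy as np

    def combine_helper_frames(frames, weights=None):
        # Blend RGB frames from one or more helper devices into one frame.
        frames = [f.astype(np.float32) for f in frames]
        if weights is None:
            weights = [1.0 / len(frames)] * len(frames)
        blended = sum(w * f for w, f in zip(weights, frames))
        return np.clip(blended, 0, 255).astype(np.uint8)

    # Example: equal-weight blend of frames from user devices 501c and 501e.
    frame_c = np.zeros((480, 640, 3), dtype=np.uint8)
    frame_e = np.full((480, 640, 3), 255, dtype=np.uint8)
    combined = combine_helper_frames([frame_c, frame_e])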
The figure illustrates an exemplary negotiation of a role change request.
User device 501a can be designated as being in a helper role and user device
501c can be designated as being in an operator role. Communication link 504e
can be established between user device 501a designated as helper and user
device 501c designated as operator, allowing, as an example, multi-directional
transmission of, for example, one or more images in a video stream. User
devices 501d, 501e, 501b can be designated as being in a sender role.
Communication links 504d, 504b, 504a can be established with user device
501a designated in a helper role, allowing, as an example, unidirectional
transmission, in the direction of user device 501a, of, for example, one or more
images in a video stream. Communication links 504i, 504h, 504e, 504g, 504f can be
established between user devices 501e, 501d, 501a, 501c, 501b, respectively,
and a computing device 502, allowing multidirectional message-based
communication.
User device 501d can initiate a role change request. If the requested role
(e.g., operator) is identified as already being taken by user device 501c in the
session, a check can be executed by a computing device 502 processor to
determine whether the graphical user interface associated with the user device
501c has specified a “hold” forbidding role change operations. If no hold exists,
a “change to sender” message can be sent by the computing device 502 to a
remote client associated with the user device 501c. As a further example, the
client can then transmit this message to an associated local graphical user
interface. The user device 501c is now designated as being in the sender role. As
an example, the computing device 502 can send a “modify connection” message
to a remote client associated with user device 501a. The client can then transmit
this message to an associated local graphical user interface. The graphical user
interface associated with the user device 501a can then modify the
communication link 504e to establish a modified communication link 504e'
allowing, as an example, unidirectional transmission, in the direction of the user
device 501a, of, for example, one or more images in a video stream from
the user device 501c. As a further example, the graphical user interface
associated with the user device 501a can modify the communication link 504d to
establish modified communication link 504d' allowing, as an example, the
multi-directional transmission of, for example, one or more images in a video
stream between the user devices 501a, 501d.
In an aspect, the graphical user interface associated with the user device
501a can send a “stop send” message to an associated local application,
instructing it to terminate the sending of one or more images in a video stream to
the user device 501c. The computing device 502 can then send an “accepted”
message to the remote client associated with the user device 501d, informing the
user that the role change request to a specified role (e.g., operator) has been
accepted. The remote client can then transmit this message to an associated local
graphical user interface. The user device 501d can now be designated as being in
the operator role. Concurrently, the graphical user interface associated with the
user device 501a can send a “start send” message to an associated local
application, instructing it to initiate the sending of one or more images in a video
stream to user device 501d. As an example, the user device 501d and user device
501a can optionally perform a spatial image registration process.
The figure illustrates an exemplary spatial image registration process. In
step 602, a computing device can generate a registration image consisting of a
known registration pattern. As an example, this pattern can comprise dots,
squares, or other shapes. In step 604, the computing device can write the locally
generated registration image to a display. In step 606, the computing device can
read the registration image back from the display, and compute a first
mathematical transformation, at step 608, between the image generated at step 602
and the image as read back at step 606. As an example, the computing device
can receive a registration image from the application associated with the user
designated as helper. The computing device can write the registration image to a
display and subsequently read it back. The computing device can then compute a
second transformation between the read-back image and the image as originally
received from the application of the helper. As a further example, the computing
device can additionally receive a registration image from the application of the
user designated as operator. The computing device can write the registration image
to a display and subsequently read it back. The computing device can then
compute a third transformation between the read-back image and the image as
originally received from the application of the operator. One or more of the first,
second, and third transformations can then be utilized by the computing device
to bring one or more images of the images generated or received from one or
more of the local processor, user designated as “operator”, and user designated
as “helper” into the same 2-dimensional (2D) space, allowing proper spatial
relationships to be maintained.
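A minimal sketch of estimating and applying such a 2D transformation is given below; a least-squares affine fit over point correspondences between the generated pattern and the pattern as read back is assumed, standing in for whatever transformation model the system actually uses:

    import numpy as np

    def estimate_affine(src, dst):
        # Least-squares 2D affine transform mapping src points onto dst points.
        # src, dst: (N, 2) arrays of corresponding registration-pattern dots.
        n = src.shape[0]
        A = np.hstack([src, np.ones((n, 1))])        # rows of [x, y, 1]
        M, *_ = np.linalg.lstsq(A, dst, rcond=None)  # (3, 2) solution
        return M

    def apply_affine(M, pts):
        n = pts.shape[0]
        return np.hstack([pts, np.ones((n, 1))]) @ M

    # First transformation: locally generated pattern vs. pattern read back.
    generated = np.array([[0, 0], [100, 0], [0, 100], [100, 100]], float)
    read_back = generated * 0.98 + np.array([2.0, -1.5])   # toy displacement
    M1 = estimate_affine(read_back, generated)
    # The second and third transformations (helper and operator images) would
    # be estimated the same way; applying the fits brings all images into the
    # same 2D space so spatial relationships are maintained.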
The figure illustrates an exemplary role negotiation process. In step 612, a
common field of interest can be rendered that reflects a presence of a plurality of
elements. In an aspect, at least one of the elements is a remote element located
remotely from another of the elements. In step 614, one or more role
designations can be received. In an aspect, each role designation can be
associated with one of a plurality of devices. As an example, at least one of the
plurality of devices is a remote device located remotely from another of the
plurality of devices. In step 616, the common field of interest can be updated
based upon the received one or more role designations. Each of the one or more
role designations can define an interactive functionality associated with the
respective device of the plurality of devices. Interactive functionality can
comprise one or more of sending and receiving image data. One or more of the
role designations can define a different interactive functionality compared to
another of the role designations.
The figure illustrates an exemplary role negotiation process. In step 622, a
common field of interest can be rendered that reflects a presence of a plurality of
elements. In an aspect, at least one of the elements is a remote element located
remotely from another of the elements. In step 624, one or more role
designations can be received. In an aspect, each role designation can be
associated with one of a plurality of devices. As an example, at least one of the
plurality of devices is a remote device located remotely from another of the
plurality of devices. In step 626, one or more of an ingress and egress of
information (e.g., image information, audio information, etc.) to/from one or
more of the plurality of devices can be managed based upon one or more of the
plurality of role designations. In an aspect, each of the role designations can
define a transmission rule for managing ingress and egress of image information
associated with the respective device of the plurality of devices.
In an aspect, the operator role can define (e.g., be associated with, cause
to be invoked, implement, etc.) a transmission rule comprising one or more
aspects for managing ingress and egress of image and audio information
associated with a respective device of a plurality of devices. A device in the
operator role can perform peer-to-peer exchange of image/audio information
with a user device in the helper role or a proxy server combining images/audio
from one or more helper devices. Accordingly, latency can be minimized for the
device in the operator role, as transmission through a third-party central server
can be avoided. In an aspect, a device assuming the operator role can be utilized
by a user functioning in an active role such as a task performer. As an example, a
task performer could be performing a surgery, repairing advanced machinery,
inspecting an automobile, operating a complex device, or similar. As such, the
task performer desires low video/audio latency. High levels of latency
can lead to imprecise task completion and frustration on the part of the
user. In another aspect, a device in the operator role may additionally send its
locally captured video/audio to a computing device to allow for processing and
dispersion to one or more devices designated in a subscriber role.
In an aspect, the helper role can define (e.g., be associated with, cause to
be invoked, implement, etc.) a transmission rule comprising one or more aspects
for managing ingress and egress of image and audio information associated with
a respective device of a plurality of devices. A device in the helper role, or a
proxy server combining images/audio from one or more devices designated in
the helper role, can perform peer-to-peer exchange of image/audio information
with a user device in the operator role. A device assuming the helper role can
function as a device for a remote expert, for example, a knowledgeable or highly
trained individual assisting a less skilled or knowledgeable individual. In certain
instances, increased latency may be tolerable for the remote expert user, as he is
generally not attempting to physically complete a task. If there is more than one
device in the helper role, a proxy server may be used to perform combined
processing and send/receive. In doing this, infrastructure/system bandwidth
requirements are reduced and processing is offloaded from the helper devices. A
device in the helper role or a proxy server combining images/audio from one or
more helpers may additionally send images/audio to a computing device for
processing and dispersion to one or more devices designated as subscriber.
The subscriber role can define a transmission rule comprising one or
more aspects for managing ingress and egress of image and audio information
associated with a respective device of a plurality of devices. A device in the
subscriber role can receive image/audio information from a computing device
performing some degree of image processing. In general, a user of a device in
the subscriber role passively watches, but does not actively participate in, the
completion of a task. Thus, increased latency is generally tolerable.

The sender role can define a transmission rule comprising one or more
aspects for managing ingress and egress of image and audio information
associated with a respective device of a plurality of devices. A device in the
sender role will generally send image/audio information to a device in the helper
role performing some degree of image processing. In general, a user of a device
in the sender role passively sends image/audio information.
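For orientation, the four roles and the transmission behavior just described can be summarized in a rule table (a sketch only; the field names are illustrative and not part of the disclosure):

    # Illustrative summary of the per-role transmission rules described above.
    TRANSMISSION_RULES = {
        "operator":   {"exchange": "peer-to-peer with helper or proxy",
                       "also_sends_to_server": True,    # for subscriber dispersion
                       "latency_tolerance": "low"},
        "helper":     {"exchange": "peer-to-peer with operator",
                       "also_sends_to_server": True,
                       "latency_tolerance": "moderate"},
        "subscriber": {"exchange": "receive-only from computing device",
                       "also_sends_to_server": False,
                       "latency_tolerance": "high"},
        "sender":     {"exchange": "send-only to helper or proxy",
                       "also_sends_to_server": False,
                       "latency_tolerance": "high"},
    }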
In an aspect, a transmission rule can relate to managing ingress and
egress of image and audio information associated with a respective device of a
plurality of devices. As an example, the transmission rule can define bandwidth
limits for one or more devices. In an aspect, when network traffic to/from a
device reaches a bandwidth limit, the transmission rule can define an operational
protocol to reduce the traffic load associated with the device. As an example, an
operational protocol associated with a transmission rule can comprise
transmitting a message, document, or procedure-based communication rather
than the content itself.
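Such an operational protocol could be sketched as a simple fallback policy (the threshold values and the choose_payload helper are assumptions made for illustration):

    def choose_payload(traffic_bps, bandwidth_limit_bps, content, notice):
        # At or above the bandwidth limit, send a lightweight message,
        # document, or procedure-based communication instead of the content.
        if traffic_bps >= bandwidth_limit_bps:
            return notice       # e.g. {"type": "view_texture", "opacity": 0.5}
        return content          # e.g. the full image/video payload

    payload = choose_payload(traffic_bps=9_500_000,
                             bandwidth_limit_bps=8_000_000,
                             content=b"...raw frame bytes...",
                             notice={"type": "role_update", "device": "501c"})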
In an aspect, transmission of the message, document, or procedure-based
communication can be initiated by one or more devices to one or more other
devices to indicate/effect changes to roles or the common field. As an example,
updates regarding changes in the role of one or more devices can be transmitted
to one or more other devices. As another example, a device in an administrator
role can propagate a message to one or more other devices with an instruction to
view a particular image or texture with a specified transparency/opacity. As a
further example, updates regarding the in-session presence of one or more other
devices can be transmitted to one or more other devices.
The figure illustrates an exemplary virtual presence system. One such
system can be used by each remote participant that is to join the same session.
Each system can communicate with the others through a network connection.
For example, remote sites can connect via the Internet. Tasks can be divided
amongst a plurality of computers in each system. For example, one computer (a
graphics server) can gather images from local cameras and a network server,
perform the stereo image composition tasks, and drive a local stereoscopic
display system. As a further example, the processor(s) 310 of system 300 can be
embodied by the graphics server.
The figure illustrates exemplary processes that can be performed with the
graphics server. Images can be gathered into local data structures (frame rings).
Local images can be gathered from a plurality of cameras, for example two
cameras. Remote images can be provided by the network server via a high-
speed remote direct memory access (RDMA) connection, for example. These
images can be combined so that the remote user and the local user can be seen in
the same scene (as in the figure). This composite result can be transmitted to a
local stereoscopic display system. A second computer can act as the network
server, which can perform network encoding/decoding tasks as well as depth
map generation, for example.
The figure illustrates exemplary processes that can be performed with the
network server. Local images gathered from the graphics server via the RDMA
connection can be analyzed and mapped with depth information, encoded for
efficient network transmission, and sent to an external network connection to be
received by a corresponding network server at the remote site. Simultaneously,
encoded images and depth maps can be received from the remote site, decoded,
and provided to the local graphics server via the RDMA connection.
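The back-and-forth just described could be sketched as two concurrent loops (queues stand in for the RDMA and external network connections, and the encode, decode, and depth-map stages are placeholders for the actual codec):

    import queue
    import threading

    rdma_in = queue.Queue()    # local frames arriving from the graphics server
    net_out = queue.Queue()    # encoded frames bound for the remote site
    net_in = queue.Queue()     # encoded frames arriving from the remote site
    rdma_back = queue.Queue()  # decoded remote frames for the graphics server

    def encode(frame, depth):  # placeholder codec stage
        return ("encoded", frame, depth)

    def decode(packet):        # placeholder decode stage
        return packet[1]

    def depth_map(frame):      # placeholder depth-map generation
        return "depth"

    def outbound_loop():
        while True:
            frame = rdma_in.get()
            net_out.put(encode(frame, depth_map(frame)))

    def inbound_loop():
        while True:
            rdma_back.put(decode(net_in.get()))

    threading.Thread(target=outbound_loop, daemon=True).start()
    threading.Thread(target=inbound_loop, daemon=True).start()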
The system can be user-controlled by a control terminal connected to the
network server; the user can then access and control the graphics server via the
dedicated network connection to the network server.
Parameters of virtual interactive presence can be configured depending
on the system used. Configurable parameters include, but are not limited to, size
of virtual elements, presence of virtual elements (opaque, translucent, etc.), time
of virtual presence (time can be configured to be delayed, slowed, increased,
etc.), superimposition of elements such that any combination of virtual and real
can be superimposed and/or 'fitted' over one another, and the like.
The figures illustrate a side view of an exemplary VIP display and users' views
of exemplary VIP displays.
As used herein, a “local” field of interest can refer to a local physical
field and local user, thus making every other field remote. Each field can be
local to its local physical user, but remote to other users. The composite of the
fields can be a common field of interest. This is distinct from common “virtual
worlds” in that there can be components of “real” within the local rendering of
the common field of interest, and interactions can be between actual video (and
other) renderings of physical objects and not just graphic avatars representing
users and objects. The methods and systems provided allow for virtual
interactive presence to modify/optimize a physical domain by the interplay of
real and virtual.
In an aspect, illustrated in the figure, provided are methods for virtual
interactive presence comprising rendering a common field of interest that
reflects the physical presence of a remote user and a local user at 1101, rendering
interaction between the remote user and the local user in the common field of
interest at 1102, and continuously updating the common field of interest such
that the presence of the remote user is rendered in real time to the local user and
the presence of the local user is rendered in real time to the remote user at 1103.
The common field of interest can be rendered such that the remote user
experiences the common field of interest similarly to the local user. The local
user can experience the remote user's physical presence in a manner that enables
continuous interaction in the common field of interest with the remote user. The
methods can further comprise rendering the physical presence of a local object in
the common field and rendering interaction between the local user and the local
object in the common field. The methods can further comprise rendering the
physical presence of a local object in the common field of interest and rendering
interaction between the remote user and the local object in the common field of
interest.
In another aspect, illustrated in the figure, provided are methods for virtual
interactive presence comprising rendering a local field of interest that reflects the
physical presence of a local object, a volumetric image of the local object, and a
local user at 1201, rendering interaction between the local object, the volumetric
image, and the local user in the local field of interest at 1202, and continuously
updating the local field of interest such that the presence of the local object and
the volumetric image of the local object is rendered in real time to the local user
at 1203.
The local object can be, for example, a patient, and the volumetric image
of the local object can be, for example, a medical image of a part of the patient.
However, the local object can be any object of interest, and the image of the local
object can be any accurate rendering of that object. For example, the local object
could be an automobile engine and the image a 3D graphic of the engine, etc.
The medical image can be, for example, one of an x-ray image, an MRI
image, or a CT image. The methods can further comprise superimposing, by the
local user, the volumetric image onto the local object. The superimposition can
be performed automatically by a computer.
The methods can further comprise adjusting, by the local user, a property
of the volumetric image. The property can be one or more of transparency,
spatial location, and scale.
The methods can further comprise rendering a local tool in the local field
of interest. The methods can further comprise rendering the local tool in
accurate spatial relation to the rendering of the local object. The tool can be any
type of tool, for example, a surgical tool.
In another aspect, provided are systems for virtual presence, comprising a
virtual presence display, configured for displaying a common field of interest, a
local sensor, configured for obtaining local virtual presence data, a network
interface, configured for transmitting local virtual presence data and receiving
remote virtual presence data, and a processor, coupled to the virtual presence
display, the local sensor, and the network interface, wherein the processor is
configured to perform steps comprising, rendering a common field of interest
that reflects the physical presence of a remote user and a local user based on the
local virtual presence data and the remote virtual presence data, rendering
interaction between the remote user and the local user in the common field of
interest, continuously updating the common field of interest such that the
presence of the remote user is rendered in real time to the local user and the
presence of the local user is rendered in real time to the remote user, and
outputting the common field of interest to the virtual presence display.
The virtual presence display can be one or more of a stereoscopic
display, a monoscopic display (such as a CRT, LCD, etc.), and the like. The
sensor can be one or more of a camera, an infrared sensor, a depth scan sensor,
and the like. The common field of interest can be rendered such that the remote
user experiences the common field of interest similarly to the local user. The
local user can experience the remote user's physical presence in a manner that
enables continuous interaction in the common field of interest with the remote
user.
The processor can be further configured to perform steps comprising
rendering the physical presence of a local object in the common field of interest
and rendering interaction between the local user and the local object in the
common field of interest.

The processor can be further configured to perform steps comprising
rendering the physical presence of a local object in the common field of interest
and rendering interaction between the remote user and the local object in the
common field of interest.
Further provided are systems for virtual presence, comprising a virtual
presence display, configured for displaying a local field of interest, a local
sensor, configured for obtaining local virtual presence data, and a processor, coupled
to the virtual presence display and the local sensor, wherein the processor is
configured to perform steps comprising, rendering a local field of interest that
reflects the physical presence of a local object and a local user based on the local
virtual presence data and a volumetric image of the local object, rendering
interaction between the local object, the volumetric image, and the local user in
the local field of interest, continuously updating the local field of interest such
that the presence of the local object and the volumetric image of the local object
is rendered in real time to the local user, and outputting the local field of interest
to the virtual presence display.
The virtual presence display can be one or more of a stereoscopic
display, a monoscopic display (such as a CRT, LCD, etc.), and the like. The
sensor can be one or more of a camera, an infrared sensor, a depth scan sensor,
and the like.
The local object can be, for example, a patient, and the volumetric image
of the local object can be, for example, a medical image of a part of the patient.
The medical image can be, for example, one of an x-ray image, an MRI image,
or a CT image. However, the local object can be any object of interest, and the
image of the local object can be any accurate rendering of that object. For
example, the local object could be an automobile engine and the image a 3D
graphic of the engine, etc.
The processor can be further configured to perform steps comprising
superimposing, by the local user, the volumetric image onto the local object.
The processor can be further configured to perform steps comprising adjusting,
by the local user, a property of the volumetric image. The property can be one
or more of transparency, spatial location, and scale.
The processor can be further configured to perform steps comprising
rendering a local tool in the local field of interest. The processor can be further
configured to perform steps comprising rendering the local tool in accurate
spatial relation to the rendered local object.
The disclosed methods and systems can have broad applications, for
example, surgery, gaming, mechanics, munitions, battlefield presence,
instructional efforts (training), and/or any other situation where interaction is part
of the scenario.
Also disclosed are methods and systems that enable a remote expert to be
virtually present within a local surgical field. Virtual interactive presence can be
used to enable two surgeons remote from each other to interactively perform a
surgical procedure. The methods and systems enable two or more operators to be
virtually present, and interactive, within the same real operative field, thus
supporting remote assistance and exporting surgical expertise.
The methods and systems can also be used to superimpose imaging data
of the operative anatomy onto the anatomy itself for guidance and orientation
(augmented reality). The methods and systems can be used for training of
students. The methods and systems augment and enhance the field of robotics
by virtually bringing an expert into the robotic field to guide the robot operator.
The methods and systems enhance endoscopic procedures by inserting
the expert's hands directly into the endoscopic field for guidance. The methods
and systems expand remote surgery by providing the assistance of a remote
expert to an actual local surgeon, whose basic skills can handle emergencies, and
who will learn from the virtual interaction. The methods and systems can be
used at trauma sites and other medical environments. The methods and systems
can be used to provide remote assistance in other areas such as engineering,
construction, architecture, and the like. The methods and systems disclosed can
be used to transmit expertise to a remote 'site of need', merge contemporary
imaging directly into the surgical field, and train surgical residents.
An exemplary remote surgical assistance system for transmitting surgical
maneuvers of a local expert to a remote surgeon for the purpose of
guiding/assisting the remote surgeon is illustrated in the figure. The remote
surgical field can be viewed by the remote surgeon with a binocular video
system. The video system can show the field with his hands and instruments
performing the procedure. The viewing system can be referred to as a surgical
videoscope.
The binocular video rendering of the remote field can be transmitted to
the local expert, who can view the (now virtual) stereoscopic rendering of the
procedure through a second surgical videoscope system. The local expert can
insert his hands into the virtual field, thus seeing his real hands within the virtual
field.
The video image of the local expert's hands can be transmitted back to
the remote surgeon's surgical videoscope system superimposed into the real
field. The remote surgeon can then see the expert's virtual hands within his
surgical field in a spatially/anatomically relevant context. With this system, the
local expert can use his hands to show the remote surgeon how to perform the
case.
Exemplary elements of the system can comprise a remote station where
the remote surgeon can perform the operative procedure, and a remote surgical
videoscope system comprised of, for example, a fixed stereoscopic videoscope
that may resemble a mounted microscope. This apparatus can be used by the
remote surgeon to view the operative field. Any other type of suitable VIP
display can be used. The system can project the binocular video image to a
similar local surgical videoscope at a local station. The local surgical
videoscope can receive the binocular video image of the remote procedure and
allow the local expert to view it. The local videoscope can view the local
surgeon's hands as they move within the virtual remote field as viewed through
the local videoscope. The local videoscope can then transmit the local expert's
hands back to the remote videoscope so that the remote surgeon can see the
expert's virtual hands within the real field.
With this system, the local expert can show the remote surgeon the
appropriate maneuvers that result in successful completion of the case. The
remote surgeon can have a basic skill set to carry out the new procedure.
Therefore, the local expert can simply demonstrate to the remote surgeon new
ways to apply the skill set. This system does not have to supplant the remote
surgeon, but can be used to enhance his/her capability. The remote surgeon can
be on hand to rapidly deal with any emergencies. Time delay is minimized
because the remote surgeon can use his/her own hands to perform the task,
eliminating the need for the local expert to manipulate remote robotic
apparatuses.
Also disclosed are methods and systems for merging contemporary
medical imaging onto an operative field. A volume image can be obtained of the
operative field, for example, a volume MRI of the head, prior to the surgical
procedure. The image data can be reconstructed into a three dimensional
rendering of the anatomy. This rendering can be transmitted to the surgical
videoscope that will be used to view the operative field. Through the
videoscope, the surgeon can view this 3D rendering in a translucent manner
superimposed onto the surgical field. In this case, the surgeon would see a
rendered head superimposed on the real head. Using software tools in the
surgical videoscope interface, the surgeon can rotate and scale the rendered
image until it “fits” the real head. The videoscope system can allow the surgeon
to differentially fade the rendered head and real head so that the surgeon can
“look into” the real head and plan the surgery.
Exemplary elements of the system can comprise a surgical videoscope
viewing system through which the surgeon views the surgical field, and a computer
for reconstruction of a volume-acquired MRI/CT (or other) image with sufficient
resolution to enable matching it to the real surgical anatomy. The volume
rendered image can be displayed through the videoscope system so that the
surgeon can see it stereoscopically. A software interface can enable the surgeon
to vary the translucency of the rendered and real anatomy so that the rendered
anatomy can be superimposed onto the real anatomy. The surgeon can “open
up” the rendered anatomy to view any/all internal details of the image as they
relate to the real anatomy. Surgical tools can be spatially registered to the
rendered anatomy so that behavior can be tracked and applied to the image.
As shown in the figure, an example of such a task is placing small objects
inside a jar of dark gelatin so that they are not visible to the surgeon. The task is
for the surgeon to use a long forceps to reach into the gelatin and touch or grasp
the objects. The surgical videoscope system can obtain a volume scan of the
gelatin jar, render the jar in three dimensions, and display a binocular
rendering through the videoscope. The surgeon can view the rendering and the
real jar through the scope system and fit the rendered jar onto the real jar. By
differentially adjusting translucency, the surgeon can reach into the real jar with
a forceps and grasp a selected object, while avoiding other designated objects.
The grasping instrument can be spatially registered onto the volumetric
rendering of the surgical field, thereby allowing a graphic of the tool to be
displayed on the rendering of the surgical field in appropriate anatomic
orientation. This can provide enhanced guidance. This can be implemented by
touching designated landmarks on the real object (jar) with a digitizer that
communicates with the image rendering system, thus defining the object/probe
relationship. Because the object (jar) is registered to the image of the jar by
superimposition, a graphic of the probe can be displayed in relation to the image
of the jar, enabling virtual surgery.
There are many situations in which the present system can be used, for
example, remote surgery, medical training, and tele-medicine, which can be used
for third world countries or in a military situation. Surgeons remotely located
from patients can assist other surgeons near the patient, can assist medics near
the patient, and can perform surgical operations when coupled to a robotic
surgery system. Other examples include augmented or enhanced surgery, that is,
normal surgery using virtual environments, an example of which is endoscopic
surgery. Surgical procedures can also be simulated. Surgeons located remotely
from each other may plan and practice a procedure before carrying out the
operation on a real patient.
Other applications include the preparation of a patient before surgery,
medical therapy, preventative medicine, exposure therapy, reducing phobias,
training people with disabilities, skill enhancement, and the like.
The viewer then views the projection through passive stereoscopic
polarized glasses (similar to sunglasses) that route the left-eye image to the left
eye, and the right-eye image to the right eye. This provides an illusion of
stereopsis when the correctly-offset images are properly rendered by the
software. The system can be replaced by other types of stereoscopic displays
with no functional detriment to the system. The stereoscopic display can
comprise at least two display projectors fitted with polarizing lenses, a back-
projection screen material that maintains light polarization upon reflection,
special glasses that restrict each eye to see only light of a particular polarization,
and the viewer. The image to be viewed can be rendered with two slightly
different view transformations, reflecting the different locations of the ideal
viewer's two eyes. One projector displays the image rendered for the left eye's
position, and the other projector displays the image rendered for the right eye's
position. The glasses restrict the light so that the left eye sees only the image
rendered for it, and the right eye sees only the image rendered for it. The viewer,
presented with a reasonable stereoscopic image, will perceive depth.
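The two slightly different view transformations can be sketched as a lateral displacement of the eye position (an interocular distance of 64 mm is assumed, and the full look-at/projection mathematics is reduced to a translation for brevity):

    import numpy as np

    def eye_view_matrix(viewer_pos, lateral_offset_m):
        # 4x4 translation standing in for the view transform of one eye,
        # displaced laterally from the ideal viewer position (sketch only).
        T = np.eye(4)
        T[:3, 3] = -(viewer_pos + np.array([lateral_offset_m, 0.0, 0.0]))
        return T

    viewer = np.array([0.0, 1.6, 2.0])   # assumed viewer position in meters
    IOD = 0.064                          # assumed interocular distance
    left_view = eye_view_matrix(viewer, -IOD / 2)    # fed to the left projector
    right_view = eye_view_matrix(viewer, +IOD / 2)   # fed to the right projector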
The figure is a block diagram illustrating an exemplary operating
environment for performing the disclosed methods. This exemplary operating
environment is only an example of an operating environment and is not intended
to suggest any limitation as to the scope of use or functionality of operating
environment architecture. Neither should the operating environment be
interpreted as having any dependency or requirement relating to any one or
combination of components illustrated in the exemplary operating environment.
The methods can be operational with numerous other general purpose or
special purpose computing system environments or configurations. Examples of
well known computing systems, environments, and/or configurations that may
be suitable for use with the system and method include, but are not limited to,
personal computers, server computers, laptop devices, and multiprocessor
systems. Additional examples include set top boxes, programmable consumer
electronics, network PCs, minicomputers, mainframe computers, distributed
computing environments that include any of the above systems or devices, and
the like.
The methods may be described in the general context of computer
instructions, such as program modules, being executed by a computer.
Generally, program modules include routines, programs, objects, components,
data structures, etc. that perform particular tasks or implement particular abstract
data types. The system and method may also be practiced in distributed
computing environments where tasks are performed by remote processing
devices that are linked through a communications network. In a distributed
computing environment, program modules may be located in both local and
remote computer storage media including memory storage devices.
The methods disclosed herein can be implemented via one or more
general-purpose computing devices in the form of a computer 1501. The
components of the computer 1501 can include, but are not limited to, one or
more processors or processing units 1503, a system memory 1512, and a system
bus 1513 that couples various system components including the processor 1503
to the system memory 1512.
The system bus 1513 represents one or more of several possible types of
bus structures, including a memory bus or memory controller, a peripheral bus,
an accelerated graphics port, and a processor or local bus using any of a variety
of bus architectures. By way of example, such architectures can include an
Industry Standard Architecture (ISA) bus, a Micro Channel Architecture (MCA)
bus, an Enhanced ISA (EISA) bus, a Video Electronics Standards Association
(VESA) local bus, and a Peripheral Component Interconnects (PCI) bus, also
known as a Mezzanine bus. The bus 1513, and all buses specified in this
description, can also be implemented over a wired or wireless network
connection, and each of the subsystems, including the
processor 1503, a mass storage device 1504, an operating system 1505,
application software 1506, data 1507, a network adapter 1508, system memory
1512, an Input/Output Interface 1510, a display adapter 1509, a display device
1511, and a human machine interface 1502, can be contained within one or
more remote computing devices 1514a,b,c at physically separate locations,
connected through buses of this form, in effect implementing a fully distributed
system.
The computer 1501 typically includes a variety of computer readable
media. Such media can be any available media that is accessible by the
computer 1501 and includes both volatile and non-volatile media, removable and
non-removable media. The system memory 1512 includes computer readable
media in the form of volatile memory, such as random access memory (RAM),
and/or non-volatile memory, such as read only memory (ROM). The system
memory 1512 typically contains data such as data 1507 and/or program modules
such as operating system 1505 and application software 1506 that are
immediately accessible to and/or are presently operated on by the processing unit
1503.
The computer 1501 may also include other removable/non-removable,
volatile/non-volatile computer storage media. By way of example, the figure
illustrates a mass storage device 1504 which can provide non-volatile storage of
computer code, computer readable instructions, data structures, program
modules, and other data for the computer 1501. For example, a mass storage
device 1504 can be a hard disk, a removable magnetic disk, a removable optical
disk, magnetic cassettes or other magnetic storage devices, flash memory cards,
CD-ROM, digital versatile disks (DVD) or other optical storage, random access
memories (RAM), read only memories (ROM), electrically erasable
programmable read-only memory (EEPROM), and the like.
Any number of program modules can be stored on the mass storage
device 1504, including by way of example, an operating system 1505 and
application software 1506. Each of the operating system 1505 and application
software 1506 (or some combination thereof) may include elements of the
programming and the application software 1506. Data 1507 can also be stored
on the mass storage device 1504. Data 1507 can be stored in any of one or more
databases known in the art. Examples of such databases include DB2®,
Microsoft® Access, Microsoft® SQL Server, Oracle®, mySQL, PostgreSQL,
and the like. The databases can be centralized or distributed across multiple
systems.
A user can enter commands and information into the computer 1501 via
an input device (not shown). Examples of such input devices include, but are
not limited to, a keyboard, pointing device (e.g., a “mouse”), a microphone, a
joystick, a serial port, a scanner, tactile input devices such as gloves and other
body coverings, and the like. These and other input devices can be connected to
the processing unit 1503 via a human machine interface 1502 that is coupled to
the system bus 1513, but may be connected by other interface and bus structures,
such as a parallel port, game port, or a universal serial bus (USB).
A display device 1511 can also be connected to the system bus 1513 via
an interface, such as a display adapter 1509. A computer 1501 can have more
than one display adapter 1509, and a computer 1501 can have more than one
display device 1511. For example, a display device can be a monitor, an LCD
(Liquid Crystal Display), or a projector. In addition to the display device 1511,
other output peripheral devices can include components such as speakers (not
shown) and a printer (not shown), which can be connected to the computer 1501
via Input/Output Interface 1510.
The computer 1501 can operate in a networked environment using
logical connections to one or more remote computing devices 1514a,b,c. By
way of example, a remote computing device can be a personal computer, a
portable computer, a server, a router, a network computer, a peer device or other
common network node, and so on. Logical connections between the computer
1501 and a remote computing device 1514a,b,c can be made via a local area
network (LAN) and a general wide area network (WAN). Such network
connections can be through a network adapter 1508. A network adapter 1508
can be implemented in both wired and wireless environments. Such networking
environments are commonplace in offices, enterprise-wide computer networks,
intranets, and the Internet 1515.
One or more VIP displays 1516a,b,c,d,e can communicate with the
computer 1501. In one aspect, VIP display 1516c can communicate with
computer 1501 through the input/output interface 1510. This communication
can be wired or wireless. Remote VIP displays 1516a,b,c can communicate with
computer 1501 by communicating first with a respective remote computing
device 1514a,b,c which then communicates with computer 1501 through the
network adapter 1508 via a network such as the Internet 1515. Remote VIP
display 1516d can communicate with computer 1501 without the need for a
remote computing device. Remote VIP display 1516d can communicate via a
network, such as the Internet 1515. The VIP displays 1516a,b,c,d,e can
communicate wirelessly or through a wired connection. The VIP displays
1516a,b,c,d,e can communicate individually or collectively as part of a VIP
display network.
For purposes of illustration, application programs and other executable
program components such as the operating system 1505 are illustrated herein as
discrete blocks, although it is recognized that such programs and components
reside at various times in different storage components of the computing device
1501, and are executed by the data processor(s) of the computer. An
implementation of application software 1506 may be stored on or transmitted
across some form of computer readable media. Computer readable media can be
any available media that can be accessed by a computer. By way of example,
and not limitation, computer readable media may comprise “computer storage
media” and “communications media.” “Computer storage media” include
volatile and non-volatile, removable and non-removable media implemented in
any method or technology for storage of information such as computer readable
instructions, data structures, program modules, or other data. Computer storage
media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or
other memory technology, CD-ROM, digital versatile disks (DVD) or other
optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other
magnetic storage devices, or any other medium which can be used to store the
desired information and which can be accessed by a computer.
Unless otherwise expressly stated, it is in no way intended that any
method set forth herein be construed as requiring that its steps be performed in a
specific order. Accordingly, where a method claim does not actually recite an
order to be followed by its steps, or it is not otherwise specifically stated in the
inventive concepts or descriptions that the steps are to be limited to a specific
order, it is in no way intended that an order be inferred, in any respect. This holds
for any possible non-express basis for interpretation, including: matters of logic
with respect to arrangement of steps or operational flow; plain meaning derived
from grammatical organization or punctuation; and the number or type of
embodiments described in the specification.
It will be apparent to those skilled in the art that various modifications
and variations can be made in the present methods and systems without
departing from the scope or spirit. Other embodiments will be apparent to those
skilled in the art from consideration of the specification and practice disclosed
herein. It is intended that the specification and examples be considered as
exemplary only, with a true scope and spirit being indicated by the following
claims.
Claims (22)
1. A method for role negotiation comprising: rendering a common field of interest that reflects a presence of a plurality of elements, wherein at least one of the elements is a remote element located remotely from another of the elements; receiving a plurality of role designations, each role designation associated with one of a plurality of devices, wherein at least one of the plurality of devices is a remote device located remotely from another of the plurality of devices; wherein each of the role designations represents a construct that triggers a logical execution of one or more programs that, upon execution, affect one or more settings of the common field of interest including: the processing of image, audio, or video information presented to a user; the transmission of images, audio and/or video information presented to a user; and, the presentation of a graphical user interface to a user; updating the common field of interest based upon the plurality of role designations, wherein each of the plurality of role designations defines an interactive functionality associated with the respective device of the plurality of devices; receiving a role change request from a first device of the plurality of devices, wherein the role change request comprises a desired role and the desired role is independent of a location of the first device; granting the role change request based upon one or more role designation rules; updating the role designation associated with the first device to match the desired role in response to the granting the role change request; establishing one or more communication links between the first device and one or more other devices of the plurality of devices, wherein the communication links are configured to facilitate one of unidirectional and multidirectional transmission based on the updated role designation associated with the first device and the role designation of at least one other of the plurality of devices; wherein each role designation defines a transmission rule for managing ingress and egress of information, via a processor, associated with the respective device of the plurality of devices, wherein the transmission rule comprises one or more of a latency parameter and a bandwidth parameter, and wherein the transmission rule defines a portion of image processing to be offloaded to a proxy server; managing the ingress and egress of the information via the or each established said communication link between at least two of the plurality of devices; according to the transmission rule associated with each of the plurality of role designations, outputting the managed information associated with the respective device of the plurality of devices to a display of the respective device; and outputting the common field of interest based on the updated role designation such that a display parameter of the common field of interest is dependent upon the updated role designation.
2. The method of claim 1, further comprising rendering the presence of each of the elements in real time, wherein each of the elements is registered relative to another of the elements.
3. The method of claim 1, further comprising the step of rendering interaction between the elements in the common field of interest.
4. The method of claim 1, wherein the plurality of role designations comprises one or more of an operator role, a helper role, a heads up role, a subscriber role, an administrator role, an active viewer role, and a sender role.
5. The method of claim 1, wherein interactive functionality comprises one or more of sending and receiving image data and wherein one or more of the plurality of role designations define a different interactive functionality compared to another of the role designations.
6. The method of claim 1, further comprising the step of calibrating the common field of interest by aligning a calibration feature of a remote field and a calibration feature of a local field.
7. The method of claim 1, wherein updating the common field of interest comprises modifying a display characteristic of one or more of the remote element and the local element.
8. The method of claim 1, further comprising transmitting a message to a user interface of the first device in response to the granting the role change request.
9. The method of claim 1, wherein when the operator role is designated, the local element is rendered either with a degree of transparency or with substantially full opacity.
10. The method of claim 9, wherein when the operator role is designated, the remote element is rendered with a degree of transparency.
11. The method of claim 1, wherein a heads up role may be designated, where one or more local images may be rendered in a majority of a display screen of the first device, with one or more remote images displayed in a smaller window, or one or more remote images may be rendered in a majority of the display screen of the first device, with one or more local images displayed in a smaller window.
12. A method for role negotiation comprising: generating a common field of interest that reflects a presence of a plurality of elements, wherein at least one of the elements is a remote element located remotely from another of the elements; causing output of the common field of interest to a plurality of devices; receiving a plurality of role designations, each role designation associated with one of the plurality of devices, wherein at least one of the plurality of devices is a remote device located remotely from another of the plurality of devices; managing one or more of the ingress and egress of information to one or more of the plurality of devices based upon one or more of the plurality of role designations, wherein each of the plurality of role designations defines a transmission rule for managing ingress and egress of information associated with the respective device of the plurality of devices, wherein the transmission rule comprises one or more of a latency parameter and a bandwidth parameter, and wherein the transmission rule defines a portion of image processing to be offloaded to a proxy server; and according to the transmission rule associated with each of the plurality of role designations, outputting the managed information associated with the respective device of the plurality of devices to a display of the respective device.
13. The method of claim 12, wherein the plurality of role designations comprises one or more of an operator role, a helper role, a heads up role, a subscriber role, an administrator role, an active viewer role, and a sender role.
14. The method of claim 12, wherein one or more of the plurality of role designations define a different transmission rule compared to another of the role designations.
15. The method of claim 12, wherein the information comprises one or more of audio information and image information.
16. The method of claim 12, wherein the transmission rule facilitates the establishment of one or more communication links between the respective device and one or more other devices of the plurality of devices, wherein the communication links are configured to facilitate one of unidirectional and multidirectional transmission based on the updated role designation associated with the respective device and the role designation of at least one other of the plurality of devices.
17. A system for role negotiation comprising: a processor; and a memory bearing instructions that, upon execution by the processor, cause the system at least to: generate a common field of interest that reflects a presence of a plurality of elements, wherein at least one of the elements is a remote element located remotely from another of the elements; receive a role designation; manage one or more of the ingress and egress of information based upon the role designation; update the common field of interest based upon the one or more of the ingress and egress of information; output the updated common field of interest to the display; receive a role change request from a first device of a plurality of devices, wherein the role change request comprises a desired role; grant the role change request based upon one or more role designation rules; update the role designation associated with the first device to match the desired role in response to the granting the role change request; and establish one or more communication links between the first device and one or more other devices of the plurality of devices, wherein the communication links are configured to facilitate one of unidirectional and multidirectional transmission based on the updated role designation associated with the first device and the role designation of at least one other of the plurality of devices; wherein each role designation defines a transmission rule for managing ingress and egress of information associated with the respective device of the plurality of devices, wherein the transmission rule comprises one or more of a latency parameter and a bandwidth parameter, and wherein the transmission rule defines a portion of image processing to be offloaded to a proxy server, and according to the transmission rule associated with each role designation, outputting the managed information associated with the respective device of the plurality of devices to the display.
18. The system of claim 17, wherein the processor is further configured to continuously update the common field of interest such that the presence of the remote element is rendered in real time to a local viewer.
19. The system of claim 17, wherein the information comprises one or more of audio information and image information.
20. The system of claim 17, wherein the role designation comprises one or more of an operator role, a helper role, a heads up role, a subscriber role, an administrator role, an active viewer role, and a sender role.
21. The system of claim 17, wherein receiving a role designation comprises determining a network characteristic comprising one or more of latency and bandwidth and receiving a role designation based on the determined network characteristic.
22. The system of claim 17, further comprising instructions that, upon execution by the processor, cause the system at least to: transmit a message to a user interface of the first device in response to the granting the role change request.
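As a concrete illustration of the negotiation flow recited in claims 1, 12, and 17 (a role change request granted against role designation rules, with each role designation carrying a transmission rule built from a latency parameter, a bandwidth parameter, and a proxy-offload flag), the sketch below shows one shape such logic could take. The role names come from claim 4; the specific rule values, the single-operator constraint, and all identifiers are assumptions for this example, not a definitive implementation of the claims.

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass(frozen=True)
class TransmissionRule:
    max_latency_ms: int      # latency parameter
    min_bandwidth_kbps: int  # bandwidth parameter
    offload_to_proxy: bool   # portion of image processing offloaded to a proxy server
    multidirectional: bool   # False implies unidirectional communication links

# Hypothetical mapping of role designations (claim 4) to transmission rules.
ROLE_RULES: Dict[str, TransmissionRule] = {
    "operator":      TransmissionRule(50,  8000, False, True),
    "helper":        TransmissionRule(100, 4000, True,  True),
    "heads up":      TransmissionRule(100, 4000, True,  True),
    "subscriber":    TransmissionRule(500, 1000, True,  False),
    "administrator": TransmissionRule(200, 2000, False, True),
    "active viewer": TransmissionRule(250, 2000, True,  False),
    "sender":        TransmissionRule(100, 6000, False, False),
}

@dataclass
class Device:
    device_id: str
    role: str

def grant_role_change(requester: Device, desired_role: str,
                      devices: List[Device]) -> bool:
    """Grant or deny a role change request against simple role designation
    rules: the desired role must be a known designation, and (an assumed
    rule for this sketch) at most one device may hold the operator role at
    a time. Note the desired role is independent of the device's location."""
    if desired_role not in ROLE_RULES:
        return False
    if desired_role == "operator" and any(
            d.role == "operator" and d is not requester for d in devices):
        return False
    requester.role = desired_role  # update the role designation to match
    return True

# Example: a remote helper requests the operator role.
devices = [Device("local-1", "operator"), Device("remote-1", "helper")]
print(grant_role_change(devices[1], "operator", devices))  # False: role held
grant_role_change(devices[0], "helper", devices)           # operator steps down
print(grant_role_change(devices[1], "operator", devices))  # True: now granted
```

Once a change is granted, the communication links would be established and ingress and egress managed according to the transmission rule of each device's updated designation, as the claims recite.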
Applications Claiming Priority (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US13/929,080 US9940750B2 (en) | 2013-06-27 | 2013-06-27 | System and method for role negotiation in multi-reality environments |
US13/929,080 | 2013-06-27 | ||
PCT/US2014/044679 WO2014210517A2 (en) | 2013-06-27 | 2014-06-27 | System and method for role negotiation in multi-reality environments |
Publications (2)
Publication Number | Publication Date |
---|---|
NZ715526A NZ715526A (en) | 2020-11-27 |
NZ715526B2 true NZ715526B2 (en) | 2021-03-02 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10482673B2 (en) | System and method for role negotiation in multi-reality environments | |
CA2896240C (en) | System and method for role-switching in multi-reality environments | |
US10622111B2 (en) | System and method for image registration of multiple video streams | |
AU2018264095B2 (en) | System and method for managing spatiotemporal uncertainty | |
AU2008270883B2 (en) | Virtual interactive presence systems and methods | |
NZ715526B2 (en) | System and method for role negotiation in multi-reality environments |