NZ619935B2 - Drag and drop of objects between applications - Google Patents
- Publication number
- NZ619935B2, NZ619935A, NZ61993512A
- Authority
- NZ
- New Zealand
- Prior art keywords
- processor
- window
- application
- rendering
- selection
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0484—Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
- G06F3/04842—Selection of displayed objects or displayed text elements
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0484—Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
- G06F3/0486—Drag-and-drop
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
Abstract
Methods and systems directed to capturing an object (303a) rendered on the first window (V1) of a display by a first program, extracting the object (303a), permitting a user to drag the object (303a) across the display into a second window (V2) of the display containing a second program, and importing the object (303d) into the second program in substantially real-time. The drag and drop from a first window (V1) to a second window (V2) comprises rendering a borderless window and a selection (306) in the borderless window. The selection (306) comprises the object (303a) selected by the user. The borderless window is then moved from the first window (V1) to the second window (V2). The drag and drop process occurs seamlessly to the user and permits a user to select one or more of a plurality of objects in one application, drag the object into a second application for modification, and drag the modified object back into the first application for real-time preview.
Description
DRAG AND DROP OF OBJECTS BETWEEN APPLICATIONS
INVENTOR
Julian Michael Urbach
ASSIGNEE: OTOY, LLC
ATTORNEYS:
Greenberg Traurig, LLP
MetLife Building
200 Park Ave.
New York, NY 10166
(212) 801-9200
USPTO Customer Number: 76058
DRAG AND DROP OF OBJECTS BETWEEN APPLICATIONS
CROSS REFERENCE TO RELATED APPLICATIONS
This application is a non-provisional application of U.S. Provisional Application No. 61/523,142, filed on August 12, 2011, entitled "DRAG AND DROP OF OBJECTS BETWEEN APPLICATIONS," and U.S. Non-Provisional Application No. 13/571,182, filed on August 9, 2012, entitled "DRAG AND DROP OF OBJECTS BETWEEN APPLICATIONS," the entirety of these applications being incorporated herein by reference.
The present disclosure generally relates to exporting and importing a three-dimensional graphic object from a first application to a second application in real-time, and more specifically to the graphical representation, on a user's display, of the export/import process between windows running the first and second applications.
BACKGROUND
Graphics programs, in general, render 2D or 3D objects by converting those objects into draw commands, which are then fed into a graphics API, such as OpenGL or Direct3D. Within the API rendering pipeline, the draw commands undergo various processes such as hidden surface removal, Z-buffering, rasterization, and clipping before they are output as a 2D image on the application user's display. Generally, exporting a particular 3D object from a graphics program, if possible, is an arduous process, requiring decompiling of the program data to retrieve an OBJ file or other readable 3D format. Similarly, importing a file into a 3D graphics program requires compiling the 3D object into the required format of the graphics program, and often requires repackaging an entire 3D object library for successful object importation.
SUMMARY
The present disclosure generally relates to exporting an object from a first 3D program for rendering in a second 3D program in real-time. In one embodiment, a computer system hosts a plurality of application instances, each application instance corresponding to a local client application. The computer system concurrently renders, utilizing the resources of the graphics processing unit of the computer system, the graphical output of the application instances corresponding to at least two of the local client applications in separate windows on the computer system display. A user seeking to export a 3D object from the first application selects an object from the first window and drags the object to the second window. As the user drags the object, it is rendered on the computer display pursuant to the user's drag commands. The user then drops the object in the second application rendered in the second window, and the object is imported in real-time into the second application.
In one embodiment, a first computer system hosts a first application locally and the second application is hosted on an application server. The computer system renders both the local application and the remote application through its local hardware and rendering API in two separate windows on the computer system display. A user seeking to export a 3D object from the first application selects an object from the first window and drags the object to the second window. As the user drags the object, it is rendered on the computer display pursuant to the user's drag commands. The user then drops the object in the second application rendered in the second window, and the object is imported in real-time into the second application.
In one embodiment, a first computer system hosts a first application locally and the second application is hosted on an application server with server-side rendering. The computer system renders the local application using its own graphics processor and graphics API, and the remote application is rendered by a server-side graphics API. A user seeking to export a 3D object from the first application selects an object from the first window and drags the object to the second window. As the user drags the object, it is rendered on the computer display pursuant to the user's drag commands. The user then drops the object in the second application rendered in the second window, and the object is imported in real-time into the second application.
In one embodiment, a method for importing an object into a second application is disclosed. The method comprises receiving, by a processor, a first user input and, responsive to the first user input, selecting, by the processor, an object rendered in a first window of a display by a first application and a rendering API. The method further comprises extracting the object from the first application via an engine and receiving a second user input by the processor. Responsive to the second user input, the method comprises dragging, by the processor, the object on the display from the first window to a second application rendered in a second window and displaying, by the processor, the object in an intermediate space between the first window and the second window during the dragging. Responsive to the object crossing a focus border of the second window, the method comprises importing the object into the second application.
In an embodiment, selecting an object in accordance with the method comprises detouring, by the processor, the first user input to the engine, intercepting, by the processor, draw commands from the first application to the rendering API, and determining, by the processor, the object from the draw commands of the first application. The method further comprises selecting, by the processor, the object and other objects in accordance with a selection algorithm. In an embodiment, the selection algorithm is configured to select all objects connected to the first object the ray hits. In an embodiment, the selection algorithm is configured to select all objects with a same object identifier as the first object the ray hits. In an embodiment, the selection algorithm is configured to select all objects with a same motion vector as the first object the ray hits. In an embodiment, the selection algorithm is configured to select all objects with a same texture as the first object the ray hits.
In an embodiment, the first user input selecting an object is a cursor selection from a pointing device. In an embodiment, the first user input selecting an object comprises a user tracing a border around the object. In an embodiment, the first user input selecting an object comprises a selection tool that selects all contiguous pixels of a predetermined set of characteristics. In an embodiment, the first user input selecting an object is a tap on a touch interface. In an embodiment, the first user input selecting an object is a gesture on a touch interface.
In an embodiment, determining, by the processor, the object from the draw commands further comprises assigning, by the processor, a camera on the near plane of a scene at the coordinates of the first user input, ray casting, by the processor, from the camera to a far plane, and selecting the first object the ray hits. The method also comprises receiving, by the processor, further user input to expand or filter the selection, wherein expanding or filtering the selection comprises selecting or deselecting, by the processor, other objects in a scene connected to the selected object or objects.
In an embodiment, expanding or filtering the selection comprises selecting or deselecting, by the processor, other objects in a scene with the same object identifier as the selected object or objects. In an embodiment, expanding or filtering the selection comprises selecting or deselecting, by the processor, other objects in a scene with the same motion vector as the selected object or objects. In an embodiment, expanding or filtering the selection comprises selecting or deselecting, by the processor, other objects in a scene with a same texture as the selected object or objects. In an embodiment, expanding or filtering the selection comprises selecting or deselecting, by the processor, other objects in a scene designated by the further user input. In an embodiment, the designation process comprises receiving, by the processor, a user input, assigning, by the processor, a camera on the near plane of the scene at the coordinates of the user input, and ray casting, by the processor, from the camera to the far plane and designating the first object the ray hits.
In an embodiment, dragging the object on the display by the processor comprises rendering, by the processor, a borderless window and a selection in the borderless window, wherein the selection comprises the object or objects selected by the user. In an embodiment, in response to receiving user input to drag the borderless window from the first window to the second window, the method comprises moving, by the processor, the borderless window across the display pursuant to the user inputs.
In an embodiment, rendering, by the processor, the selection in the borderless window comprises copying, by the processor, the draw commands associated with the selection from the first application, inserting, by the processor, the draw commands from the first application in the rendering API pipeline, and rendering, by the processor, the draw commands via the rendering API.
In an embodiment, the method of importing the selection to a second application comprises converting, by the processor, the selection for implementation into the second application and rendering, by the processor, the selection via the engine in the second window during the conversion. In an embodiment, converting the selection comprises modifying, by the processor, the draw commands into a file format utilized by the second application. In an embodiment, the file format is an OBJ file.
Upon completion of the conversion, the method comprises importing, by the processor, the selection into the second application. Upon importing the object into the second application, the method further comprises halting, by the processor, the engine rendering process and rendering, by the processor, the object from within the second application.
In an embodiment, the method of rendering the selection via the engine comprises inserting, by the processor, draw commands into a rendering API pipeline which is operable to instruct the rendering API to render the selection into the second window. In an embodiment, the second application has its own rendering API, and rendering the selection from within the second application comprises rendering, by the processor, the selection in the second window using the second application's rendering API.
In an embodiment, the method of rendering the selection in the borderless window comprises obtaining, by the processor, first conditions, comprising lighting and environmental effects from the first application, and second conditions, comprising lighting and environmental effects from the second application. The method also comprises gradually applying, by the processor, the first and second conditions depending on a distance of the borderless window from the first and second windows.
In an embodiment, a system for exporting and importing an object from a first application to a second application is disclosed. In an embodiment, the object is a three-dimensional object. The system comprises a graphics processing unit, a processor and a storage medium for tangibly storing thereon program logic for execution by the processor. In an embodiment, the storage medium can additionally comprise one or more of the first and second applications. The program logic in the storage medium comprises first user input receiving logic, executed by the processor, to receive a first user input. Selecting logic, comprised in the storage medium and executed by the processor, selects an object rendered in a first window of a display by a first application and a rendering API in response to receiving the first user input. The object is extracted from the first application by extracting logic comprised on the storage medium. In addition, the processor executes second user input receiving logic to receive a second user input, dragging logic to drag the object on the display from the first window to a second application rendered in a second window in response to receiving the second user input, and, in response to the object crossing the focus border of the second window, importing logic, comprised in the storage medium, is executed by the processor to import the object into the second application.
In an embodiment, the selecting logic executed by the processor to select an object further comprises detouring logic, which is also executed by the processor, to detour the first user inputs from the first application. In addition, the selecting logic comprises intercepting logic, executed by the processor, to intercept the draw commands from the first application to the rendering API, determining logic, executed by the processor, to determine the object from the draw commands associated with the first user input, and selecting logic, executed by the processor, to select the three-dimensional object and other objects in accordance with a selection algorithm.
In an embodiment, the determining logic further comprises assigning logic, executed by the processor, to assign a camera on the near plane of the scene at the coordinates of the first user input. The determining logic executed by the processor also comprises ray casting logic, for ray casting from the camera to the far plane and selecting the first object the ray hits.
In an embodiment, the dragging logic executed by the processor comprises window rendering logic, to render a borderless window, selection rendering logic, to render a selection in the borderless window, wherein the selection comprises the object or objects selected by the user, and moving logic, to move the borderless window across the display pursuant to the user inputs in response to receiving user inputs to drag the borderless window from the first window to the second window.
In an embodiment, the selection rendering logic executed by the processor further comprises copying logic, to copy the draw commands associated with the selection, inserting logic, to insert the draw commands in the rendering API pipeline, and draw commands rendering logic, to render the draw commands via the rendering API. In an embodiment, the selection rendering logic further comprises first condition obtaining logic and second condition obtaining logic, executed by the processor, to obtain first conditions, comprising the lighting and environmental effects from the first application, and second conditions, comprising the lighting and environmental effects from the second application. In addition, the selection rendering logic executed by the processor comprises conditions applying logic, to gradually apply the first and second conditions depending on the distance of the borderless window from the first and second windows.
In an embodiment, the importing logic executed by the processor further comprises converting logic, for converting the selection for implementation into the second application such that the selection is imported into the second application upon completion of the conversion process, rendering logic, for rendering the selection in the second window during the conversion process, and halting logic, for halting the engine rendering process and rendering the object from within the second application upon importing the object into the second application. In an embodiment, the converting logic executed by the processor for the conversion process further comprises modifying logic to modify the draw commands into a file format utilized by the second application. In an embodiment, the file format is an OBJ file. In an embodiment, the rendering logic further comprises inserting logic, executed by the processor, to insert draw commands into a rendering API pipeline operable to instruct the rendering API to render the selection into the second window. In an embodiment, the second application rendering API renders the selection in the second window upon importing the object into the second application.
A computer readable storage medium is disclosed, having stored thereon instructions which, when executed by a processor, cause the processor to receive a first user input and, responsive to the first user input, select an object rendered in a first window of a display by a first application and a rendering API. The instructions further cause the processor to extract the object from the first application via an engine. In addition, the storage medium comprises instructions to receive a second user input and to drag the object on the display from the first window to a second application rendered in a second window responsive to the second user input. The storage medium further comprises instructions to import the object into the second application responsive to the object crossing a focus border of the second window.
These and other embodiments whose features can be combined will be apparent to
those of ordinary skill in the art by reference to the following detailed description and the
accompanying drawings.
BRIEF DESCRIPTION OF THE DRAWINGS
In the drawing figures, which are not to scale, and where like reference numerals indicate like elements throughout the several views:
FIGURE 1 illustrates an example of a computer system hosting two local applications and exporting a 3D object from a first application for importation into a second application;
FIGURE 2 illustrates the general flow of the exportation and importation process, consisting of grabbing the object from the first application, dragging the object from the first to the second application, and dropping the object into the second application for rendering;
FIGURE 3 illustrates the flow of the grab process;
FIGURE 4 illustrates the flow of the drag process;
FIGURE 5 illustrates the flow of the drop process;
FIGURE 6 illustrates the process flow of the re-entry process;
FIGURE 7 illustrates a representation of the computer system display during the process integrating environment effect rendering;
FIGURE 8 illustrates an example of a computer system hosting a local application
and an application server hosting a remote second application;
FIGURE 9 illustrates an example of a computer system hosting a local application and an application server with server-side rendering hosting a remote second application;
FIGURE 10 illustrates an example computer system 1000 suitable for implementing one or more portions of particular embodiments.
DESCRIPTION OF EMBODIMENTS
Subject matter will now be described more fully hereinafter with reference to the accompanying drawings, which form a part hereof, and which show, by way of illustration, specific example embodiments. Subject matter may, however, be embodied in a variety of different forms and, therefore, covered or claimed subject matter is intended to be construed as not being limited to any example embodiments set forth herein; example embodiments are provided merely to be illustrative. Likewise, a reasonably broad scope for claimed or covered subject matter is intended. Among other things, for example, subject matter may be embodied as methods, devices, components, or systems. Accordingly, embodiments may, for example, take the form of hardware, software, firmware or any combination thereof (other than software per se). The following detailed description is, therefore, not intended to be taken in a limiting sense.
In the accompanying drawings, some features may be exaggerated to show details of particular components (and any size, material and similar details shown in the figures are intended to be illustrative and not restrictive). Therefore, specific structural and functional details disclosed herein are not to be interpreted as limiting, but merely as a representative basis for teaching one skilled in the art to variously employ the disclosed embodiments.
The present invention is described below with reference to block diagrams and operational illustrations of methods and devices to select and present media related to a specific topic. It is understood that each block of the block diagrams or operational illustrations, and combinations of blocks in the block diagrams or operational illustrations, can be implemented by means of analog or digital hardware and computer program instructions. These computer program instructions can be provided to a processor of a general purpose computer, special purpose computer, ASIC, or other programmable data processing apparatus, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, implement the functions/acts specified in the block diagrams or operational block or blocks. Various aspects or features will be presented in terms of systems that may include a number of devices, components, modules, and the like. It is to be understood and appreciated that the various systems may include additional devices, components, modules, etc. and/or may not include all of the devices, components, modules, etc. discussed in connection with the figures. A combination of these approaches may also be used.
In some alternate implementations, the functions/acts noted in the blocks can occur out of the order noted in the operational illustrations. For example, two blocks shown in succession can in fact be executed substantially concurrently, or the blocks can sometimes be executed in the reverse order, depending upon the functionality/acts involved. Furthermore, the embodiments of methods presented and described as flowcharts in this disclosure are provided by way of example in order to provide a more complete understanding of the technology. The disclosed methods are not limited to the operations and logical flow presented herein. Alternative embodiments are contemplated in which the order of the various operations is altered and in which sub-operations described as being part of a larger operation are performed independently.
For the purposes of this disclosure, the term "server" should be understood to refer to a service point which provides processing, database, and communication facilities. By way of example, and not limitation, the term "server" can refer to a single, physical processor with associated communications and data storage and database facilities, or it can refer to a networked or clustered complex of processors and associated network and storage devices, as well as operating software and one or more database systems and applications software which support the services provided by the server.
A computing device may be capable of sending or receiving signals, such as via a wired or wireless network, or may be capable of processing or storing signals, such as in memory as physical memory states, and may, therefore, operate as a server. Thus, devices capable of operating as a server may include, as examples, dedicated rack-mounted servers, desktop computers, laptop computers, set top boxes, integrated devices combining various features, such as two or more features of the foregoing devices, or the like. Servers may vary widely in configuration or capabilities, but generally a server may include one or more central processing units and memory. A server may also include one or more mass storage devices, one or more power supplies, one or more wired or wireless network interfaces, one or more input/output interfaces, or one or more operating systems, such as Windows Server, Mac OS X, Unix, Linux, FreeBSD, or the like.
Throughout the specification and claims, terms may have nuanced meanings suggested or implied in context beyond an explicitly stated meaning. Likewise, the phrase "in one embodiment" as used herein does not necessarily refer to the same embodiment and the phrase "in another embodiment" as used herein does not necessarily refer to a different embodiment. It is intended, for example, that claimed subject matter include combinations of example embodiments in whole or in part. In general, terminology may be understood at least in part from usage in context. For example, terms, such as "and", "or", or "and/or," as used herein may include a variety of meanings that may depend at least in part upon the context in which such terms are used. Typically, "or" if used to associate a list, such as A, B or C, is intended to mean A, B, and C, here used in the inclusive sense, as well as A, B or C, here used in the exclusive sense. In addition, the term "one or more" as used herein, depending at least in part upon context, may be used to describe any feature, structure, or characteristic in a singular sense or may be used to describe combinations of features, structures or characteristics in a plural sense. Similarly, terms, such as "a," "an," or "the," again, may be understood to convey a singular usage or to convey a plural usage, depending at least in part upon context. In addition, the term "based on" may be understood as not necessarily intended to convey an exclusive set of factors and may, instead, allow for existence of additional factors not necessarily expressly described, again, depending at least in part on context.
The present disclosure generally relates to exporting an object from a first 3D program for rendering in a second 3D program in real-time. In one embodiment, a computer system hosts a plurality of application instances, each application instance corresponding to a local client application. The computer system concurrently renders, utilizing the resources of the graphics processing unit of the computer system, the graphical output of the application instances corresponding to at least two of the local client applications in separate windows on the computer system display. A user seeking to export a 3D object from the first application selects an object from the first window and drags the object to the second window. As the user drags the object, it is rendered on the computer display pursuant to the user's drag commands. The user then drops the object in the second application rendered in the second window, and the object is imported in real-time into the second application.
Rendering may be considered as the process of generating an image from a model, usually by means of computer programs. The model is usually a description of three-dimensional (3D) objects and may be represented in a strictly defined language or data structure. The model may contain geometry, viewpoint, texture, lighting, shading, and other suitable types of information. The image into which the model is rendered may be a digital image or a raster graphics image, which may be formed by a collection of pixels. The present disclosure extends the concept of rendering to generating an image that represents any output of any application. The rendering may be performed based on any data, including two-dimensional (2D) data as well as 3D data. In addition to generating images based on 3D models, particular embodiments may render images that represent the output of applications such as, for example and without limitation, web browsing applications, word processing applications, spread sheet applications, multimedia applications, scientific and medical applications, and game applications.
Exporting an object from a 3D program is typically an arduous, if not impossible, task. If the user does not have the original OBJ or other format file for modifying in a 3D graphics program such as 3D Studio Max or Maya, the user must decompile the 3D graphics file used by the first 3D program. The graphics file may be stored in a given directory within the program's install path, or compiled into the actual program code itself. In any case, the user must perform several steps to obtain the object file in a format that is readable by a 3D graphics program. Similarly, after modifying the object file, in order to view the appearance of the 3D object from within the first program, the user must recompile or import the object into the code of the first program. This process is time-consuming, and is exacerbated by the use of remote applications.
Rendering may be a type of task that is suitable to be performed by a server because the rendering process is often resource demanding, as it may be very computationally intensive, especially when the rendered images are of high resolution and high quality. In the past, it could have taken an older computer system hours or days to render a three-dimensional model into a single 2D image. With the development and advancement of computer hardware, especially computer hardware specifically designed for computer graphics applications (e.g., gaming, multimedia, entertainment, or mapping), present computer systems may be able to render each image within seconds or milliseconds. In fact, often it does not take all the available resources of a server to render a model into a single image. As such, remote applications using server-side rendering have become more prevalent.
To better facilitate the export of a 3D object from a first 3D program for importation into a second 3D program, a software engine may respond to user commands to select a particular object by intercepting the draw commands from the first application to the 3D graphics rendering pipeline, and inserting them into the draw commands for a given scene from a second application. In particular embodiments, the second application may be a remote application hosted on a separate server. In other embodiments, the second application may be a remote application with server-side rendering.
FIGURE 1 illustrates an example computing system 101 running local first application 105 and local second application 106. In normal operation, a user activates the system 101, for example, via manipulating user hardware 108, and I/O interface 107 translates the inputs from the user/hardware 108 into instructions to either first application 105 or second application 106. Both applications 105 and 106 output draw commands to rendering API 104 for rendering 2D or 3D scenes. The rendering API 104 passes the draw commands through a rendering pipeline (not shown) to convert the draw commands into instructions executed by graphics hardware 103 to render the 2D or 3D scene on display 102. In one embodiment, the first application 105 is rendered in a first window on a portion of display 102, and the second application 106 is rendered in a second window on a different portion of display 102. In an embodiment, engine 109 is a software routine running on a processor (not shown) comprised within system 101 concurrently with first application 105 and second application 106. The engine 109 constantly monitors I/O interface 107 for instructions initiating the drag and drop process. When these instructions are detected, the instructions are detoured via path 110 to the engine 109. The user may initiate the drag and drop process in a variety of methods, including but not limited to: a special keystroke, holding a predetermined key in conjunction with a mouse or other pointing device input, a tap on a touch input device, or a specific gesture on a touch input device. Once the commands are detoured to the engine 109 via path 110, the engine 109 allows the user to select a given object in any application window. Engine 109 also monitors the draw commands from first application 105 and second application 106 to the rendering API 104, and uses detoured user inputs 110 to determine which object or objects in a scene the user wishes to select. The engine 109 extracts the draw commands corresponding to the object or objects the user wishes to select, and passes them to the rendering API 104 for rendering during the drag process. During the drop process, the engine 109 continues to pass the draw commands for the object or objects to the rendering API 104 for rendering in the second application 106's window, but simultaneously converts the draw commands into a format for importing into the second application 106. Upon completion of the conversion and importation process, the engine 109 stops sending draw commands to the rendering API 104, and the selected object or objects are rendered exclusively through the second application 106. A more detailed explanation of the grab, drag, and drop processes is provided below. Only two applications are illustrated in FIGURE 1 in order to simplify the discussion. However, it may be appreciated that in practice, the computing system 101 can concurrently execute any number of applications rendering various objects which can be exported from one application to another in accordance with embodiments described herein.
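To make the interception arrangement concrete, the following is a minimal C++ sketch of how an engine playing the role of engine 109 might sit between the applications and the rendering API: every draw command is still forwarded, but commands for selected objects are also copied so they can later be re-issued in the drag window. The DrawCommand, RenderingApi, and Engine names and the forwarding scheme are illustrative assumptions, not the patent's implementation, which would hook a concrete API such as OpenGL.

```cpp
// Minimal sketch of an interception layer between applications and a
// rendering API. All types here are illustrative assumptions.
#include <cstdint>
#include <functional>
#include <set>
#include <utility>
#include <vector>

struct DrawCommand {
    std::uint32_t objectId;   // identifier of the object the command draws
    std::vector<float> data;  // vertex or state data carried by the command
};

// Stand-in for the platform rendering API (e.g. an OpenGL driver entry point).
using RenderingApi = std::function<void(const DrawCommand&)>;

class Engine {
public:
    explicit Engine(RenderingApi api) : api_(std::move(api)) {}

    // Called in place of the application's direct API call: the command is
    // always forwarded, but commands for selected objects are also captured
    // so they can be re-issued in the borderless drag window later.
    void submit(const DrawCommand& cmd) {
        if (selected_.count(cmd.objectId)) captured_.push_back(cmd);
        api_(cmd);
    }

    void select(std::uint32_t objectId) { selected_.insert(objectId); }
    const std::vector<DrawCommand>& captured() const { return captured_; }

private:
    RenderingApi api_;
    std::set<std::uint32_t> selected_;
    std::vector<DrawCommand> captured_;
};
```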
FIGURE 2 illustrates a high level flow of the drag and drop process. At step 201, the computing system begins running multiple applications. At step 202, the user initiates the grab process, described in detail in FIGURE 4. At step 203, the user drags the desired object from the window displaying the first application to the window displaying the second application, described in detail in FIGURE 5. Finally, at step 204, the user drops the object into the window for the second application, also referred to as the re-entry process, further described in detail in FIGURE 6.
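The step sequence above can be summarized as a small state machine. The sketch below is an illustration only; the Phase names and the focus-border test are assumptions rather than anything prescribed by the disclosure.

```cpp
// Sketch of the grab/drag/drop sequence of FIGURE 2 as a state machine.
#include <iostream>

enum class Phase { Idle, Grab, Drag, ReEntry, Done };

Phase step(Phase p, bool grabInput, bool focusBorderCrossed, bool dropInput) {
    switch (p) {
        case Phase::Idle:    return grabInput ? Phase::Grab : Phase::Idle;
        case Phase::Grab:    return Phase::Drag;  // object selected and extracted
        case Phase::Drag:    return (focusBorderCrossed || dropInput)
                                    ? Phase::ReEntry   // crossing the second window's
                                    : Phase::Drag;     // focus border starts re-entry
        case Phase::ReEntry: return Phase::Done;       // conversion and import complete
        default:             return Phase::Done;
    }
}

int main() {
    Phase p = Phase::Idle;
    p = step(p, true, false, false);   // user initiates grab
    p = step(p, false, false, false);  // selection extracted, drag begins
    p = step(p, false, true, false);   // object crosses the focus border
    p = step(p, false, false, false);  // import finishes
    std::cout << (p == Phase::Done ? "complete" : "in progress") << "\n";
}
```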
FIGURE 3 illustrates a representation of a user's display during the drag and drop process. Initially, the display of the computing system contains two separate windows, a first window 301 containing the rendered output of the first application, and a second window 304 containing the rendered output of the second application. Rendered within first window 301 are objects 303a and 303b. In practice, first window 301 and second window 304 may contain any number of objects; FIGURE 3 is limited to two objects for the purposes of discussion only.
The first window 301 is shown in an enlarged view in FIGURE 3 as 303. The user selects object 303a in a variety of different methods, such as clicking with an input device or tapping a touch screen at a single point 307 on object 303a, tracing a path 308 with an input device or on a touch screen through the object 303a, or drawing a marquee 306 around the object 303a with an input device or on a touch device. Other input methods, including but not limited to gestures, selection wands, and polygonal marquees, can easily be envisioned by one possessing ordinary skill in the art.
Upon selecting object 303a, the user drags the object on the display along path 302 from the first window 301 to the second window 304. In some embodiments, the object is copied, and remains rendered in window 301 while a copy 303c is rendered along the path 302 in an intermediate space extending between the first window 301 and the second window 304. In other embodiments, the actual object 303a is moved from window 301 to window 304. The path 302 is determined by user inputs and can take any path to or from window 301 to window 304.
Upon crossing the focus border for second window 304, the engine initiates the re-entry, or drop, process. When the user has positioned object 303a as he or she desires in window 304, the user initiates a command to drop the object 303a into window 304. At that point, the drag and drop process is complete and the engine imports the object 303a as object 303d into the second application for rendering in the second window 304.
FIGURE 4 depicts the process flow of the grab process. At 401, the engine receives a selection input selecting the desired object. The invention envisions multiple selection input methods, as described above. Upon receiving the selection input, the engine detours the draw commands from the first application intended for the rendering API to the engine itself. From these draw commands, the engine is capable of re-creating the scene rendered by the first application. In the context of a 3D scene, the engine now has all the 3D objects in a given scene, as well as the camera point and field of view used by the first application to render the scene.
In one embodiment, the user input is a single point on the first window on the desired object. At step 402, the engine resolves the input selection to a graphic object in the rendered display. In order to translate this two-dimensional input to a three-dimensional object, traditional methods of 3D object selection are employed. One such method is to assign a camera on the near plane of the 3D scene at the location of the user input, and ray cast from the camera to the far plane, selecting the first object that the ray hits. In another embodiment, a selection tool selects all objects touching the first object the ray hits. In another embodiment, a selection tool selects all the objects with the same object identifier, such as a tag or other meta-data, as the first object the ray hits. In another embodiment, a selection tool selects all the objects with the same texture as the first object the ray hits. In yet another embodiment, a selection tool selects all the objects with the same motion vector as the first object the ray hits.
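As a concrete illustration of the ray-casting selection described above, the following sketch places the ray origin at the user's input point on the near plane, casts toward the far plane, and returns the first object hit. Scene objects are approximated as spheres purely to keep the example self-contained; a real engine would intersect the geometry recovered from the detoured draw commands.

```cpp
// Sketch of picking: ray cast from the near-plane input point toward the far
// plane and select the nearest intersected object. Illustrative types only.
#include <cmath>
#include <limits>
#include <optional>
#include <vector>

struct Vec3 { float x, y, z; };
struct Sphere { int objectId; Vec3 center; float radius; };

static float dot(const Vec3& a, const Vec3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// The ray starts at the input point on the near plane and travels along +z
// toward the far plane; objects behind the near plane are ignored.
std::optional<int> pick(const Vec3& nearPoint, const std::vector<Sphere>& scene) {
    const Vec3 dir{0.0f, 0.0f, 1.0f};
    std::optional<int> best;
    float bestT = std::numeric_limits<float>::max();
    for (const Sphere& s : scene) {
        Vec3 oc{nearPoint.x - s.center.x, nearPoint.y - s.center.y,
                nearPoint.z - s.center.z};
        float b = dot(oc, dir);
        float c = dot(oc, oc) - s.radius * s.radius;
        float disc = b * b - c;
        if (disc < 0.0f) continue;              // ray misses this sphere
        float t = -b - std::sqrt(disc);         // nearest entry point
        if (t >= 0.0f && t < bestT) { bestT = t; best = s.objectId; }
    }
    return best;   // empty if nothing was hit
}
```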
At step 403, the engine filters or expands the user selection based upon user inputs. The user may choose to expand the selection in the same way the original object was selected, or through some other input method, such as holding down a modifier to add to the selection and drawing a marquee around other objects to be selected. Similarly, the user may be presented with a pop up window to select other objects with the same motion vector, texture, meta-data, etc. Similarly, the user may filter out objects from the selection in an analogous manner. The user may have a key for subtracting objects from a selection and click individual objects or draw a marquee around objects to be excluded from the selection. Additionally, the user may be provided a drop down menu to filter out objects with a given texture, motion vector, meta-data tag, etc. The invention envisions multiple methods of adding to or subtracting from a selection that are known to one of ordinary skill in the art.
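A minimal sketch of the expand/filter step might look like the following, assuming a simple SceneObject record with an identifier, texture, and motion vector; the field names and the two helper functions are illustrative assumptions only.

```cpp
// Sketch of expanding or filtering a selection by shared attributes such as
// texture or motion vector. Types and fields are illustrative assumptions.
#include <vector>

struct SceneObject {
    int id;
    int textureId;
    float motionX, motionY, motionZ;
    bool selected;
};

// Expand: also select every object sharing the seed object's texture.
void expandByTexture(std::vector<SceneObject>& scene, const SceneObject& seed) {
    for (SceneObject& o : scene)
        if (o.textureId == seed.textureId) o.selected = true;
}

// Filter: deselect objects whose motion vector differs from the seed's
// (exact float comparison is used only for brevity in this sketch).
void filterByMotion(std::vector<SceneObject>& scene, const SceneObject& seed) {
    for (SceneObject& o : scene)
        if (o.motionX != seed.motionX || o.motionY != seed.motionY ||
            o.motionZ != seed.motionZ)
            o.selected = false;
}
```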
After the user is satisfied, the user inputs commands to initiate the drag process at step 404. The commands may include but are not limited to: holding down the button of an input device in conjunction with moving the input device to drag the selected object, a specific gesture on a touch input device, holding down a key on the keyboard in conjunction with moving the input device, and the like. The invention envisions multiple methods of initiating the drag process 404.
FIGURE 5 illustrates the process flow of the drag process. At 501, the engine creates a window on the computing system display. In an embodiment, the window can have a visible border. In an embodiment, the window is borderless. A borderless window is merely a designated area on the second display that is completely transparent; the only objects actually rendered on the display are the graphic objects contained within the borderless window. At step 502, the engine writes the object to the borderless window. The engine performs this step by detouring the draw commands associated with the selection from the first application to the engine. The engine then sends these draw commands to the rendering API as objects to be rendered within the borderless window. The rendering API processes the draw commands through the rendering pipeline as normal and renders the scene on the display within the borderless window. Because the borderless window is transparent, only the object appears to move from the first window to the second window. Thus during the drag process, the rendering API is processing draw commands from at least two applications in addition to the engine.
The engine transmits the draw commands associated with the selected object or objects in accordance with borderless window movement commands received from the user through input/output interface 107. As stated, the disclosure envisions multiple input methods for the user to adjust the position of the borderless window during the drag process.
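As an illustration of this step, the sketch below keeps a borderless-window position that follows the user's drag deltas and offsets the captured geometry by that position before it is handed back to the rendering API. The Point, BorderlessWindow, and CapturedObject types are assumptions made for this example.

```cpp
// Sketch of the drag step: a transparent, borderless window tracks the drag
// position and the captured geometry is translated to it each frame.
#include <vector>

struct Point { float x, y; };

struct BorderlessWindow {
    Point position;  // top-left corner on the display
    // Moved pursuant to user drag input (mouse or touch deltas).
    void move(float dx, float dy) { position.x += dx; position.y += dy; }
};

struct CapturedObject {
    std::vector<Point> vertices;  // 2D footprint of the selection, for brevity
};

// Geometry to hand to the rendering API for this frame: the captured
// vertices offset by the borderless window's current position.
std::vector<Point> frameGeometry(const CapturedObject& obj,
                                 const BorderlessWindow& win) {
    std::vector<Point> out;
    out.reserve(obj.vertices.size());
    for (const Point& v : obj.vertices)
        out.push_back({v.x + win.position.x, v.y + win.position.y});
    return out;
}
```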
At any time the engine is not receiving user commands to move the borderless window, the engine polls for a user input command to determine if a drop command has been issued at step 504. The detection of a drop command sends the process to the re-entry process in step 505. Drop commands may be any command from user input equipment to indicate the user wishes to import the object into the second application rendered in the second window. Drop commands may comprise, but are not limited to, releasing a held button on a mouse or other input device, a gesture on a touch input device, or another key press on an input device. Other user input methods for the drop commands may be envisioned by one of ordinary skill in the art. In one embodiment, the re-entry process begins as soon as the object is dragged past the focus border of the second window.
FIGURE 6 illustrates the process flow of the re-entry process. At 601, the re-entry process begins. The re-entry process may be triggered by either an explicit drop instruction from the user, or the act of dragging the selection across the focus border of the second window. When the re-entry process begins, the engine begins to convert the object from draw commands into a format for implementation into the second application. In the 3D context, the engine begins converting the draw commands into a 3D object file for importation into the second application. For example, a user might be running a 3D game application in a first window and a 3D graphics editing program in the second window for editing a given model. After selecting the desired object, the user drags the object to the second window, and the re-entry process begins converting the draw commands associated with the object into a 3D object file such as an OBJ file.
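For the conversion itself, a hedged sketch of emitting captured geometry as Wavefront OBJ text is shown below. The "v" and "f" records are standard OBJ syntax, while the Mesh structure and its contents are assumptions for illustration and omit normals, texture coordinates, and materials.

```cpp
// Sketch of converting captured geometry into Wavefront OBJ text.
#include <sstream>
#include <string>
#include <vector>

struct Vertex { float x, y, z; };
struct Face { int a, b, c; };  // 0-based vertex indices

struct Mesh {
    std::vector<Vertex> vertices;
    std::vector<Face> faces;
};

std::string toObj(const Mesh& m) {
    std::ostringstream out;
    for (const Vertex& v : m.vertices)
        out << "v " << v.x << ' ' << v.y << ' ' << v.z << '\n';
    for (const Face& f : m.faces)                 // OBJ indices are 1-based
        out << "f " << f.a + 1 << ' ' << f.b + 1 << ' ' << f.c + 1 << '\n';
    return out.str();
}
```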
At step 602, the engine continues to render the object by passing the draw commands associated with the object to the rendering API. Because the conversion process is time consuming and processor intensive, the engine continues to render the object while the conversion is taking place. The engine renders the object by inserting draw commands into the draw command stream from the second application to the rendering API. Thus during re-entry, the engine is not merely rendering the object in the borderless window overlaid on top of the second window, but actually integrating the object into the second application as if it were imported and rendered by the second application itself, including environmental effects. A detailed illustration of this feature is provided in FIGURE 7.
At step 603, the conversion is completed and the object file is imported into the
second application. The importing process differs for each application and each file format. In
the context of a 3D graphics editing program, the file is imported into the workspace of the
program as if the user had opened the file directly from the 3D graphics editing program. At step
604, after successful importation of the object into the second program, the engine halts its
rendering of the object, and the object is rendered by the second application. The entire re-entry process occurs seamlessly, without any indication to the user of multiple rendering processes or a file conversion. The user is unaware of these background processes performed by the engine, and the object is rendered as if it were simply dragged from the first window and dropped in the second window.
FIGURE 7 depicts a representation of the drag and drop process ating
environmental s. In the first window 701, an object 702a, shown for simplicity as a sphere
sits with a light source 703 from the upper left of window 701. In the second window 704, the
environment of the 3D scene includes a light source 705 from the upper right. During the grab
process, the engine obtains the environment effects and ng of both windows 701 and 704,
and vely applies the environmental effects and lighting to the selected object 702a
ing on the distance of the object from each window. Thus, as the object 702a is dragged
towards the second window, the shading of the object 702a changes depending on the distance
from the light sources 703 and 705, as shown by representations 702b, 702c, 702d, and 702e.
The engine renders these nmental effects by ng them to the draw commands for the
object before g them to the rendering API. Environmental effects are not limited to merely
lighting, but, as one skilled in the art can on, can apply to fog, smoke, blurring, particle
effects, reflections, and other well-known environmental effects.
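The distance-weighted blending just described can be illustrated with a short C++ sketch; the linear weighting and the simplified light representation are assumptions made for illustration and are not the disclosed algorithm.

#include <cstdio>

struct Light { float r, g, b; };             // simplified light colour (hypothetical)

// Blend the first and second windows' lighting according to how far the dragged
// object is from each window: t = 0 means fully window 1, t = 1 means fully window 2.
Light blendLighting(const Light& first, const Light& second,
                    float distToFirst, float distToSecond) {
    float t = distToFirst / (distToFirst + distToSecond);   // linear weighting (assumption)
    return { first.r + (second.r - first.r) * t,
             first.g + (second.g - first.g) * t,
             first.b + (second.b - first.b) * t };
}

int main() {
    Light upperLeft{1.0f, 0.9f, 0.8f};       // stand-in for light source 703 in window 701
    Light upperRight{0.7f, 0.8f, 1.0f};      // stand-in for light source 705 in window 704
    for (float d = 0.0f; d <= 1.0f; d += 0.25f) {
        Light l = blendLighting(upperLeft, upperRight, d, 1.0f - d);
        std::printf("t=%.2f  light=(%.2f, %.2f, %.2f)\n", d, l.r, l.g, l.b);
    }
}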
FIGURE 8 depicts an embodiment of a computing system 801 wherein one of the two applications is a remote application 810 running on an application server 809. In such a case, the operation of the engine 805 does not vary. The engine intercepts instructions from the I/O interface 807 and detours the instructions to the engine along path 811 during the operation of the drag and drop process. Assuming the user is dragging from the local application 806 and dropping to the remote application 810, draw commands from the local application 806 to the rendering API 804 are intercepted and used during the grab process for the user to select the desired objects. During the drag process, the engine 805 performs the rendering of the object in the borderless window by detouring the draw commands for the selection to the rendering API 804. When the user drops the object into the window of the remote application 810, the engine 805 begins the conversion process while continuing to render the selected object. Upon completing the conversion, the converted object file is transferred over network link 814 to application server 809 for importation into remote application 810. After importation, the engine 805 ceases to pass draw commands to the rendering API 804 and the system operates as normal. In another embodiment, the user drags an object from a remote application to a locally hosted application. The system operates by a substantially similar mechanism in this arrangement.
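As a sketch only, the converted object file could be split into fixed-size chunks for transfer over the network link to the application server; the chunk size and packet representation below are illustrative assumptions, and the actual network transport is outside the scope of this example.

#include <cstdio>
#include <string>
#include <vector>

// Splits the converted object file's bytes into fixed-size chunks suitable for
// queuing on a network link; the chunk size is an arbitrary illustrative choice.
std::vector<std::string> chunkForTransfer(const std::string& fileBytes, std::size_t chunkSize = 1400) {
    std::vector<std::string> chunks;
    for (std::size_t off = 0; off < fileBytes.size(); off += chunkSize)
        chunks.push_back(fileBytes.substr(off, chunkSize));
    return chunks;
}

int main() {
    std::string objFile(4000, 'v');          // stand-in for the converted OBJ file's contents
    std::vector<std::string> packets = chunkForTransfer(objFile);
    std::printf("%zu chunks queued for the application server\n", packets.size());
}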
FIGURE 9 depicts an embodiment of a computing system 901 wherein one of the applications 908 is a remote application run on application server 907 that has server-side rendering through its own rendering API 909. As an example, the user drags an object from the window of the local application 905 to the window of the remotely rendered application 908. The operation of the system 901 is substantially the same as in FIGURE 8. The engine 910 intercepts I/O inputs from I/O interface 906 and detours them along path 914 for the duration of the drag and drop process. During the grab process, the engine 910 detours draw commands from the local application 905 destined for the local rendering API 904 to the engine. After the user selects the object, the engine 910 detours the commands to the local rendering API 904 along path 915. During the drag process, the detoured draw commands for the selected object are passed by the engine 910 to the local rendering API 904 to render the object in the borderless window. Upon initiation of the re-entry process, the engine begins file conversion of the object into an object file for importation into remote application 908. When the file is converted, the file is imported into the remote application 908 through path 916. Then the engine stops rendering the object through local rendering API 904, and the object is exclusively rendered through remote rendering API 909.
A special case exists for the embodiment where a user wishes to select an object from a remote application with server-side rendering, such as application 908. In such an embodiment, the engine must have access to the output of remote application 908 before it enters the remote rendering API 909. This must be a special implementation requiring software residing on remote application server 907 or, at a bare minimum, permission from the server 907 for engine 910 to monitor the path between the remote application 908 and rendering API 909. In such a case, the draw commands from application 908 are detoured over a network connection 913 to the engine 910. This special case arises only when grabbing objects from remote applications with server-side rendering.
The disclosure envisions multiple arrangements, such as dragging from one remote application to another, or variations of copying and pasting an object from one application to multiple other applications. Such embodiments should be readily contemplated by those of ordinary skill in the art. Although the disclosure describes a single instance of dragging and dropping from a first application to a second application, those skilled in the art can envision dragging from a first application to a second application, editing the object, and dragging the edited object back into the first application to view the changes in real time.
Particular embodiments may be implemented as hardware, software, or a combination of hardware and software. For example and without limitation, one or more computer systems may execute particular logic or software to perform one or more steps of one or more processes described or illustrated herein. One or more of the computer systems may be unitary or distributed, spanning multiple computer systems or multiple datacenters, where appropriate. The present disclosure contemplates any suitable computer system. In particular embodiments, performing one or more steps of one or more processes described or illustrated herein need not necessarily be limited to one or more particular geographic locations and need not necessarily have temporal limitations. As an example and not by way of limitation, one or more computer systems may carry out their functions in "real time," "offline," in "batch mode," otherwise, or in a suitable combination of the foregoing, where appropriate. One or more of the computer systems may carry out one or more portions of their functions at different times, at different locations, using different processing, where appropriate. Herein, reference to logic may encompass software, and vice versa, where appropriate. Reference to software may encompass one or more computer programs, and vice versa, where appropriate. Reference to software may encompass data, instructions, or both, and vice versa, where appropriate. Similarly, reference to data may encompass instructions, and vice versa, where appropriate.
One or more computer-readable storage media may store or otherwise embody software implementing particular embodiments. A computer-readable medium may be any medium capable of carrying, communicating, containing, holding, maintaining, propagating, retaining, storing, transmitting, transporting, or otherwise embodying software, where appropriate. A computer-readable medium may be a biological, chemical, electronic, electromagnetic, infrared, magnetic, optical, quantum, or other suitable medium or a combination of two or more such media, where appropriate. A computer-readable medium may include one or more nanometer-scale components or otherwise embody nanometer-scale design or fabrication. Example computer-readable storage media include, but are not limited to, compact discs (CDs), field-programmable gate arrays (FPGAs), floppy disks, floptical disks, hard disks, holographic storage devices, integrated circuits (ICs) (such as application-specific integrated circuits (ASICs)), magnetic tape, caches, programmable logic devices (PLDs), random-access memory (RAM) devices, read-only memory (ROM) devices, semiconductor memory devices, and other suitable computer-readable storage media.
Software implementing particular embodiments may be written in any suitable programming language (which may be procedural or object oriented) or combination of programming languages, where appropriate. Any suitable type of computer system (such as a single- or multiple-processor computer system) or systems may execute software implementing particular embodiments, where appropriate. A general-purpose computer system may execute software implementing particular embodiments, where appropriate.
For example, FIGURE 10 illustrates an example computer system 1000 suitable
for implementing one or more portions of particular embodiments. Although the present
disclosure describes and illustrates a particular computer system 1000 having particular
components in a particular configuration, the present disclosure contemplates any suitable
computer system having any suitable components in any suitable configuration. Moreover, computer system 1000 may take any suitable physical form, such as for example one or more integrated circuits (ICs), one or more printed circuit boards (PCBs), one or more handheld or other devices (such as mobile telephones or PDAs), one or more personal computers, or one or more supercomputers.
System bus 1010 couples subsystems of computer system 1000 to each other.
Herein, reference to a bus encompasses one or more digital signal lines serving a common function. The present disclosure contemplates any suitable system bus 1010 including any suitable bus structures (such as one or more memory buses, one or more peripheral buses, one or more local buses, or a combination of the foregoing) having any suitable bus architectures. Example bus architectures include, but are not limited to, Industry Standard Architecture (ISA) bus, Enhanced ISA (EISA) bus, Micro Channel Architecture (MCA) bus, Video Electronics Standards Association local (VLB) bus, Peripheral Component Interconnect (PCI) bus, PCI-Express (PCIe) bus, and Accelerated Graphics Port (AGP) bus.
Computer system 1000 includes one or more processors 1020 (or central processing units (CPUs)). A processor 1020 may contain a cache 1022 for temporary local storage of instructions, data, or computer addresses. Processors 1020 are coupled to one or more storage devices, including memory 1030. Memory 1030 may include random access memory (RAM) 1032 and read-only memory (ROM) 1034. Data and instructions may transfer bi-directionally between processors 1020 and RAM 1032. Data and instructions may transfer uni-directionally to processors 1020 from ROM 1034. RAM 1032 and ROM 1034 may include any suitable computer-readable storage media.
Computer system 1000 includes fixed storage 1040 coupled bi-directionally to processors 1020. Fixed storage 1040 may be coupled to processors 1020 via storage control unit 10102. Fixed storage 1040 may provide additional data storage capacity and may include any suitable computer-readable storage media. Fixed storage 1040 may store an operating system (OS) 1042, one or more executables 1044, one or more applications or programs 1046, data 1048, and the like. Fixed storage 1040 is typically a secondary storage medium (such as a hard disk) that is slower than primary storage. In appropriate cases, the information stored by fixed storage 1040 may be incorporated as virtual memory into memory 1030.
Processors 1020 may be coupled to a variety of interfaces, such as, for example, graphics control 10104, video interface 10108, input interface 1060, output interface 1062, and storage interface 1064, which in turn may be respectively coupled to appropriate devices. Example input or output devices include, but are not limited to, video displays, track balls, mice, keyboards, microphones, touch-sensitive displays, transducer card readers, magnetic or paper tape readers, tablets, styli, voice or handwriting recognizers, biometrics readers, or other computer systems. Network interface 10106 may couple processors 1020 to another computer system or to network 1080. With network interface 10106, processors 1020 may receive or send information from or to network 1080 in the course of performing steps of particular embodiments. Particular embodiments may execute solely on processors 1020. Particular embodiments may execute on processors 1020 and on one or more remote processors operating together.
In a network environment, where computer system 1000 is connected to network 1080, computer system 1000 may communicate with other devices connected to network 1080. Computer system 1000 may communicate with network 1080 via network interface 10106. For example, computer system 1000 may receive information (such as a request or a response from another device) from network 1080 in the form of one or more incoming packets at network interface 10106, and memory 1030 may store the incoming packets for subsequent processing. Computer system 1000 may send information (such as a request or a response to another device) to network 1080 in the form of one or more outgoing packets from network interface 10106, which memory 1030 may store prior to being sent. Processors 1020 may access an incoming or outgoing packet in memory 1030 to process it, according to particular needs.
Computer system 1000 may have one or more input devices 1066 (which may include a keypad, keyboard, mouse, stylus, etc.), one or more output devices 1068 (which may include one or more displays, one or more speakers, one or more printers, etc.), one or more storage devices 1070, and one or more storage media 1072. An input device 1066 may be external or internal to computer system 1000. An output device 1068 may be external or internal to computer system 1000. A storage device 1070 may be external or internal to computer system 1000. A storage medium 1072 may be external or internal to computer system 1000.
Particular embodiments involve one or more computer-storage products that include one or more computer-readable storage media that embody software for performing one or more steps of one or more processes described or illustrated herein. In particular embodiments, one or more portions of the media, the software, or both may be designed and manufactured specifically to perform one or more steps of one or more processes described or illustrated herein. In addition or as an alternative, in particular embodiments, one or more portions of the media, the software, or both may be generally available without design or manufacture specific to processes described or illustrated herein. Example computer-readable storage media include, but are not limited to, CDs (such as CD-ROMs), FPGAs, floppy disks, floptical disks, hard disks, holographic storage devices, ICs (such as ASICs), magnetic tape, caches, PLDs, RAM devices, ROM devices, semiconductor memory devices, and other suitable computer-readable storage media. In particular embodiments, software may be machine code which a compiler may generate or one or more files containing higher-level code which a computer may execute using an interpreter.
As an example and not by way of limitation, memory 1030 may include one or more computer-readable storage media embodying software, and computer system 1000 may provide particular functionality described or illustrated herein as a result of processors 1020 executing the software. Memory 1030 may store and processors 1020 may execute the software. Memory 1030 may read the software from the computer-readable storage media in mass storage device 1040 embodying the software or from one or more other sources via network interface 10106. When executing the software, processors 1020 may perform one or more steps of one or more processes described or illustrated herein, which may include defining one or more data structures for storage in memory 1030 and modifying one or more of the data structures as directed by one or more portions of the software, according to particular needs. In addition or as an alternative, computer system 1000 may provide particular functionality described or illustrated herein as a result of logic hardwired or otherwise embodied in a circuit, which may operate in place of or together with software to perform one or more steps of one or more processes described or illustrated herein. The present disclosure encompasses any suitable combination of hardware and software, according to particular needs.
In particular embodiments, computer system 1000 may include one or more Graphics Processing Units (GPUs) 1024. In particular embodiments, GPU 1024 may comprise one or more integrated circuits and/or processing cores that are directed to mathematical operations commonly used in graphics rendering. In some embodiments, the GPU 1024 may use a special graphics unit instruction set, while in other implementations, the GPU may use a CPU-like (e.g. a modified x86) instruction set. Graphics processing unit 1024 may implement a number of graphics primitive operations, such as blitting, texture mapping, pixel shading, frame buffering, and the like. In particular embodiments, GPU 1024 may be a graphics accelerator, a General Purpose GPU (GPGPU), or any other suitable processing unit.
In particular embodiments, GPU 1024 may be embodied in a graphics or display
card that attaches to the hardware system architecture via a card slot. In other implementations,
GPU 1024 may be integrated on the motherboard of the computer system architecture. Suitable graphics processing units may include Advanced Micro Devices(r) AMD R7XX based GPU devices (Radeon(r) HD 4XXX), AMD R8XX based GPU devices (Radeon(r) HD 10XXX), Intel(r) Larabee based GPU devices (yet to be released), nVidia(r) 8 series GPUs, nVidia(r) 9 series GPUs, nVidia(r) 100 series GPUs, nVidia(r) 200 series GPUs, and any other DX11-capable GPUs.
Although the present disclosure describes or illustrates particular operations as occurring in a particular order, the present disclosure contemplates any suitable operations occurring in any suitable order. Moreover, the present disclosure contemplates any suitable operations being repeated one or more times in any suitable order. Although the present disclosure describes or illustrates particular operations as occurring in sequence, the present disclosure contemplates any suitable operations occurring at substantially the same time, where appropriate. Any suitable operation or sequence of operations described or illustrated herein may be interrupted, suspended, or otherwise controlled by another process, such as an operating system or kernel, where appropriate. The acts can operate in an operating system environment or as stand-alone routines occupying all or a substantial part of the system processing.
The present disclosure encompasses all changes, substitutions, variations,
alterations, and modifications to the example embodiments herein that a person having ordinary
skill in the art would comprehend. Similarly, where appropriate, the appended claims encompass all changes, substitutions, variations, alterations, and modifications to the example embodiments herein that a person having ordinary skill in the art would comprehend.
For the purposes of this disclosure a computer readable medium stores computer
data, which data can include computer program code that is executable by a computer, in
machine readable form. By way of example, and not limitation, a computer readable medium
may comprise computer readable storage media, for tangible or fixed storage of data, or
communication media for transient interpretation of code-containing signals. Computer readable
storage media, as used herein, refers to physical or tangible storage (as opposed to signals) and
includes without limitation volatile and non-volatile, removable and non-removable media implemented in any method or technology for the tangible storage of information such as computer-readable instructions, data structures, program modules or other data. Computer readable storage media includes, but is not limited to, RAM, ROM, EPROM, EEPROM, flash memory or other solid state memory technology, CD-ROM, DVD, or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other physical or material medium which can be used to tangibly store the desired information or data or instructions and which can be accessed by a computer or processor.
Claims (19)
1. A method for importing a graphical object into an application comprising: receiving, by a processor, a first user input; responsive to the first user input, selecting, by the processor, an object rendered in a first window of a display by a first application and a rendering API (Application Programming Interface); extracting, by the processor, the object from the first application via an engine that monitors received user inputs; receiving, by the processor, a second user input for dragging the object on the display from the first window to a second application rendered in a second window; responsive to the second user input to drag the object from the first window to the second window: rendering, by the processor, a borderless window; rendering, by the processor, a selection in the borderless window, wherein the selection comprises the object selected by the user; and moving, by the processor, the borderless window comprising the selection across the display from the first window to the second window pursuant to the second user input; and responsive to the object crossing a focus border of the second window, importing, by the processor, the object into the second application.
2. The method of claim 1, wherein selecting an object comprises: detouring, by the processor, the first user input to the engine; intercepting, by the processor, draw commands from the first application to the rendering API; determining, by the processor, the object from the draw commands; and selecting, by the processor, the object and other objects in accordance with a selection algorithm.
3. The method of claim 2, wherein determining, by the processor, the object comprises: assigning, by the processor, a camera on a near plane of a scene at coordinates of the first user input; ray casting, by the processor, from the camera to a far plane; and selecting, by the processor, the first object the ray hits.
4. The method of claim 3, further comprising: receiving, by the processor, further user input to expand or filter the selection.
5. The method of claim 4, wherein expanding or filtering the selection comprises: selecting or deselecting, by the processor, other objects in a scene connected to the selected object or objects.
6. The method of claim 4, wherein expanding or filtering the selection comprises: selecting or deselecting, by the processor, other objects in a scene designated by the further user input, wherein the designation process comprises: receiving, by the processor, other user input for one of an object selection or deselection; assigning, by the processor, another camera on the near plane of the scene at the coordinates of the other user input; and ray casting, by the processor, from the camera to the far plane and designating the first object the ray hits.
7. The method of claim 1, wherein rendering, by the processor, the selection in the borderless window comprises: copying, by the processor, draw commands associated with the selection from the first application; inserting, by the processor, the draw commands from the first application in a pipeline of a rendering API; and rendering, by the processor, the draw commands via the rendering API.
8. The method of claim 1, wherein rendering the selection in the borderless window comprises: obtaining, by the processor, first conditions, comprising lighting and environmental effects from the first application; obtaining, by the processor, second conditions, comprising lighting and environmental effects from the second application; and gradually applying, by the processor, the first and second conditions depending on a distance of the borderless window from the first and second windows.
9. The method of claim 1, wherein importing the selection to a second application comprises: converting, by the processor, the selection for implementation into the second application; rendering, by the processor, the selection via the engine in the second window during the conversion; upon completion of the conversion, importing, by the processor, the selection into the second application; and upon importing the object into the second application, halting, by the processor, the engine rendering process and rendering, by the processor, the object from within the second application.
10. The method of claim 9, wherein converting the selection comprises: modifying, by the processor, the draw commands into a file format supported by the second application.
11. The method of claim 9, wherein rendering the selection via the engine comprises: inserting, by the processor, draw commands into a rendering API pipeline operable to instruct the rendering API to render the selection into the second window.
12. The method of claim 10, wherein the second application has its own rendering API, and rendering the selection from within the second application comprises rendering, by the processor, the selection in the second window using the second application's rendering API.
13. A system, comprising: a graphics processing unit; a processor; and a storage medium for tangibly storing thereon processor-executable program logic, the program logic comprising: first user input receiving logic, executed by the processor, to receive a first user input; selecting logic, executed by the processor, to select an object rendered in a first window of a display by a first application and a rendering API in response to receiving the first user input; extracting logic, executed by the processor, to extract the object from the first application via an engine that monitors received user inputs; second user input receiving logic, executed by the processor, to receive a second user input; dragging logic, executed by the processor, to drag the object on the display from the first window to a second application rendered in a second window in response to receiving the second user input, the dragging logic further comprising: window rendering logic, executed by the processor, to render a borderless window; selection rendering logic, executed by the processor, to render the selection in the borderless window, wherein the selection comprises the object selected by the user; moving logic, executed by the processor, to move the borderless window across the display from the first window to the second window pursuant to the second user input in response to receiving the second user input to drag the borderless window from the first window to the second window; and in response to the object crossing a focus border of the second window, importing logic, executed by the processor, to import the object into the second application.
14. The system of claim 13, wherein the selecting logic, executed by the processor, to select an object comprises: detouring logic, executed by the processor, to detour the first user input from the first application; intercepting logic, executed by the processor, to intercept draw commands from the first application to the rendering API; determining logic, executed by the processor, to determine the object from the draw commands associated with the first user input; and selecting logic, executed by the processor, to select the object and other objects in accordance with a selection algorithm.
15. The system of claim 13, wherein determining, by the processor, the object comprises: assigning logic, executed by the processor, to assign a camera on a near plane of a scene at coordinates of the first user input; and ray casting logic, executed by the processor, for ray casting from the camera to a far plane and selecting the first object the ray hits.
16. The system of claim 13, wherein the importing logic further comprises: converting logic, executed by the processor, for converting the selection for implementation into the second application such that the selection is imported into the second application upon completion of the conversion; rendering logic, executed by the processor, for rendering the selection in the second window during the conversion process; and halting logic, executed by the processor, for halting the engine rendering process and rendering the object from within the second application upon importing the object into the second application.
17. The system of claim 13, wherein the selection rendering logic further comprises: first condition obtaining logic, executed by the processor, to obtain first conditions, comprising lighting and environmental effects from the first application; second condition obtaining logic, executed by the processor, to obtain second conditions, comprising lighting and environmental effects from the second application; and conditions applying logic, executed by the processor, to gradually apply the first and second conditions depending on the distance of the borderless window from the first and second windows.
18. A non-transitory computer readable storage medium, having stored thereon, processor-executable instructions for: receiving a first user input; responsive to the first user input, selecting an object rendered in a first window of a display by a first application and a rendering API; extracting the object from the first application via an engine; receiving a second user input for dragging the 3D object on the display from the first window to a second application rendered in a second window; responsive to the second user input: rendering a borderless window; rendering the selection in the borderless window, wherein the selection comprises the object selected by the user; and moving the borderless window comprising the selection across the display from the first window to the second window pursuant to the second user input; and responsive to the object crossing a focus border of the second window, importing the object into the second application.
19. The computer readable storage medium of claim 18, wherein instructions for importing the object into the second application further comprise instructions for: receiving a user gesture for importing the object into the second application responsive to the borderless window comprising the object crossing a focus border of the second window.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
NZ709107A NZ709107B2 (en) | 2011-08-12 | 2012-08-10 | Drag and drop of objects between applications |
Applications Claiming Priority (5)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201161523142P | 2011-08-12 | 2011-08-12 | |
US61/523,142 | 2011-08-12 | ||
US13/571,182 US10162491B2 (en) | 2011-08-12 | 2012-08-09 | Drag and drop of objects between applications |
US13/571,182 | 2012-08-09 | ||
PCT/US2012/050381 WO2013025521A2 (en) | 2011-08-12 | 2012-08-10 | Drag and drop of objects between applications |
Publications (2)
Publication Number | Publication Date |
---|---|
NZ619935A NZ619935A (en) | 2015-07-31 |
NZ619935B2 true NZ619935B2 (en) | 2015-11-03 |
Family
ID=
Similar Documents
Publication | Publication Date | Title |
---|---|---|
AU2012295303B2 (en) | Drag and drop of objects between applications | |
KR102646977B1 (en) | Display method and device based on augmented reality, and storage medium | |
US8984428B2 (en) | Overlay images and texts in user interface | |
US9240070B2 (en) | Methods and systems for viewing dynamic high-resolution 3D imagery over a network | |
US8456467B1 (en) | Embeddable three-dimensional (3D) image viewer | |
US9183672B1 (en) | Embeddable three-dimensional (3D) image viewer | |
US11443490B2 (en) | Snapping, virtual inking, and accessibility in augmented reality | |
KR102590102B1 (en) | Augmented reality-based display method, device, and storage medium | |
CN110515657B (en) | Indirect command buffer for graphics processing | |
MXPA06012368A (en) | Integration of three dimensional scene hierarchy into two dimensional compositing system. | |
US11475636B2 (en) | Augmented reality and virtual reality engine for virtual desktop infrastucture | |
Rumiński et al. | Creation of interactive AR content on mobile devices | |
KR102367640B1 (en) | Systems and methods for the creation and display of interactive 3D representations of real objects | |
AU2015200570B2 (en) | Drag and drop of objects between applications | |
NZ619935B2 (en) | Drag and drop of objects between applications | |
NZ709107B2 (en) | Drag and drop of objects between applications | |
Gentile et al. | A Multimodal Fruition Model for Graphical Contents in Ancient Books | |
Hestman | The potential of utilizing bim models with the webgl technology for building virtual environments-a web-based prototype within the virtual hospital field | |
US20240098213A1 (en) | Modifying digital content transmitted to devices in real time via processing circuitry | |
Fahim | A motion capture system based on natural interaction devices | |
Van den Bergh et al. | A Novel Camera-based System for Collaborative Interaction with Mult-dimensional Data Models | |
Wu et al. | A Motion-Driven System for Performing Art |