WO2022056499A1 - 3D rendering and animation support for UI controls - Google Patents


Info

Publication number
WO2022056499A1
WO2022056499A1 (PCT/US2021/054844)
Authority
WO
WIPO (PCT)
Prior art keywords
inputs
manager
objects
computing system
geometry
Application number
PCT/US2021/054844
Other languages
French (fr)
Inventor
Zhan Yu
Xiaofeng Li
Yiwei ZHAO
Original Assignee
Innopeak Technology, Inc.
Application filed by Innopeak Technology, Inc. filed Critical Innopeak Technology, Inc.
Priority to PCT/US2021/054844 priority Critical patent/WO2022056499A1/en
Publication of WO2022056499A1 publication Critical patent/WO2022056499A1/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F3/0482Interaction with lists of selectable items, e.g. menus
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F3/04815Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user viewpoint with respect to the environment or object
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T13/00Animation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/003D [Three Dimensional] image rendering

Definitions

  • the present disclosure relates, in general, to methods, systems, and apparatuses for implementing image rendering, and, more particularly, to methods, systems, and apparatuses for implementing three-dimensional (“3D”) and animation support for user interface (“UI”) controls.
  • UI frameworks provide very limited 3D rendering and animation support.
  • Existing UI frameworks (such as UIKit, Flutter®, etc.) are mostly two-dimensional ("2D") UI frameworks and only support limited 3D transformation and animation.
  • UI frameworks in popular game engines (such as Unity® and Unreal®, etc.) provide a very limited range of UI views. Although they support full 3D transformation and animation, they are embedded in a heavyweight graphics processing unit ("GPU") pipeline (i.e., requiring greater memory and/or other system resources, or the like), and hence are not suitable for use in system-level UIs.
  • a method may comprise receiving, using a computing system, UI layout data of a first UI from a UI framework; generating, using the computing system, a first 3D UI, by parsing 2D layout of the first UI, based at least in part on the UI layout data; rendering, using the computing system, the generated first 3D UI; and causing, using the computing system, the rendered first 3D UI to be displayed within a display screen of a user device associated with a user.
  • an apparatus might comprise at least one processor and a non-transitory computer readable medium communicatively coupled to the at least one processor.
  • the non-transitory computer readable medium might have stored thereon computer software comprising a set of instructions that, when executed by the at least one processor, causes the apparatus to: receive UI layout data of a first UI from a UI framework; generate a first 3D UI, by parsing 2D layout of the first UI, based at least in part on the UI layout data; render the generated first 3D UI; and cause the rendered first 3D UI to be displayed within a display screen of a user device associated with a user.
  • a system might comprise a computing system, which might comprise a 3D scene manager, a 3D rendering engine, at least one first processor, and a first non-transitory computer readable medium communicatively coupled to the at least one first processor.
  • the first non-transitory computer readable medium might have stored thereon computer software comprising a first set of instructions that, when executed by the at least one first processor, causes the computing system to: receive, using the 3D scene manager, UI layout data of a first UI from a UI framework; generate, using the 3D scene manager, a first 3D UI, by parsing 2D layout of the first UI, based at least in part on the UI layout data; render, using the 3D rendering engine, the generated first 3D UI; and cause the rendered first 3D UI to be displayed within a display screen of a user device associated with a user.
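  • By way of a hedged illustration only (the class and function names below, such as UIObject2D and parse_2d_layout, are assumptions rather than the disclosed API), the overall flow — receive UI layout data, parse the 2D layout into a first 3D UI, render it, and hand the result to the display — might be sketched as follows:

```python
# Illustrative sketch only: UIObject2D, UIObject3D, parse_2d_layout, and render
# are assumed names, not the disclosed API.
from dataclasses import dataclass
from typing import List

@dataclass
class UIObject2D:
    name: str
    x: float
    y: float
    width: float
    height: float

@dataclass
class UIObject3D:
    name: str
    x: float
    y: float
    z: float              # depth added when the 2D layout is lifted into 3D
    width: float
    height: float

def parse_2d_layout(layout: List[UIObject2D], z_plane: float = 0.0) -> List[UIObject3D]:
    """Generate a first 3D UI by parsing the 2D layout received from the UI framework."""
    return [UIObject3D(o.name, o.x, o.y, z_plane, o.width, o.height) for o in layout]

def render(scene: List[UIObject3D]) -> str:
    """Stand-in for the 3D rendering engine: produce a frame for the display screen."""
    return f"rendered frame with {len(scene)} 3D UI objects"

ui_layout_data = [UIObject2D("button", 0, 0, 100, 40), UIObject2D("list", 0, 50, 100, 300)]
print(render(parse_2d_layout(ui_layout_data)))   # displayed on the user device
```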
  • Fig. 1 is a schematic diagram illustrating a system for implementing three- dimensional (“3D”) and animation support for user interface (“UI”) controls, in accordance with various embodiments.
  • FIG. 2 is a schematic block flow diagram illustrating a non- limiting example of a method for implementing 3D and animation support for UI controls, in accordance with various embodiments.
  • Figs. 3A-3G are schematic diagrams illustrating various non-limiting examples of 3D transformations and/or 3D animations that may be utilized for rendering 3D UI during implementation of 3D and animation support for UI controls, in accordance with various embodiments.
  • FIGs. 4A-4E are flow diagrams illustrating a method for implementing 3D and animation support for UI controls, in accordance with various embodiments.
  • FIG. 5 is a block diagram illustrating an example of computer or system hardware architecture, in accordance with various embodiments.
  • Fig. 6 is a block diagram illustrating a networked system of computers, computing systems, or system hardware architecture, which can be used in accordance with various embodiments.
  • Various embodiments provide tools and techniques for implementing image rendering and, more particularly, methods, systems, and apparatuses for implementing three-dimensional ("3D") and animation support for user interface ("UI") controls.
  • a computing system may receive UI layout data of a first UI from a UI framework; may generate a first 3D UI, by parsing two-dimensional ("2D") layout of the first UI, based at least in part on the UI layout data; may render the generated first 3D UI; and may cause the rendered first 3D UI to be displayed within a display screen of a user device associated with a user.
  • the computing system may comprise at least one of a 3D UI renderer, a machine learning system, an artificial intelligence ("AI") system, a deep learning system, a neural network, a processor on the user device, one or more graphics processing units ("GPUs"), a server computer over a network, a cloud computing system, or a distributed computing system, and/or the like.
  • the computing system may comprise a 3D scene manager and a 3D rendering engine, wherein the 3D scene manager may receive the UI layout data and may generate the first 3D UI, and wherein the 3D rendering engine renders the generated first 3D UI.
  • a user input processor of a 3D scene manager of the computing system may receive one or more user inputs, the one or more user inputs comprising at least one of one or more selection inputs, one or more translation inputs, one or more rotation inputs, one or more tilt inputs, one or more zoom inputs, one or more swipe inputs, one or more dragging inputs, one or more gesture inputs, one or more tracing inputs, one or more camera translation inputs, one or more rotation inputs, one or more accelerometer inputs, or one or more gyroscope inputs, and/or the like.
  • generating the first 3D UI may comprise generating, using the computing system, a first 3D UI, by parsing 2D layout of the first UI, based at least in part on the UI layout data and based at least in part on the one or more user inputs.
  • At least one of a 3D scene geometry manager, a material manager, a camera manager, or a light manager, and/or the like, of a 3D scene manager of the computing system may determine corresponding at least one of UI geometry, material, camera angle, or lighting of one or more UI objects within the first UI, based at least in part on the UI layout data.
  • generating the first 3D UI may comprise generating, using the computing system, a first 3D UI, by parsing two-dimensional ("2D") layout of the first UI, based at least in part on the UI layout data and based at least in part on the determined at least one of UI geometry, material, camera angle, or lighting of the one or more UI objects within the first UI.
  • rendering the generated first 3D UI may comprise rendering, using the computing system, the generated first 3D UI, by using the determined at least one of UI geometry, material, camera angle, or lighting of the one or more UI objects within the first UI to construct at least one UI-specific 3D render pipeline and by using the constructed at least one UI-specific 3D render pipeline to render the generated first 3D UI.
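  • As a non-authoritative sketch of what constructing such a UI-specific render pipeline could look like (the pass names and dictionary fields below are assumptions introduced for illustration), the determined geometry, material, camera angle, and lighting might feed an ordered list of render passes:

```python
# Assumed sketch: the determined UI geometry, material, camera angle, and lighting
# drive construction of an ordered, UI-specific list of render passes.
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class RenderPass:
    name: str
    run: Callable[[], None]

def construct_ui_render_pipeline(geometry: List[str], material: Dict, camera: Dict,
                                 light: Dict) -> List[RenderPass]:
    """Build render passes tailored to this particular UI's content."""
    passes = [RenderPass("geometry", lambda: print(f"upload {len(geometry)} UI meshes"))]
    if material.get("textured"):
        passes.append(RenderPass("material", lambda: print("bind material textures")))
    passes.append(RenderPass("camera", lambda: print(f"view from {camera['angle']} degrees")))
    if light.get("enabled"):
        passes.append(RenderPass("lighting", lambda: print("apply scene lights")))
    return passes

pipeline = construct_ui_render_pipeline(
    geometry=["button_mesh", "icon_mesh"],
    material={"textured": True},
    camera={"angle": 30.0},
    light={"enabled": True},
)
for render_pass in pipeline:   # render the generated 3D UI with the constructed pipeline
    render_pass.run()
```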
  • a hardware input processor of a 3D scene manager of the computing system may receive one or more hardware inputs, the one or more hardware inputs comprising at least one of one or more accelerometer inputs, one or more gyroscope inputs, one or more light sensor inputs, or one or more hardware-based user inputs, wherein the one or more hardware-based user inputs comprise at least one of one or more finger gesture inputs, one or more facial image capture inputs, one or more eye image capture inputs, or one or more stylus inputs, and/or the like. At least one of a camera manager, a light manager, or a 3D scene geometry manager of the 3D scene manager of the computing system may then update corresponding at least one of camera angle, lighting, or scene geometry of one or more UI objects within the first UI, based at least in part on corresponding at least one of a combination of the one or more accelerometer inputs or the one or more gyroscope inputs, the one or more light sensor inputs, or the one or more hardware-based user inputs.
  • generating the first 3D UI may comprise generating, using an animation manager of the computing system, a first 3D UI, by using at least one of predefined 3D animation control or physics-based 3D animation control to animate 3D UI geometries in the first UI.
  • At least one of a dynamic geometry sorting system or a dynamic geometry subdivision system of a 3D rendering engine of the computing system may perform corresponding at least one of dynamic sorting or dynamic subdivision of the at least one first UI object and the at least one second UI object to render correct perspectives of the at least one first UI object and the at least one second UI object.
  • performing the corresponding at least one of dynamic sorting or dynamic subdivision of the at least one first UI object and the at least one second UI object may comprise: gathering, using the 3D rendering engine, UI geometries of UI objects among the at least one first UI object and the at least one second UI object; computing, using the 3D rendering engine, a bounding box for each UI object among the UI objects; sorting, using the dynamic geometry sorting system, the UI objects based on the computed bounding box for each UI object; and based on a determination that two or more bounding boxes overlap, subdividing, using the dynamic geometry subdivision system, UI geometries of overlapping UI objects corresponding to the two or more bounding boxes, and recomputing, using the 3D rendering engine, bounding boxes for each of the overlapping UI objects; and/or the like.
  • rendering the generated first 3D UI may comprise rendering, using the computing system, the generated first 3D UI, based at least in part on one of the computed bounding box or the recomputed bounding box of each UI object.
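  • A minimal, simplified sketch of this sorting-and-subdivision step is given below, assuming one-dimensional depth (z) bounding intervals in place of full 3D bounding boxes; helper names such as subdivide_at are illustrative only:

```python
# Simplified sketch of sorting by bounding box and subdividing overlapping UI
# geometries; uses 1D depth intervals instead of full 3D bounding boxes.
from dataclasses import dataclass
from typing import List

@dataclass
class UIGeometry:
    name: str
    z_min: float
    z_max: float

def overlaps(a: UIGeometry, b: UIGeometry) -> bool:
    return a.z_min < b.z_max and b.z_min < a.z_max

def subdivide_at(g: UIGeometry, z: float) -> List[UIGeometry]:
    """Split one UI geometry at depth z and recompute the two child bounding boxes."""
    return [UIGeometry(g.name + ".near", g.z_min, z), UIGeometry(g.name + ".far", z, g.z_max)]

def sort_for_transparency(objects: List[UIGeometry]) -> List[UIGeometry]:
    """Sort UI geometries by bounding box; subdivide and re-sort while boxes overlap."""
    queue = sorted(objects, key=lambda g: g.z_min)
    changed = True
    while changed:
        changed = False
        for i in range(len(queue) - 1):
            a, b = queue[i], queue[i + 1]
            if not overlaps(a, b):
                continue
            if a.z_min < b.z_min:            # 'a' starts nearer: split it where 'b' begins
                queue[i:i + 1] = subdivide_at(a, b.z_min)
            elif a.z_max > b.z_max:          # same start but 'a' extends farther: split at b.z_max
                queue[i:i + 1] = subdivide_at(a, b.z_max)
            else:
                continue                     # identical extent: ordering is already resolved
            queue.sort(key=lambda g: g.z_min)
            changed = True
            break
    return queue

scene = [UIGeometry("panel", 0.0, 2.0), UIGeometry("button", 1.0, 1.5)]
print([g.name for g in sort_for_transparency(scene)])
```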
  • the first 3D UI may comprise a plurality of UI objects, wherein two or more UI objects among the plurality of UI objects may have different z-axes or z-planes, wherein generating and rendering the first 3D UI may be based on at least one of orthographic projections or perspective projections of the two or more UI objects about the different z-axes or z-planes.
  • the first 3D UI may comprise a second 3D UI that presents a plurality of 3D UI objects within the second 3D UI as an integrated 3D UI environment, wherein the plurality of 3D UI objects may be presented as objects consistent with the integrated 3D UI environment.
  • Various embodiments provide a 3D UI renderer system that renders 2D UIs (or UI objects or controls, or the like) into 3D UIs (or UI objects or controls, or the like).
  • 3D UI objects or controls may be dynamically rendered rather than left as static 2D UI objects or controls as in conventional launchers.
  • Various embodiments as described herein - while embodying (in some cases) software products, computer-performed methods, and/or computer systems - represent tangible, concrete improvements to existing technological areas, including, without limitation, UI object (or control) rendering technology, application UI object (or control) rendering technology, UI renderer technology, 3D UI renderer technology, and/or the like.
  • some embodiments can improve the functioning of user equipment or systems themselves (e.g., UI object (or control) rendering systems, application UI object (or control) rendering systems, UI renderer systems, 3D UI renderer systems, etc.), for example, by receiving, using a computing system, user interface ("UI") layout data of a first UI from a UI framework; generating, using the computing system, a first three-dimensional ("3D") UI, by parsing two-dimensional ("2D") layout of the first UI, based at least in part on the UI layout data; rendering, using the computing system, the generated first 3D UI; and causing, using the computing system, the rendered first 3D UI to be displayed within a display screen of a user device associated with a user; and/or the like.
  • These improvements include, for example, a 3D UI renderer that renders 2D UI objects (or controls) as 3D UI objects (or controls), where UI objects (or controls) may be dynamically rendered in 3D rather than left as static 2D UI objects (or controls) as in conventional systems, at least some of which improvements may be observed or measured by users, game/content developers, and/or user device manufacturers.
  • Figs. 1-6 illustrate some of the features of the methods, systems, and apparatuses for implementing image rendering and, more particularly, for implementing three-dimensional ("3D") and animation support for user interface ("UI") controls, as referred to above.
  • the methods, systems, and apparatuses illustrated by Figs. 1-6 refer to examples of different embodiments that include various components and steps, which can be considered alternatives or which can be used in conjunction with one another in the various embodiments.
  • the description of the illustrated methods, systems, and apparatuses shown in Figs. 1-6 is provided for purposes of illustration and should not be considered to limit the scope of the different embodiments.
  • Fig. 1 is a schematic diagram illustrating a system 100 for implementing 3D and animation support for UI controls, in accordance with various embodiments.
  • system 100 may comprise computing system 105, which may include, but is not limited to, a three-dimensional ("3D") user interface ("UI") renderer 110 and one or more central processing units and/or graphics processing units ("CPUs/GPUs") 130a-130n (collectively, "CPUs/GPUs 130" or the like).
  • the 3D UI renderer 110 may include, without limitation, a 3D scene manager 115, an animation manager 120, and/or a rendering engine 125.
  • the computing system 105 either may be an integrated computing system 105a as part of a user device 135 or may be a remote computing system 105b (in some cases, as part of a network(s) 180, or the like), and/or the like.
  • the computing system 105 may include, but is not limited to, at least one of a 3D UI renderer (e.g., 3D UI renderer 110, or the like), a machine learning system, an artificial intelligence ("AI") system, a deep learning system, a neural network, a processor on the user device (e.g., user device 135, or the like), one or more graphics processing units ("GPUs"), a server computer over a network, a cloud computing system, or a distributed computing system, and/or the like.
  • the user device 135 may include, without limitation, at least one of computing system 105a, user input device 140a, system composer 145a, system UI framework 150a, data storage 155a, communications system 160, display screen 165a, or audio playback device 170, and/or the like.
  • system 100 may further comprise database(s) 155b in network(s) 180 that is communicatively coupled to computing system 105b.
  • System 100 may further comprise network-based system composer 145b and system UI framework 150b (and corresponding database(s) 175) in network(s) 180.
  • System 100 may further comprise at least one of user input devices 140b, display devices 165b, and/or other user devices 185, and/or the like.
  • Each of user device 135, user input devices 140b, display devices 165b, and/or other user devices 185, or the like, may communicatively couple to at least one of computing system 105b, network-based system composer 145b, system UI framework 150b, and/or each other via network(s) 180 and via wired communications lines and/or via wireless communications lines (as depicted in Fig. 1 by lightning bolt symbols).
  • networks 180 may each include, without limitation, one of a local area network (“LAN”), including, without limitation, a fiber network, an Ethernet network, a Token-RingTM network, and/or the like; a wide-area network (“WAN”); a wireless wide area network (“WWAN”); a virtual network, such as a virtual private network (“VPN”); the Internet; an intranet; an extranet; a public switched telephone network (“PSTN”); an infra-red network; a wireless network, including, without limitation, a network operating under any of the IEEE 802.11 suite of protocols, the BluetoothTM protocol known in the art, and/or any other wireless protocol; and/or any combination of these and/or other networks.
  • the network(s) 180 may include an access network of the service provider (e.g., an Internet service provider (“ISP”)).
  • the network(s) 180 may include a core network of the service provider and/or the Internet.
  • computing system 105, 105a, or 105b may receive UI layout data (e.g., UI layout data 190, or the like) of a first UI from a UI framework (e.g., system UI framework 150a or 150b, or the like).
  • the computing system may generate a first 3D UI, in some cases, by parsing two-dimensional ("2D") layout of the first UI, based at least in part on the UI layout data.
  • the computing system may render the generated first 3D UI, and may cause the rendered first 3D UI to be displayed within a display screen (e.g., display screen 165a or display devices 165b, or the like) of a user device (e.g., user device 135 or other user devices 185, or the like) associated with a user.
  • a 3D scene manager (e.g., 3D scene manager 115, or the like) of the computing system may receive the UI layout data (e.g., UI layout data 190, or the like) and may generate the first 3D UI, while a 3D rendering engine (e.g., 3D rendering engine 125, or the like) may render the generated first 3D UI (e.g., rendered image(s) 195, or the like).
  • a user input processor of the 3D scene manager (e.g., 3D scene manager 115, or the like) of the computing system may receive one or more user inputs.
  • the one or more user inputs may include, but are not limited to, at least one of one or more selection inputs, one or more translation inputs, one or more rotation inputs, one or more tilt inputs, one or more zoom inputs (e.g., pinching inputs, i.e., pinching in to zoom in and pinching out to zoom out, or the like), one or more swipe inputs, one or more dragging inputs, one or more gesture inputs, one or more tracing inputs, one or more camera translation inputs, one or more rotation inputs, one or more accelerometer inputs, or one or more gyroscope inputs, and/or the like.
  • generating the first 3D UI may comprise the computing system generating a first 3D UI, by parsing 2D layout of the first UI, based at least in part on the UI layout data and based at least in part on the one or more user inputs.
  • At least one of a 3D scene geometry manager, a material manager, a camera manager, or a light manager, and/or the like, of a 3D scene manager of the computing system may determine corresponding at least one of UI geometry, material, camera angle, or lighting of one or more UI objects within the first UI, based at least in part on the UI layout data.
  • generating the first 3D UI may comprise the computing system generating a first 3D UI, by parsing 2D layout of the first UI, based at least in part on the UI layout data and based at least in part on the determined at least one of UI geometry, material, camera angle, or lighting of the one or more UI objects within the first UI.
  • rendering the generated first 3D UI may comprise the computing system rendering the generated first 3D UI, by using the determined at least one of UI geometry, material, camera angle, or lighting of the one or more UI objects within the first UI to construct at least one UI-specific 3D render pipeline and by using the constructed at least one UI-specific 3D render pipeline to render the generated first 3D UI.
  • a hardware input processor of a 3D scene manager of the computing system may receive one or more hardware inputs.
  • the one or more hardware inputs may include, without limitation, at least one of one or more accelerometer inputs, one or more gyroscope inputs, one or more light sensor inputs, or one or more hardware-based user inputs, and/or the like.
  • the one or more hardware-based user inputs may include, but are not limited to, at least one of one or more finger gesture inputs, one or more facial image capture inputs, one or more eye image capture inputs, or one or more stylus inputs, and/or the like.
  • At least one of a camera manager, a light manager, or a 3D scene geometry manager of the 3D scene manager of the computing system may update corresponding at least one of camera angle, lighting, or scene geometry of one or more UI objects within the first UI, based at least in part on corresponding at least one of a combination of the one or more accelerometer inputs or the one or more gyroscope inputs, the one or more light sensor inputs, or the one or more hardware-based user inputs.
  • generating the first 3D UI may comprise an animation manager of the computing system generating a first 3D UI, by using at least one of predefined 3D animation control or physics-based 3D animation control to animate 3D UI geometries in the first UI.
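  • For illustration, and purely as an assumption about how such controls could be realized, a predefined control might interpolate keyframes while a physics-based control might integrate a damped spring toward a target value:

```python
# Assumed illustration: a predefined (keyframe) control vs. a physics-based
# (damped spring) control, each animating one 3D UI geometry parameter (z).
def predefined_animation(t: float, keyframes=((0.0, 0.0), (1.0, 5.0))) -> float:
    """Linear interpolation between two predefined (time, z) keyframes."""
    (t0, z0), (t1, z1) = keyframes
    u = min(max((t - t0) / (t1 - t0), 0.0), 1.0)
    return z0 + u * (z1 - z0)

def physics_animation(z: float, v: float, target: float, dt: float,
                      stiffness: float = 40.0, damping: float = 8.0):
    """One explicit-Euler step of a damped spring pulling z toward the target."""
    a = stiffness * (target - z) - damping * v
    v = v + a * dt
    z = z + v * dt
    return z, v

z, v = 0.0, 0.0
for _ in range(60):                               # roughly one second at 60 frames per second
    z, v = physics_animation(z, v, target=5.0, dt=1.0 / 60.0)
print(round(predefined_animation(0.5), 2), round(z, 2))
```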
  • At least one of a dynamic geometry sorting system or a dynamic geometry subdivision system of a 3D rendering engine of the computing system may perform corresponding at least one of dynamic sorting or dynamic subdivision of the at least one first UI object and the at least one second UI object to render correct perspectives of the at least one first UI object and the at least one second UI object.
  • performing the corresponding at least one of dynamic sorting or dynamic subdivision of the at least one first UI object and the at least one second UI object may comprise: gathering, using the 3D rendering engine, UI geometries of UI objects among the at least one first UI object and the at least one second UI object; computing, using the 3D rendering engine, a bounding box for each UI object among the UI objects; sorting, using the dynamic geometry sorting system, the UI objects based on the computed bounding box for each UI object; and based on a determination that two or more bounding boxes overlap, subdividing, using the dynamic geometry subdivision system, UI geometries of overlapping UI objects corresponding to the two or more bounding boxes, and recomputing, using the 3D rendering engine, bounding boxes for each of the overlapping UI objects; and/or the like.
  • rendering the generated first 3D UI may comprise the computing system rendering the generated first 3D UI, based at least in part on one of the computed bounding box or the recomputed bounding box of each UI object.
  • the first 3D UI may comprise a plurality of UI objects.
  • two or more UI objects among the plurality of UI objects may have different z-axes or z-planes.
  • generating and rendering the first 3D UI may be based on at least one of orthographic projections or perspective projections of the two or more UI objects about the different z-axes or z-planes.
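  • As a simplified illustration (assuming a focal length of 1 and ignoring viewport scaling), the difference between the two projection types for UI vertices sitting on different z-planes can be sketched as:

```python
# Simplified sketch (assumed focal length f = 1, no viewport scaling): the same UI
# vertex on different z-planes under orthographic vs. perspective projection.
def orthographic(x: float, y: float, z: float):
    return (x, y)                    # depth ignored: the object keeps its 2D size

def perspective(x: float, y: float, z: float, f: float = 1.0):
    return (f * x / z, f * y / z)    # objects on farther z-planes appear smaller

for z_plane in (1.0, 2.0, 4.0):
    print(z_plane, orthographic(1.0, 1.0, z_plane), perspective(1.0, 1.0, z_plane))
```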
  • the first 3D UI may comprise a second 3D UI that presents a plurality of 3D UI objects within the second 3D UI as an integrated 3D UI environment.
  • the plurality of 3D UI objects may be presented as objects consistent with the integrated 3D UI environment.
  • FIG. 2 is a schematic block flow diagram illustrating a non-limiting example 200 of a method for implementing 3D and animation support for UI controls, in accordance with various embodiments.
  • The 3D UI renderer 110 may include, but is not limited to, 3D scene manager 115 (similar to 3D scene manager 115 of Fig. 1, or the like), animation manager 120 (similar to animation manager 120 of Fig. 1, or the like), and rendering engine 125 (similar to rendering engine 125 of Fig. 1, or the like).
  • the 3D scene manager 115 may include, without limitation, at least one of user input processor 205, 3D scene geometry manager 210, material manager 215, camera manager 220, or light manager 225, and/or the like.
  • the animation manager 120 may include, but is not limited to, at least one of predefined 3D animation control 230 or physics-based 3D animation control 235, and/or the like.
  • the rendering engine 125 may include, without limitation, at least one of dynamic geometry sorting system 240, dynamic geometry subdivision system 245, or render pipeline constructor 250, and/or the like.
  • the user input processor 205 may receive and process user inputs from user device 135/185 (similar to user devices 135 and/or 185 of Fig. 1, or the like).
  • the one or more user inputs may include, but are not limited to, at least one of one or more selection inputs, one or more translation inputs, one or more rotation inputs, one or more tilt inputs, one or more zoom inputs (e.g., pinching inputs, i.e., pinching in to zoom in and pinching out to zoom out, or the like), one or more swipe inputs, one or more dragging inputs, one or more gesture inputs, one or more tracing inputs, one or more camera translation inputs, one or more rotation inputs, one or more accelerometer inputs, or one or more gyroscope inputs, and/or the like.
  • the 3D scene geometry manager 210 may convert a given UI control hierarchy from the system UI framework 150 into a corresponding 3D geometry representation based on the 3D properties defined in the UI. The 3D scene manager 115 may also manage the material manager 215, the camera manager 220, and/or the light manager 225. The material manager 215 may control rendering of material textures on models of each 3D object within the UI or of the 3D UI itself. The camera manager 220 may control rendering of changes to each 3D object within the UI or the 3D UI itself based on camera perspective changes (e.g., based on camera pan, tilt, and/or zoom functions, or the like).
  • the light manager 225 may control rendering of each 3D object or the 3D UI itself based on lighting control (e.g., based on angle, color/tint, and/or focused/diffuse aspects of the light, or the like).
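  • One hypothetical way such a conversion could proceed (the depth-per-nesting-level rule and the names below are assumptions, not taken from the disclosure) is to walk the UI control hierarchy and assign each control a z value from its declared 3D properties or its nesting depth:

```python
# Hypothetical sketch of converting a UI control hierarchy into a 3D geometry
# representation; the depth-per-nesting-level rule and names are assumptions.
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class UIControl:
    name: str
    children: List["UIControl"] = field(default_factory=list)
    depth_3d: float = 0.0            # optional 3D property defined on the control

def to_3d_geometry(control: UIControl, level: int = 0,
                   layer_gap: float = 0.1) -> List[Tuple[str, float]]:
    """Flatten the control hierarchy into (name, z) geometry entries."""
    z = control.depth_3d if control.depth_3d else level * layer_gap
    geometry = [(control.name, z)]
    for child in control.children:
        geometry.extend(to_3d_geometry(child, level + 1, layer_gap))
    return geometry

root = UIControl("window", [UIControl("toolbar", [UIControl("button", depth_3d=0.5)])])
print(to_3d_geometry(root))          # [('window', 0.0), ('toolbar', 0.1), ('button', 0.5)]
```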
  • the 3D system UI framework 150 may, in some cases, be a stand-alone rendering component of UI framework, and may maintain separate settings for each application, and in some cases may be switched off from the system or the application settings.
  • the animation manager 120 may receive at least one of 3D geometry information, materials information, camera information, or light information, and/or the like, from the 3D scene manager 115.
  • Predefined 3D animation control 230 and physics-based 3D animation control 235 may generate at least one of animated 3D geometry information, animated materials information, animated camera information, or animated light information, and/or the like, based on at least one of 3D geometry information, materials information, camera information, or light information, and/or the like.
  • the rendering engine 125 may receive the generated at least one of animated 3D geometry information, animated materials information, animated camera information, or animated light information, and/or the like, from the animation manager 120.
  • The dynamic geometry sorting system 240 and the dynamic geometry subdivision system 245 may perform dynamic sorting and subdivision, respectively, to ensure or maintain render correctness in the presence of any transparency among the UI objects or controls.
  • the rendering engine 125 may use at least one of the animated 3D geometry information (including, e.g., transformation of the UI geometry, or the like), the animated material information, the animated camera information, or the animated light information, and/or the like, to construct a UI-specific 3D render pipeline (using render pipeline constructor 250, or the like) to render the final frames of the 3D UI, the 3D objects within the UI, and/or the 3D UI itself.
  • System composer 145 may send current back buffer data to the rendering engine 125, and may receive rendered back buffer data from the rendering engine 125 based on the rendering of each of at least one of the final frames of the 3D UI, the 3D objects within the UI, and/or the 3D UI itself.
  • The system composer 145 may send display buffer data, based on the rendered back buffer data (similar to rendered image(s) 195 of Fig. 1, or the like), to the user device 135/185, causing the user device 135/185 to display the rendered at least one of the final frames of the 3D UI, the 3D objects within the UI, and/or the 3D UI itself within a display screen of the user device 135/185.
  • 3D UI renderer 110, 3D scene manager 115, animation manager 120, rendering engine 125, user device 135/185, system composer 145, system UI framework 150, UI control hierarchy, and rendered back buffer data of system 200 in Fig. 2 may be similar to 3D UI renderer 110, 3D scene manager 115, animation manager 120, rendering engine 125, user device 135 and/or 185, system composer 145a and/or 145b, system UI framework 150a and/or 150b, UI layout data 190, and rendered image(s) 195, respectively, of system 100 in Fig. 1, and the descriptions of these components of system 100 (and their functions) are applicable to the corresponding components of system 200, respectively.
  • FIGs. 3A-3G are schematic diagrams illustrating various non-limiting examples 300, 300', 300", 300"', and 300"" of 3D transformations and/or 3D animations that may be utilized for rendering 3D UI during implementation of 3D and animation support for UI controls, in accordance with various embodiments.
  • a 2D object 305a (in this case, a 2D image of a globe, or the like) is shown.
  • 2D transformation may be performed on the 2D object 305a.
  • 2D transformation may include (but is not limited to) rotation about the z-axis of the globe to produce a transformed 2D object 305b.
  • 3D transformation may also be performed on the 2D object 305a or 305b, in this case, by generating a 3D object 305c based on the 2D object 305a or 305b, where the 3D object 305c is transformed (in this case, along the direction denoted by the curved arrow).
  • Whereas transformation of 2D UI objects relies on 3x3 matrices, transformation of 3D UI objects relies on 4x4 matrices (e.g., matrix 305d, or the like) to support affine and projective transformation in 3D.
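  • For reference, standard homogeneous forms of such matrices (illustrative only; not reproduced from matrix 305d in the figures) are a 3x3 matrix for a 2D rotation-plus-translation in the plane and a 4x4 matrix for the corresponding 3D rotation (about the y-axis) with translation:

```latex
% Standard homogeneous transformation forms (illustrative; not reproduced from
% matrix 305d): 3x3 for 2D rotation-plus-translation, 4x4 for the 3D counterpart.
\[
M_{2D} =
\begin{pmatrix}
\cos\theta & -\sin\theta & t_x \\
\sin\theta &  \cos\theta & t_y \\
0 & 0 & 1
\end{pmatrix},
\qquad
M_{3D} =
\begin{pmatrix}
\cos\theta & 0 & \sin\theta & t_x \\
0 & 1 & 0 & t_y \\
-\sin\theta & 0 & \cos\theta & t_z \\
0 & 0 & 0 & 1
\end{pmatrix}.
\]
```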
  • a 2D map UI control 305a is converted into a 3D globe control 305c that allows for intuitive 3D rotation.
  • a third dimension may be introduced to convert a 2D UI control into a 3D UI control (as depicted by objects 310a, or the like).
  • the depth values of the vertices of UI geometry do not need to be the same. That is, the UI objects need not always be on the same z-plane as in the 2D UI case.
  • 3D objects 310b, which include 3D buttons and 3D stock graph objects, have different z-axes (in this case, a z-axis for the 3D buttons and a separate z-axis for the 3D stock graph objects, or the like).
  • the camera could support orthographic and/or perspective projections with any pose.
  • Dynamic sorting and subdivision algorithms may be added to maintain render correctness due to transparency among the UI controls or objects 310b.
  • 3D transformation of the UI objects may rely on 4x4 matrices (e.g., matrix 310c, or the like) to support affine and projective transformation in 3D.
  • a first (3D) object 320a may be partially blocked behind a second (3D) object 320b relative to a field of view ("FOV") 325 of a camera or a user's eye(s).
  • the first (3D) object 320a is shown being partially blocked by the second (3D) object 320b.
  • a fourth (3D) object 335b may be partially blocked behind a third (3D) object 335a relative to a FOV 340 of a camera or a user's eye(s).
  • the fourth (3D) object 335b is shown being partially blocked by the third (3D) object 335a.
  • a first panel 345a depicts a first set of objects (in this case, objects in the form of mountains, or the like) being placed one in front of the other, while a second panel 345b depicts a second set of objects (in this case, objects in the form of topographically changing ground surfaces, or the like), and a third panel 345c depicts a third set of objects (in this case, objects in the form of various trees on the topographically changing ground surfaces, or the like).
  • a zoomed in view 350 of the objects in the third panel 345c is shown with a triangular overlay 355a, which is further subdivided into modified triangular overlay 355b.
  • In the modified triangular overlay 355b, the subdivided triangles separate the various different objects or sets of objects within the panel 345c.
  • dynamic sorting and subdivision may be performed to ensure or maintain render correctness, regardless of transparency of objects, while also providing relative positions when performing 3D transformation, or the like.
  • a coherent control experience driven by the 3D UI may be implemented.
  • the system could display all applications within one UI environment, instead of the conventional way of keeping each app in a silo separate from other silos containing other apps.
  • the applications controlling one or more Internet of Things ("IoT") devices may be displayed as one single 3D UI environment (such as the single 3D UI environment 365 of an example residence hall IoT environment 3D UI as displayed within the display screen of user device 360, or the like, as depicted in Fig. 3F).
  • the user can oversee the status of each device in the overview display or can navigate to each device's application to perform detailed control, or the like.
  • As shown in Fig. 3G, the intuitive nature of the 3D UIs allows the user to navigate to the 3D object that represents an application, and to enable interaction between the user and the object in 3D (such as the navigated view 370 of the 3D object representing the application among the applications from the single 3D UI environment 365 of Fig. 3F, or the like).
  • Figs. 4A-4E are flow diagrams illustrating a method 400 for implementing 3D and animation support for UI controls, in accordance with various embodiments.
  • Method 400 of Fig. 4A continues onto Fig. 4B following the circular marker denoted, "A," and returns to Fig. 4A following the circular marker denoted, "B."
  • While the techniques and procedures are depicted and/or described in a certain order for purposes of illustration, it should be appreciated that certain procedures may be reordered and/or omitted within the scope of various embodiments.
  • the systems, examples, or embodiments 100, 200, 300, 300', 300", 300"', and 300"" of Figs. 1, 2, 3A, 3B, 3C-3D, 3E, and 3F-3G can each also operate according to other modes of operation and/or perform other suitable procedures.
  • method 400 may comprise receiving, using a computing system, user interface ("UI") layout data of a first UI from a UI framework.
  • Method 400 may further comprise at least one of: receiving, using a user input processor of a 3D scene manager of the computing system, one or more user inputs (block 404); determining, using at least one of a 3D scene geometry manager, a material manager, a camera manager, or a light manager of a 3D scene manager of the computing system, corresponding at least one of UI geometry, material, camera angle, or lighting of one or more UI objects within the first UI, based at least in part on the UI layout data (block 406); or receiving, using a hardware input processor of a 3D scene manager of the computing system, one or more hardware inputs (block 408), and updating, using at least one of a camera manager, a light manager, or a 3D scene geometry manager of the 3D scene manager of the computing system, corresponding at least one of camera angle, lighting, or scene geometry of one or more UI objects within the first UI, based at least in part on the one or more hardware inputs.
  • the one or more user inputs may include, without limitation, at least one of one or more selection inputs, one or more translation inputs, one or more rotation inputs, one or more tilt inputs, one or more zoom inputs, one or more swipe inputs, one or more dragging inputs, one or more gesture inputs, one or more tracing inputs, one or more camera translation inputs, one or more rotation inputs, one or more accelerometer inputs, or one or more gyroscope inputs, and/or the like.
  • the one or more hardware inputs may include, but are not limited to, at least one of one or more accelerometer inputs, one or more gyroscope inputs, one or more light sensor inputs, or one or more hardware-based user inputs, wherein the one or more hardware-based user inputs comprise at least one of one or more finger gesture inputs, one or more facial image capture inputs, one or more eye image capture inputs, or one or more stylus inputs, and/or the like.
  • method 400 may comprise generating, using the computing system, a first three-dimensional ("3D") UI.
  • generating the first 3D UI may comprise at least one of: generating, using the computing system, a first three-dimensional ("3D") UI, by parsing two-dimensional ("2D") layout of the first UI, based at least in part on the UI layout data; generating, using the computing system, a first 3D UI, by parsing 2D layout of the first UI, based at least in part on the UI layout data and based at least in part on the one or more user inputs; generating, using the computing system, a first 3D UI, by parsing 2D layout of the first UI, based at least in part on the UI layout data and based at least in part on the determined at least one of UI geometry, material, camera angle, or lighting of the one or more UI objects within the first UI; or generating, using an animation manager of the computing system, a first 3D UI, by using at least one of predefined 3D animation control or physics-based 3D animation control to animate 3D UI geometries in the first UI.
  • Method 400 may comprise rendering, using the computing system, the generated first 3D UI.
  • rendering the generated first 3D UI may comprise rendering, using the computing system, the generated first 3D UI, by using the determined at least one of UI geometry, material, camera angle, or lighting of the one or more UI objects within the first UI to construct at least one UI-specific 3D render pipeline and by using the constructed at least one UI-specific 3D render pipeline to render the generated first 3D UI.
  • Method 400 may further comprise, at block 416, causing, using the computing system, the rendered first 3D UI to be displayed within a display screen of a user device associated with a user.
  • the computing system may include, without limitation, at least one of a 3D UI renderer, a machine learning system, an artificial intelligence ("AI") system, a deep learning system, a neural network, a processor on the user device, one or more graphics processing units ("GPUs"), a server computer over a network, a cloud computing system, or a distributed computing system, and/or the like.
  • the computing system may include, but is not limited to, a 3D scene manager and a 3D rendering engine, and/or the like.
  • the 3D scene manager may receive the UI layout data and may generate the first 3D UI.
  • the 3D rendering engine may render the generated first 3D UI.
  • Method 400 may continue onto the process at block 418 in Fig. 4B following the circular marker denoted, "A.”
  • method 400 may comprise, based on a determination that at least one first UI object within the generated first 3D UI contains transparent portions that are intended to be positioned in front of at least one second UI object, performing, using at least one of a dynamic geometry sorting system or a dynamic geometry subdivision system of a 3D rendering engine of the computing system, corresponding at least one of dynamic sorting or dynamic subdivision of the at least one first UI object and the at least one second UI object to render correct perspectives of the at least one first UI object and the at least one second UI object.
  • performing the corresponding at least one of dynamic sorting or dynamic subdivision of the at least one first UI object and the at least one second UI object may comprise: gathering, using the 3D rendering engine, UI geometries of UI objects among the at least one first UI object and the at least one second UI object (block 420); computing, using the 3D rendering engine, a bounding box for each UI object among the UI objects (block 422); sorting, using the dynamic geometry sorting system, the UI objects based on the computed bounding box for each UI object (block 424); and based on a determination that two or more bounding boxes overlap, subdividing, using the dynamic geometry subdivision system, UI geometries of overlapping UI objects corresponding to the two or more bounding boxes, and recomputing, using the 3D rendering engine, bounding boxes for each of the overlapping UI objects (block 426).
  • Method 400 may return onto the process at block 414 in Fig. 4A following the circular marker denoted, "B."
  • rendering the generated first 3D UI may comprise rendering, using the computing system, the generated first 3D UI, based at least in part on one of the computed bounding box or the recomputed bounding box of each UI object.
  • the first 3D UI may comprise a plurality of UI objects.
  • two or more UI objects among the plurality of UI objects may have different z-axes or z-planes, and generating and rendering the first 3D UI may be based on at least one of orthographic projections or perspective projections of the two or more UI objects about the different z-axes or z-planes.
  • the first 3D UI may comprise a second 3D UI that presents a plurality of 3D UI objects within the second 3D UI as an integrated 3D UI environment.
  • the plurality of 3D UI objects may be presented as objects consistent with the integrated 3D UI environment.
  • method 400 may comprise: starting a 3D Ul-based application (block 428); loading a UI layout file (block 430); and determining whether 3D UI is enabled (block 432). If so, method 400 may continue onto the process at block 434. If not, method 400 may continue onto the process at block 448.
  • method 400 may further comprise: parsing 2D layout data and constructing a 3D UI (block 434); constructing render pipeline (block 436); fetching current buffer data (block 438); updating scene geometry and camera/lighting data based on hardware inputs (block 440); rendering the 3D UI (block 442); and checking whether the app is running (or still running) (block 444). If so, method 400 may return to the process at block 438 in a render loop. If not, method 400 may proceed to the process at block 446, at which the app finishes running.
  • method 400 may further comprise: setting up a conventional 2D UI (block 448); performing conventional 2D render (block 450); and checking whether the app is running (or still running) (block 452). If so, method 400 may return to the process at block 450 in a render loop. If not, method 400 may proceed to the process at block 446, at which the app finishes running.
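  • A condensed, self-contained sketch of this 3D-UI-enabled vs. conventional 2D flow is shown below; every class and method name is an assumption introduced for illustration, with comments mapping each step to the blocks described above:

```python
# Condensed sketch of the 3D-enabled vs. conventional 2D path; all names here
# (DemoApp, run_ui, etc.) are assumptions, not the disclosed implementation.
class DemoApp:
    def __init__(self, enable_3d: bool, total_frames: int = 3):
        self.enable_3d = enable_3d
        self.frames_left = total_frames

    def load_layout(self, path: str) -> dict:     # load the UI layout file (block 430)
        return {"controls": ["button", "list"], "3d_enabled": self.enable_3d}

    def is_running(self) -> bool:                 # is the app (still) running? (blocks 444 / 452)
        self.frames_left -= 1
        return self.frames_left >= 0

def run_ui(app: DemoApp, layout_path: str) -> None:
    layout = app.load_layout(layout_path)
    if layout["3d_enabled"]:                                         # 3D UI enabled? (block 432)
        scene = [(name, 0.0) for name in layout["controls"]]         # parse 2D layout, construct 3D UI (block 434)
        pipeline = ["geometry", "camera", "light"]                   # construct render pipeline (block 436)
        while app.is_running():                                      # 3D render loop
            back_buffer = []                                         # fetch current buffer data (block 438)
            scene = [(name, z + 0.01) for name, z in scene]          # update scene/camera/lighting from inputs (block 440)
            back_buffer.append(f"3D frame via {pipeline}: {scene}")  # render the 3D UI (block 442)
            print(back_buffer[-1])
    else:
        while app.is_running():                                      # conventional 2D setup and render (blocks 448-450)
            print(f"2D frame: {layout['controls']}")

run_ui(DemoApp(enable_3d=True), "main_ui.layout")
```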
  • method 400 may perform updating the 3D UI, and may comprise: starting updating of a 3D UI scene (block 454); receiving hardware inputs (block 456); passing queued inputs across frames (block 458); and fetching the hardware inputs (block 460).
  • Method 400 may further comprise: updating scene geometry data (block 462), in some cases, based on at least one of gesture input data (e.g., finger gesture data, hand gesture data, or the like), facial image capture input data, eye image capture input data, or stylus input data, and/or the like; updating camera data (block 464), in some cases, based on at least one of gyroscope input data or accelerometer input data, and/or the like; and/or updating lighting data (block 466), in some cases, based on ambient light sensor input data, or the like.
  • Method 400 may further comprise performing UI animations (block 468), based on at least one of the updated scene geometry data (from block 462), the updated camera data (from block 464), and/or the updated lighting data (from block 466). Method 400 may further comprise finishing updating of the 3D UI scene (block 470).
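  • The scene-update portion of this flow might be sketched as follows, with queued hardware inputs driving the updates of blocks 462-466 before the UI animation step of block 468 runs; the field and function names are assumptions for illustration:

```python
# Minimal sketch of one 3D UI scene update: queued hardware inputs drive the
# geometry, camera, and lighting updates before a simple UI animation step;
# all field and function names are assumptions.
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class SceneState:
    geometry_offset: float = 0.0   # driven by gesture / stylus / image-capture inputs
    camera_tilt: float = 0.0       # driven by gyroscope / accelerometer inputs
    light_level: float = 1.0       # driven by the ambient light sensor

input_queue: List[Dict[str, float]] = []          # inputs queued across frames (block 458)

def update_scene(state: SceneState) -> SceneState:
    while input_queue:                            # fetch the queued hardware inputs (block 460)
        event = input_queue.pop(0)
        state.geometry_offset += event.get("gesture_dx", 0.0)   # update scene geometry data (block 462)
        state.camera_tilt += event.get("gyro_dtheta", 0.0)      # update camera data (block 464)
        if "ambient_lux" in event:                              # update lighting data (block 466)
            state.light_level = event["ambient_lux"] / 100.0
    state.geometry_offset *= 0.95                 # UI animation: ease geometry toward rest (block 468)
    return state

input_queue.extend([{"gesture_dx": 2.0}, {"gyro_dtheta": 0.1, "ambient_lux": 50.0}])
print(update_scene(SceneState()))
```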
  • method 400 may perform dynamic sorting and subdivision, and may comprise: starting sorting (block 472); gathering UI geometries (block 474); computing bounding box of each UI (block 476); sorting remaining UIs based on bounding boxes (block 478); and checking if overlapping bounding boxes exist (block 480). If so, method 400 may continue to the process at block 482. If not, method 400 may continue to the process at block 484. At block 482, method 400 may comprise subdividing geometry and recomputing the bounding box. Method 400 may return to the process at block 478. At block 484, method 400 may comprise outputting the sorted UI queue. Method 400 may further comprise finishing sorting (block 486).
  • Fig. 5 is a block diagram illustrating an example of computer or system hardware architecture, in accordance with various embodiments.
  • Fig. 5 provides a schematic illustration of one embodiment of a computer system 500 of the service provider system hardware that can perform the methods provided by various other embodiments, as described herein, and/or can perform the functions of the computer or hardware system (i.e., computing systems 105, 105a, and 105b, three-dimensional ("3D") user interface ("UI") renderer 110, central processing units and/or graphics processing units ("CPUs/GPUs") 130a-130n, user devices 135 and 185, user input devices 140a and 140b, system composers 145, 145a, and 145b, system UI framework 150, 150a, and 150b, etc.), as described above.
  • Fig. 5 is meant only to provide a generalized illustration of various components, of which one or more (or none) of each may be utilized as appropriate. Fig. 5, therefore, broadly illustrates how individual system elements may be implemented in a relatively separated or relatively more integrated manner.
  • the computer or hardware system 500 - which might represent an embodiment of the computer or hardware system (i.e., computing systems 105, 105a, and 105b, 3D UI renderer 110, CPUs/GPUs 130a-130n, user devices 135 and 185, user input devices 140a and 140b, system composers 145, 145a, and 145b, system UI framework 150, 150a, and 150b, etc.), described above with respect to Figs. 1-4 - is shown comprising hardware elements that can be electrically coupled via a bus 505 (or may otherwise be in communication, as appropriate).
  • the hardware elements may include one or more processors 510, including, without limitation, one or more general-purpose processors and/or one or more special-purpose processors (such as microprocessors, digital signal processing chips, graphics acceleration processors, and/or the like); one or more input devices 515, which can include, without limitation, a mouse, a keyboard, and/or the like; and one or more output devices 520, which can include, without limitation, a display device, a printer, and/or the like.
  • the computer or hardware system 500 may further include (and/or be in communication with) one or more storage devices 525, which can comprise, without limitation, local and/or network accessible storage, and/or can include, without limitation, a disk drive, a drive array, an optical storage device, or a solid-state storage device such as a random access memory ("RAM") and/or a read-only memory ("ROM"), which can be programmable, flash-updateable, and/or the like.
  • Such storage devices may be configured to implement any appropriate data stores, including, without limitation, various file systems, database structures, and/or the like.
  • the computer or hardware system 500 might also include a communications subsystem 530, which can include, without limitation, a modem, a network card (wireless or wired), an infra-red communication device, a wireless communication device and/or chipset (such as a BluetoothTM device, an 802.11 device, a WiFi device, a WiMax device, a WWAN device, cellular communication facilities, etc.), and/or the like.
  • the communications subsystem 530 may permit data to be exchanged with a network (such as the network described below, to name one example), with other computer or hardware systems, and/or with any other devices described herein.
  • the computer or hardware system 500 will further comprise a working memory 535, which can include a RAM or ROM device, as described above.
  • the computer or hardware system 500 also may comprise software elements, shown as being currently located within the working memory 535, including an operating system 540, device drivers, executable libraries, and/or other code, such as one or more application programs 545, which may comprise computer programs provided by various embodiments (including, without limitation, hypervisors, VMs, and the like), and/or may be designed to implement methods, and/or configure systems, provided by other embodiments, as described herein.
  • one or more procedures described with respect to the method(s) discussed above might be implemented as code and/or instructions executable by a computer (and/or a processor within a computer); in an aspect, then, such code and/or instructions can be used to configure and/or adapt a general purpose computer (or other device) to perform one or more operations in accordance with the described methods.
  • a set of these instructions and/or code might be encoded and/or stored on a non- transitory computer readable storage medium, such as the storage device(s) 525 described above.
  • the storage medium might be incorporated within a computer system, such as the system 500.
  • the storage medium might be separate from a computer system (i.e., a removable medium, such as a compact disc, etc.), and/or provided in an installation package, such that the storage medium can be used to program, configure, and/or adapt a general purpose computer with the instructions/code stored thereon.
  • These instructions might take the form of executable code, which is executable by the computer or hardware system 500 and/or might take the form of source and/or installable code, which, upon compilation and/or installation on the computer or hardware system 500 (e.g., using any of a variety of generally available compilers, installation programs, compression/decompression utilities, etc.) then takes the form of executable code.
  • some embodiments may employ a computer or hardware system (such as the computer or hardware system 500) to perform methods in accordance with various embodiments of the invention.
  • some or all of the procedures of such methods are performed by the computer or hardware system 500 in response to processor 510 executing one or more sequences of one or more instructions (which might be incorporated into the operating system 540 and/or other code, such as an application program 545) contained in the working memory 535.
  • Such instructions may be read into the working memory 535 from another computer readable medium, such as one or more of the storage device(s) 525.
  • execution of the sequences of instructions contained in the working memory 535 might cause the processor(s) 510 to perform one or more procedures of the methods described herein.
  • the terms "machine readable medium" and "computer readable medium," as used herein, refer to any medium that participates in providing data that causes a machine to operate in some fashion.
  • various computer readable media might be involved in providing instructions/code to processor(s) 510 for execution and/or might be used to store and/or carry such instructions/code (e.g., as signals).
  • a computer readable medium is a non-transitory, physical, and/or tangible storage medium.
  • a computer readable medium may take many forms, including, but not limited to, non-volatile media, volatile media, or the like.
  • Non-volatile media includes, for example, optical and/or magnetic disks, such as the storage device(s) 525.
  • Volatile media includes, without limitation, dynamic memory, such as the working memory 535.
  • a computer readable medium may take the form of transmission media, which includes, without limitation, coaxial cables, copper wire, and fiber optics, including the wires that comprise the bus 505, as well as the various components of the communication subsystem 530 (and/or the media by which the communications subsystem 530 provides communication with other devices).
  • transmission media can also take the form of waves (including without limitation radio, acoustic, and/or light waves, such as those generated during radio-wave and infra-red data communications).
  • Common forms of physical and/or tangible computer readable media include, for example, a floppy disk, a flexible disk, a hard disk, magnetic tape, or any other magnetic medium, a CD-ROM, any other optical medium, punch cards, paper tape, any other physical medium with patterns of holes, a RAM, a PROM, an EPROM, a FLASH-EPROM, any other memory chip or cartridge, a carrier wave as described hereinafter, or any other medium from which a computer can read instructions and/or code.
  • Various forms of computer readable media may be involved in carrying one or more sequences of one or more instructions to the processor(s) 510 for execution.
  • the instructions may initially be carried on a magnetic disk and/or optical disc of a remote computer.
  • a remote computer might load the instructions into its dynamic memory and send the instructions as signals over a transmission medium to be received and/or executed by the computer or hardware system 500.
  • These signals, which might be in the form of electromagnetic signals, acoustic signals, optical signals, and/or the like, are all examples of carrier waves on which instructions can be encoded, in accordance with various embodiments of the invention.
  • the communications subsystem 530 (and/or components thereof) generally will receive the signals, and the bus 505 then might carry the signals (and/or the data, instructions, etc. carried by the signals) to the working memory 535, from which the processor(s) 510 retrieves and executes the instructions.
  • the instructions received by the working memory 535 may optionally be stored on a storage device 525 either before or after execution by the processor(s) 510.
  • a set of embodiments comprises methods and systems for implementing image rendering, and, more particularly, methods, systems, and apparatuses for implementing three-dimensional ("3D") and animation support for user interface ("UI") controls.
  • Fig. 6 illustrates a schematic diagram of a system 600 that can be used in accordance with one set of embodiments.
  • the system 600 can include one or more user computers, user devices, or customer devices 605.
  • a user computer, user device, or customer device 605 can be a general purpose personal computer (including, merely by way of example, desktop computers, tablet computers, laptop computers, handheld computers, and the like, running any appropriate operating system, several of which are available from vendors such as Apple, Microsoft Corp., and the like), cloud computing devices, a server(s), and/or a workstation computer(s) running any of a variety of commercially-available UNIXTM or UNIX-like operating systems.
  • a user computer, user device, or customer device 605 can also have any of a variety of applications, including one or more applications configured to perform methods provided by various embodiments (as described above, for example), as well as one or more office applications, database client and/or server applications, and/or web browser applications.
  • a user computer, user device, or customer device 605 can be any other electronic device, such as a thin-client computer, Internet-enabled mobile telephone, and/or personal digital assistant, capable of communicating via a network (e.g., the network(s) 610 described below) and/or of displaying and navigating web pages or other types of electronic documents.
  • although the system 600 is shown with two user computers, user devices, or customer devices 605, any number of user computers, user devices, or customer devices can be supported.
  • Some embodiments operate in a networked environment, which can include a network(s) 610.
  • the network(s) 610 can be any type of network familiar to those skilled in the art that can support data communications using any of a variety of commercially-available (and/or free or proprietary) protocols, including, without limitation, TCP/IP, SNATM, IPXTM, AppleTalkTM, and the like.
  • merely by way of example, the network(s) 610 can include a local area network ("LAN"); a wide-area network ("WAN"); a wireless wide area network ("WWAN"); a virtual network, such as a virtual private network ("VPN"); the Internet; an intranet; an extranet; a public switched telephone network ("PSTN"); an infra-red network; a wireless network, including, without limitation, a network operating under any of the IEEE 802.11 suite of protocols, the BluetoothTM protocol known in the art, and/or any other wireless protocol; and/or any combination of these and/or other networks.
  • the network might include an access network of the service provider (e.g., an Internet service provider (“ISP”)).
  • the network might include a core network of the service provider, and/or the Internet.
  • Embodiments can also include one or more server computers 615.
  • Each of the server computers 615 may be configured with an operating system, including, without limitation, any of those discussed above, as well as any commercially (or freely) available server operating systems.
  • Each of the servers 615 may also be running one or more applications, which can be configured to provide services to one or more clients 605 and/or other servers 615.
  • one of the servers 615 might be a data server, a web server, a cloud computing device(s), or the like, as described above.
  • the data server might include (or be in communication with) a web server, which can be used, merely by way of example, to process requests for web pages or other electronic documents from user computers 605.
  • the web server can also run a variety of server applications, including HTTP servers, FTP servers, CGI servers, database servers, Java servers, and the like.
  • the web server may be configured to serve web pages that can be operated within a web browser on one or more of the user computers 605 to perform methods of the invention.
  • the server computers 615 might include one or more application servers, which can be configured with one or more applications accessible by a client running on one or more of the client computers 605 and/or other servers 615.
  • the server(s) 615 can be one or more general purpose computers capable of executing programs or scripts in response to the user computers 605 and/or other servers 615, including, without limitation, web applications (which might, in some cases, be configured to perform methods provided by various embodiments).
  • a web application can be implemented as one or more scripts or programs written in any suitable programming language, such as JavaTM, C, C#TM or C++, and/or any scripting language, such as Perl, Python, or TCL, as well as combinations of any programming and/or scripting languages.
  • the application server(s) can also include database servers, including, without limitation, those commercially available from OracleTM, MicrosoftTM, SybaseTM, IBMTM, and the like, which can process requests from clients (including, depending on the configuration, dedicated database clients, API clients, web browsers, etc.) running on a user computer, user device, or customer device 605 and/or another server 615.
  • an application server can perform one or more of the processes for implementing image rendering and, more particularly, for implementing 3D and animation support for UI controls, as described in detail above.
  • Data provided by an application server may be formatted as one or more web pages (comprising HTML, JavaScript, etc., for example) and/or may be forwarded to a user computer 605 via a web server (as described above, for example).
  • a web server might receive web page requests and/or input data from a user computer 605 and/or forward the web page requests and/or input data to an application server.
  • a web server may be integrated with an application server.
  • one or more servers 615 can function as a file server and/or can include one or more of the files (e.g., application code, data files, etc.) necessary to implement various disclosed methods, incorporated by an application running on a user computer 605 and/or another server 615.
  • a file server can include all necessary files, allowing such an application to be invoked remotely by a user computer, user device, or customer device 605 and/or server 615.
  • the system can include one or more databases 620a-620n (collectively, "databases 620").
  • the location of each of the databases 620 is discretionary: merely by way of example, a database 620a might reside on a storage medium local to (and/or resident in) a server 615a (and/or a user computer, user device, or customer device 605).
  • a database 620n can be remote from any or all of the computers 605, 615, so long as it can be in communication (e.g., via the network 610) with one or more of these.
  • a database 620 can reside in a storage-area network ("SAN") familiar to those skilled in the art.
  • system 600 may further comprise computing system 625 and corresponding database(s) 655 (similar to computing systems 105, 105a, and 105b and corresponding database(s) 155a and 155b of Fig. 1, or the like).
  • Computing system 625 may comprise three-dimensional ("3D") user interface ("UI") renderer 630 (similar to 3D UI renderer 110 of Figs. 1 and 2, or the like), which may include 3D scene manager 635 (similar to 3D scene manager 115 of Figs. 1 and 2, or the like), animation manager 640 (similar to animation manager 120 of Figs. 1 and 2, or the like), and rendering engine 645 (similar to rendering engine 125 of Figs. 1 and 2, or the like), or the like, and one or more central processing units and/or graphics processing units ("CPUs/GPUs") 650a-650n (similar to CPUs/GPUs 130a-130n of Fig. 1, or the like).
  • System 600 may further comprise system composer 660 (similar to system composers 145, 145a, and 145b of Figs. 1 and 2, or the like), and system UI framework 665 (similar to system UI framework 150, 150a, and 150b of Figs. 1 and 2, or the like) and corresponding database(s) 670 (similar to database(s) 175 of Fig. 1, or the like).
  • computing system 625 may receive UI layout data (e.g., UI layout data 675, or the like) of a first UI from a UI framework (e.g., system UI framework 665, or the like).
  • the computing system may generate a first 3D UI, in some cases, by parsing two-dimensional ("2D") layout of the first UI, based at least in part on the UI layout data.
  • the computing system may render the generated first 3D UI, and may cause the rendered first 3D UI to be displayed within a display screen of a user device (e.g., user device 605a or 605b, or the like) associated with a user.
  • a 3D scene manager may receive the UI layout data (e.g., UI layout data 675, or the like), and may generate the first 3D UI.
  • a 3D rendering engine (e.g., 3D rendering engine 645, or the like) may render the generated first 3D UI (e.g., rendered image(s) 680, or the like).
  • a user input processor of the 3D scene manager (e.g., 3D scene manager 635, or the like) of the computing system may receive one or more user inputs.
  • the one or more user inputs may include, but are not limited to, at least one of one or more selection inputs, one or more translation inputs, one or more rotation inputs, one or more tilt inputs, one or more zoom inputs (e.g., pinching inputs, i.e., pinching in to zoom in and pinching out to zoom out, or the like), one or more swipe inputs, one or more dragging inputs, one or more gesture inputs, one or more tracing inputs, one or more camera translation inputs, one or more rotation inputs, one or more accelerometer inputs, or one or more gyroscope inputs, and/or the like.
  • generating the first 3D UI may comprise the computing system generating a first 3D UI, by parsing 2D layout of the first UI, based at least in part on the UI layout data and based at least in part on the one or more user inputs.
  • At least one of a 3D scene geometry manager, a material manager, a camera manager, or a light manager, and/or the like, of a 3D scene manager of the computing system may determine corresponding at least one of UI geometry, material, camera angle, or lighting of one or more UI objects within the first UI, based at least in part on the UI layout data.
  • generating the first 3D UI may comprise the computing system generating a first 3D UI, by parsing 2D layout of the first UI, based at least in part on the UI layout data and based at least in part on the determined at least one of UI geometry, material, camera angle, or lighting of the one or more UI objects within the first UI.
  • rendering the generated first 3D UI may comprise the computing system rendering the generated first 3D UI, by using the determined at least one of UI geometry, material, camera angle, or lighting of the one or more UI objects within the first UI to construct at least one UI-specific 3D render pipeline and by using the constructed at least one UI-specific 3D render pipeline to render the generated first 3D UI.
  • a hardware input processor of a 3D scene manager of the computing system may receive one or more hardware inputs.
  • the one or more hardware inputs may include, without limitation, at least one of one or more accelerometer inputs, one or more gyroscope inputs, one or more light sensor inputs, or one or more hardware-based user inputs, and/or the like.
  • the one or more hardware-based user inputs may include, but are not limited to, at least one of one or more finger gesture inputs, one or more facial image capture inputs, one or more eye image capture inputs, or one or more stylus inputs, and/or the like.
  • At least one of a camera manager, a light manager, or a 3D scene geometry manager of the 3D scene manager of the computing system may update corresponding at least one of camera angle, lighting, or scene geometry of one or more UI objects within the first UI, based at least in part on corresponding at least one of a combination of the one or more accelerometer inputs or the one or more gyroscope inputs, the one or more light sensor inputs, or the one or more hardware-based user inputs.
  • generating the first 3D UI may comprise an animation manager of the computing system generating a first 3D UI, by using at least one of predefined 3D animation control or physics based 3D animation control to animate 3D UI geometries in the first UI.
  • based on a determination that at least one first UI object within the generated first 3D UI contains transparent portions that are intended to be positioned in front of at least one second UI object, at least one of a dynamic geometry sorting system or a dynamic geometry subdivision system of a 3D rendering engine of the computing system may perform corresponding at least one of dynamic sorting or dynamic subdivision of the at least one first UI object and the at least one second UI object to render correct perspectives of the at least one first UI object and the at least one second UI object.
  • performing the corresponding at least one of dynamic sorting or dynamic subdivision of the at least one first UI object and the at least one second UI object may comprise: gathering, using the 3D rendering engine, UI geometries of UI objects among the at least one first UI object and the at least one second UI object; computing, using the 3D rendering engine, a bounding box for each UI object among the UI objects; sorting, using the dynamic geometry sorting system, the UI objects based on the computed bounding box for each UI object; and based on a determination that two or more bounding boxes overlap, subdividing, using the dynamic geometry subdivision system, UI geometries of overlapping UI objects corresponding to the two or more bounding boxes, and recomputing, using the 3D rendering engine, bounding boxes for each of the overlapping UI objects; and/or the like.
  • rendering the generated first 3D UI may comprise the computing system rendering the generated first 3D UI, based at least in part on one of the computed bounding box or the recomputed bounding box of each UI object.
  • the first 3D UI may comprise a plurality of UI objects.
  • two or more UI objects among the plurality of UI objects may have different z-axes or z-planes.
  • generating and rendering the first 3D UI may be based on at least one of orthographic projections or perspective projections of the two or more UI objects about the different z-axes or z-planes.
  • the first 3D UI may comprise a second 3D UI that presents a plurality of 3D UI objects within the second 3D UI as an integrated 3D UI environment.
  • the plurality of 3D UI objects may be presented as objects consistent with the integrated 3D UI environment.

Abstract

Novel tools and techniques are provided for implementing three-dimensional ("3D") and animation support for user interface ("UI") controls. In various embodiments, a computing system may receive UI layout data of a first UI from a UI framework. The computing system may generate a first 3D UI, in some cases, by parsing two-dimensional ("2D") layout of the first UI, based at least in part on the UI layout data. The computing system may render the generated first 3D UI. The computing system may cause the rendered first 3D UI to be displayed within a display screen of a user device associated with a user.

Description

3D RENDERING AND ANIMATION SUPPORT FOR UI CONTROLS
COPYRIGHT STATEMENT
[0001] A portion of the disclosure of this patent document contains material that is subject to copyright protection. The copyright owner has no objection to the facsimile reproduction by anyone of the patent document or the patent disclosure as it appears in the Patent and Trademark Office patent file or records, but otherwise reserves all copyright rights whatsoever.
FIELD
[0002] The present disclosure relates, in general, to methods, systems, and apparatuses for implementing image rendering, and, more particularly, to methods, systems, and apparatuses for implementing three-dimensional ("3D") and animation support for user interface ("UI") controls.
BACKGROUND
[0003] Conventional UI frameworks provide very limited 3D rendering and animation support. Existing UI frameworks (such as UIKit, Flutter®, etc.) are mostly two-dimensional ("2D") UI frameworks, and only support limited 3D transformation and animation. UI frameworks in popular game engines (such as Unity® and Unreal®, etc.) provide a very limited range of UI views. Although they support full 3D transformation and animation, they are embedded in a heavyweight graphics processing unit ("GPU") pipeline (i.e., requiring greater memory and/or other system resources, or the like), and hence are not suitable for use for system-level UIs.
[0004] Hence, there is a need for more robust and scalable solutions for implementing image rendering, and, more particularly, to methods, systems, and apparatuses for implementing 3D and animation support for UI controls.
SUMMARY
[0005] The techniques of this disclosure generally relate to tools and techniques for implementing image rendering, and, more particularly, to methods, systems, and apparatuses for implementing 3D and animation support for UI controls.
[0006] In an aspect, a method may comprise receiving, using a computing system, UI layout data of a first UI from a UI framework; generating, using the computing system, a first 3D UI, by parsing 2D layout of the first UI, based at least in part on the UI layout data; rendering, using the computing system, the generated first 3D UI; and causing, using the computing system, the rendered first 3D UI to be displayed within a display screen of a user device associated with a user.
[0007] In another aspect, an apparatus might comprise at least one processor and a non- transitory computer readable medium communicatively coupled to the at least one processor. The non-transitory computer readable medium might have stored thereon computer software comprising a set of instructions that, when executed by the at least one processor, causes the apparatus to: receive UI layout data of a first UI from a UI framework; generate a first 3D UI, by parsing 2D layout of the first UI, based at least in part on the UI layout data; render the generated first 3D UI; and cause the rendered first 3D UI to be displayed within a display screen of a user device associated with a user.
[0008] In yet another aspect, a system might comprise a computing system, which might comprise a 3D scene manager, a 3D rendering engine, at least one first processor, and a first non-transitory computer readable medium communicatively coupled to the at least one first processor. The first non-transitory computer readable medium might have stored thereon computer software comprising a first set of instructions that, when executed by the at least one first processor, causes the computing system to: receive, using the 3D scene manager, UI layout data of a first UI from a UI framework; generate, using the 3D scene manager, a first 3D UI, by parsing 2D layout of the first UI, based at least in part on the UI layout data; render, using the 3D rendering engine, the generated first 3D UI; and cause the rendered first 3D UI to be displayed within a display screen of a user device associated with a user.
[0009] Various modifications and additions can be made to the embodiments discussed without departing from the scope of the invention. For example, while the embodiments described above refer to particular features, the scope of this invention also includes embodiments having different combinations of features and embodiments that do not include all of the above-described features.
[0010] The details of one or more aspects of the disclosure are set forth in the accompanying drawings and the description below. Other features, objects, and advantages of the techniques described in this disclosure will be apparent from the description and drawings, and from the claims.
BRIEF DESCRIPTION OF THE DRAWINGS
[0011] A further understanding of the nature and advantages of particular embodiments may be realized by reference to the remaining portions of the specification and the drawings, in which like reference numerals are used to refer to similar components. In some instances, a sub-label is associated with a reference numeral to denote one of multiple similar components. When reference is made to a reference numeral without specification to an existing sub-label, it is intended to refer to all such multiple similar components.
[0012] Fig. 1 is a schematic diagram illustrating a system for implementing three- dimensional ("3D") and animation support for user interface ("UI") controls, in accordance with various embodiments.
[0013] Fig. 2 is a schematic block flow diagram illustrating a non-limiting example of a method for implementing 3D and animation support for UI controls, in accordance with various embodiments.
[0014] Figs. 3A-3G are schematic diagrams illustrating various non-limiting examples of 3D transformations and/or 3D animations that may be utilized for rendering 3D UI during implementation of 3D and animation support for UI controls, in accordance with various embodiments.
[0015] Figs. 4A-4E are flow diagrams illustrating a method for implementing 3D and animation support for UI controls, in accordance with various embodiments.
[0016] Fig. 5 is a block diagram illustrating an example of computer or system hardware architecture, in accordance with various embodiments.
[0017] Fig. 6 is a block diagram illustrating a networked system of computers, computing systems, or system hardware architecture, which can be used in accordance with various embodiments.
DETAILED DESCRIPTION
[0018] Overview
[0019] Various embodiments provide tools and techniques for implementing image rendering, and, more particularly, to methods, systems, and apparatuses for implementing three-dimensional ("3D") and animation support for user interface ("UI") controls.
[0020] In various embodiments, a computing system may receive UI layout data of a first UI from a UI framework; may generate a first 3D UI, by parsing two-dimensional ("2D") layout of the first UI, based at least in part on the UI layout data; may render the generated first 3D UI; and may cause the rendered first 3D UI to be displayed within a display screen of a user device associated with a user.
[0021] In some embodiments, the computing system may comprise at least one of a 3D UI renderer, a machine learning system, an artificial intelligence ("AI") system, a deep learning system, a neural network, a processor on the user device, one or more graphics processing units ("GPUs"), a server computer over a network, a cloud computing system, or a distributed computing system, and/or the like. Alternatively, or additionally, the computing system may comprise a 3D scene manager and a 3D rendering engine, wherein the 3D scene manager may receive the UI layout data and may generate the first 3D UI, and wherein the 3D rendering engine renders the generated first 3D UI.
[0022] According to some embodiments, a user input processor of a 3D scene manager of the computing system may receive one or more user inputs, the one or more user inputs comprising at least one of one or more selection inputs, one or more translation inputs, one or more rotation inputs, one or more tilt inputs, one or more zoom inputs, one or more swipe inputs, one or more dragging inputs, one or more gesture inputs, one or more tracing inputs, one or more camera translation inputs, one or more rotation inputs, one or more accelerometer inputs, or one or more gyroscope inputs, and/or the like. In some cases, generating the first 3D UI may comprise generating, using the computing system, a first 3D UI, by parsing 2D layout of the first UI, based at least in part on the UI layout data and based at least in part on the one or more user inputs.
[0023] Alternatively, or additionally, at least one of a 3D scene geometry manager, a material manager, a camera manager, or a light manager, and/or the like, of a 3D scene manager of the computing system may determine corresponding at least one of UI geometry, material, camera angle, or lighting of one or more UI objects within the first UI, based at least in part on the UI layout data. In some instances, generating the first 3D UI may comprise generating, using the computing system, a first 3D UI, by parsing two-dimensional ("2D") layout of the first UI, based at least in part on the UI layout data and based at least in part on the determined at least one of UI geometry, material, camera angle, or lighting of the one or more UI objects within the first UI. In some cases, rendering the generated first 3D UI may comprise rendering, using the computing system, the generated first 3D UI, by using the determined at least one of UI geometry, material, camera angle, or lighting of the one or more UI objects within the first UI to construct at least one UI-specific 3D render pipeline and by using the constructed at least one UI-specific 3D render pipeline to render the generated first 3D UI.
[0024] Alternatively, or additionally, a hardware input processor of a 3D scene manager of the computing system may receive one or more hardware inputs, the one or more hardware inputs comprising at least one of one or more accelerometer inputs, one or more gyroscope inputs, one or more light sensor inputs, or one or more hardware-based user inputs, wherein the one or more hardware-based user inputs comprise at least one of one or more finger gesture inputs, one or more facial image capture inputs, one or more eye image capture inputs, or one or more stylus inputs, and/or the like. At least one of a camera manager, a light manager, or a 3D scene geometry manager of the 3D scene manager of the computing system may update corresponding at least one of camera angle, lighting, or scene geometry of one or more UI objects within the first UI, based at least in part on corresponding at least one of a combination of the one or more accelerometer inputs or the one or more gyroscope inputs, the one or more light sensor inputs, or the one or more hardware-based user inputs. In such cases, generating the first 3D UI may comprise generating, using an animation manager of the computing system, a first 3D UI, by using at least one of predefined 3D animation control or physics-based 3D animation control to animate 3D UI geometries in the first UI.
[0025] In some embodiments, based on a determination that at least one first UI object within the generated first 3D UI contains transparent portions that are intended to be positioned in front of at least one second UI object, at least one of a dynamic geometry sorting system or a dynamic geometry subdivision system of a 3D rendering engine of the computing system may perform corresponding at least one of dynamic sorting or dynamic subdivision of the at least one first UI object and the at least one second UI object to render correct perspectives of the at least one first UI object and the at least one second UI object. In some cases, performing the corresponding at least one of dynamic sorting or dynamic subdivision of the at least one first UI object and the at least one second UI object may comprise: gathering, using the 3D rendering engine, UI geometries of UI objects among the at least one first UI object and the at least one second UI object; computing, using the 3D rendering engine, a bounding box for each UI object among the UI objects; sorting, using the dynamic geometry sorting system, the UI objects based on the computed bounding box for each UI object; and based on a determination that two or more bounding boxes overlap, subdividing, using the dynamic geometry subdivision system, UI geometries of overlapping UI objects corresponding to the two or more bounding boxes, and recomputing, using the 3D rendering engine, bounding boxes for each of the overlapping UI objects; and/or the like. In some instances, rendering the generated first 3D UI may comprise rendering, using the computing system, the generated first 3D UI, based at least in part on one of the computed bounding box or the recomputed bounding box of each UI object.
[0026] According to some embodiments, the first 3D UI may comprise a plurality of UI objects, wherein two or more UI objects among the plurality of UI objects may have different z-axes or z-planes, wherein generating and rendering the first 3D UI may be based on at least one of orthographic projections or perspective projections of the two or more UI objects about the different z-axes or z-planes.
[0027] In some embodiments, the first 3D UI may comprise a second 3D UI that presents a plurality of 3D UI objects within the second 3D UI as an integrated 3D UI environment, wherein the plurality of 3D UI objects may be presented as objects consistent with the integrated 3D UI environment.
[0028] The various aspects described herein provide a 3D UI renderer system that renders 2D UIs (or UI objects or controls, or the like) into 3D UIs (or UI objects or controls, or the like). In this manner, 3D UI objects or controls may be dynamically rendered rather than left as static 2D UI objects or controls as in conventional launchers.
[0029] These and other aspects of the system and method for implementing 3D and animation support for UI controls are described in greater detail with respect to the figures. [0030] The following detailed description illustrates a few embodiments in further detail to enable one of skill in the art to practice such embodiments. The described examples are provided for illustrative purposes and are not intended to limit the scope of the invention. [0031] In the following description, for the purposes of explanation, numerous details are set forth in order to provide a thorough understanding of the described embodiments. It will be apparent to one skilled in the art, however, that other embodiments of the present invention may be practiced without some of these details. In other instances, some structures and devices are shown in block diagram form. Several embodiments are described herein, and while various features are ascribed to different embodiments, it should be appreciated that the features described with respect to one embodiment may be incorporated with other embodiments as well. By the same token, however, no single feature or features of any described embodiment should be considered essential to every embodiment of the invention, as other embodiments of the invention may omit such features. [0032] Unless otherwise indicated, all numbers used herein to express quantities, dimensions, and so forth used should be understood as being modified in all instances by the term "about." In this application, the use of the singular includes the plural unless specifically stated otherwise, and use of the terms "and" and "or" means "and/or" unless otherwise indicated. Moreover, the use of the term "including," as well as other forms, such as "includes" and "included," should be considered non-exclusive. Also, terms such as "element" or "component" encompass both elements and components comprising one unit and elements and components that comprise more than one unit, unless specifically stated otherwise.
[0033] Various embodiments as described herein - while embodying (in some cases) software products, computer-performed methods, and/or computer systems - represent tangible, concrete improvements to existing technological areas, including, without limitation, UI object (or control) rendering technology, application UI object (or control) rendering technology, UI renderer technology, 3D UI renderer technology, and/or the like. In other aspects, some embodiments can improve the functioning of user equipment or systems themselves (e.g., UI object (or control) rendering systems, application UI object (or control) rendering systems, UI renderer systems, 3D UI renderer systems, etc.), for example, by receiving, using a computing system, user interface ("UI") layout data of a first UI from a UI framework; generating, using the computing system, a first three-dimensional ("3D") UI, by parsing two-dimensional ("2D") layout of the first UI, based at least in part on the UI layout data; rendering, using the computing system, the generated first 3D UI; and causing, using the computing system, the rendered first 3D UI to be displayed within a display screen of a user device associated with a user; and/or the like.
[0034] In particular, to the extent any abstract concepts are present in the various embodiments, those concepts can be implemented as described herein by devices, software, systems, and methods that involve novel functionality (e.g., steps or operations), such as, rendering and displaying UI objects (or controls) as 3D UI objects (or controls) and/or the like, to name a few examples, that extend beyond mere conventional computer processing operations. These functionalities can produce tangible results outside of the implementing computer system, including, merely by way of example, providing a 3D UI renderer that renders 2D UI objects (or controls) as 3D UI objects (or controls), where UI objects (or controls) may be dynamically rendered in 3D rather than left as static 2D UI objects (or controls) as in conventional systems, at least some of which may be observed or measured by users, game/content developers, and/or user device manufacturers.
[0035] Some Embodiments
[0036] We now turn to the embodiments as illustrated by the drawings. Figs. 1-6 illustrate some of the features of the method, system, and apparatus for implementing image rendering, and, more particularly, to methods, systems, and apparatuses for implementing three-dimensional ("3D") and animation support for user interface ("UI") controls, as referred to above. The methods, systems, and apparatuses illustrated by Figs. 1-6 refer to examples of different embodiments that include various components and steps, which can be considered alternatives or which can be used in conjunction with one another in the various embodiments. The description of the illustrated methods, systems, and apparatuses shown in Figs. 1-6 is provided for purposes of illustration and should not be considered to limit the scope of the different embodiments.
[0037] With reference to the figures, Fig. 1 is a schematic diagram illustrating a system 100 for implementing 3D and animation support for UI controls, in accordance with various embodiments.
[0038] In the non-limiting embodiment of Fig. 1, system 100 may comprise computing system 105, which may include, but is not limited to, a three-dimensional ("3D") user interface ("UI") renderer 110 and one or more central processing units and/or graphics processing units ("CPUs/GPUs") 130a-130n (collectively, "CPUs/GPUs 130" or the like). The 3D UI renderer 110 may include, without limitation, a 3D scene manager 115, an animation manager 120, and/or a rendering engine 125. The computing system 105 either may be an integrated computing system 105a as part of a user device 135 or may be a remote computing system 105b (in some cases, as part of a network(s) 180, or the like), and/or the like. In some embodiments, the computing system 105 may include, but is not limited to, at least one of a 3D UI renderer (e.g., 3D UI renderer 110, or the like), a machine learning system, an artificial intelligence ("AI") system, a deep learning system, a neural network, a processor on the user device (e.g., user device 135, or the like), one or more graphics processing units ("GPUs"), a server computer over a network, a cloud computing system, or a distributed computing system, and/or the like.
[0039] In some embodiments, the user device 135 may include, without limitation, at least one of computing system 105a, user input device 140a, system composer 145a, system UI framework 150a, data storage 155a, communications system 160, display screen 165a, or audio playback device 170, and/or the like. In some cases, system 100 may further comprise database(s) 155b in network(s) 180 that is communicatively coupled to computing system 105b. System 100 may further comprise network-based system composer 145b and system UI framework 150b (and corresponding database(s) 175) in network(s) 180. System 100 may further comprise at least one of user input devices 140b, display devices 165b, and/or other user devices 185, and/or the like. Each of user device 135, user input devices 140b, display devices 165b, and/or other user devices 185, or the like, may communicatively couple to at least one of computing system 105b, network-based system composer 145b, system UI framework 150b, and/or each other via network(s) 180 and via wired communications lines and/or via wireless communications lines (as depicted in Fig. 1 by lightning bolt symbols).
[0040] In some embodiments, networks 180 may each include, without limitation, one of a local area network ("LAN"), including, without limitation, a fiber network, an Ethernet network, a Token-Ring™ network, and/or the like; a wide-area network ("WAN"); a wireless wide area network ("WWAN"); a virtual network, such as a virtual private network ("VPN"); the Internet; an intranet; an extranet; a public switched telephone network ("PSTN"); an infra-red network; a wireless network, including, without limitation, a network operating under any of the IEEE 802.11 suite of protocols, the Bluetooth™ protocol known in the art, and/or any other wireless protocol; and/or any combination of these and/or other networks. In a particular embodiment, the network(s) 180 may include an access network of the service provider (e.g., an Internet service provider ("ISP")). In another embodiment, the network(s) 180 may include a core network of the service provider and/or the Internet.
[0041] In operation, computing system 105, 105a, or 105b (collectively, "computing system" or the like) may receive UI layout data (e.g., UI layout data 190, or the like) of a first UI from a UI framework (e.g., system UI framework 150a or 150b, or the like). The computing system may generate a first 3D UI, in some cases, by parsing two-dimensional ("2D") layout of the first UI, based at least in part on the UI layout data. The computing system may render the generated first 3D UI, and may cause the rendered first 3D UI to be displayed within a display screen (e.g., display screen 165a or display devices 165b, or the like) of a user device (e.g., user device 135 or other user devices 185, or the like) associated with a user.
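By way of illustration only, the following sketch outlines this receive-parse-animate-render flow. The class and method names (e.g., UIRenderer3D, parse_layout, animate, render) are hypothetical stand-ins for the roles of the 3D scene manager, animation manager, and rendering engine described herein, and do not reflect any particular implementation.

```python
# Illustrative sketch only; all names below are hypothetical.
class UIRenderer3D:
    """Coordinates the scene manager, animation manager, and rendering engine."""

    def __init__(self, scene_manager, animation_manager, rendering_engine):
        self.scene_manager = scene_manager
        self.animation_manager = animation_manager
        self.rendering_engine = rendering_engine

    def render_frame(self, ui_layout_data, user_inputs=None, hardware_inputs=None):
        # 1. Parse the 2D layout received from the UI framework into a 3D scene,
        #    optionally taking user and hardware inputs into account.
        scene = self.scene_manager.parse_layout(ui_layout_data, user_inputs, hardware_inputs)
        # 2. Animate the 3D UI geometries (predefined and/or physics-based control).
        animated_scene = self.animation_manager.animate(scene)
        # 3. Construct a UI-specific render pipeline and render the final frame.
        return self.rendering_engine.render(animated_scene)
```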
[0042] In some embodiments, a 3D scene manager (e.g., 3D scene manager 115, or the like) may receive the UI layout data (e.g., UI layout data 190, or the like), and may generate the first 3D UI. In some instances, a 3D rendering engine (e.g., 3D rendering engine 125, or the like) may render the generated first 3D UI (e.g., rendered image(s) 195, or the like). [0043] According to some embodiments, a user input processor of the 3D scene manager (e.g., 3D scene manager 115, or the like) of the computing system may receive one or more user inputs. In some instances, the one or more user inputs may include, but are not limited to, at least one of one or more selection inputs, one or more translation inputs, one or more rotation inputs, one or more tilt inputs, one or more zoom inputs (e.g., pinching inputs, i.e., pinching in to zoom in and pinching out to zoom out, or the like), one or more swipe inputs, one or more dragging inputs, one or more gesture inputs, one or more tracing inputs, one or more camera translation inputs, one or more rotation inputs, one or more accelerometer inputs, or one or more gyroscope inputs, and/or the like. In some cases, generating the first 3D UI may comprise the computing system generating a first 3D UI, by parsing 2D layout of the first UI, based at least in part on the UI layout data and based at least in part on the one or more user inputs.
[0044] Alternatively, or additionally, at least one of a 3D scene geometry manager, a material manager, a camera manager, or a light manager, and/or the like, of a 3D scene manager of the computing system may determine corresponding at least one of UI geometry, material, camera angle, or lighting of one or more UI objects within the first UI, based at least in part on the UI layout data. In some instances, generating the first 3D UI may comprise the computing system generating a first 3D UI, by parsing 2D layout of the first UI, based at least in part on the UI layout data and based at least in part on the determined at least one of UI geometry, material, camera angle, or lighting of the one or more UI objects within the first UI. In some cases, rendering the generated first 3D UI may comprise the computing system rendering the generated first 3D UI, by using the determined at least one of UI geometry, material, camera angle, or lighting of the one or more UI objects within the first UI to construct at least one UI-specific 3D render pipeline and by using the constructed at least one UI-specific 3D render pipeline to render the generated first 3D UI.
[0045] Alternatively, or additionally, a hardware input processor of a 3D scene manager of the computing system may receive one or more hardware inputs. In some cases, the one or more hardware inputs may include, without limitation, at least one of one or more accelerometer inputs, one or more gyroscope inputs, one or more light sensor inputs, or one or more hardware-based user inputs, and/or the like. In some instances, the one or more hardware-based user inputs may include, but are not limited to, at least one of one or more finger gesture inputs, one or more facial image capture inputs, one or more eye image capture inputs, or one or more stylus inputs, and/or the like. At least one of a camera manager, a light manager, or a 3D scene geometry manager of the 3D scene manager of the computing system may update corresponding at least one of camera angle, lighting, or scene geometry of one or more UI objects within the first UI, based at least in part on corresponding at least one of a combination of the one or more accelerometer inputs or the one or more gyroscope inputs, the one or more light sensor inputs, or the one or more hardware-based user inputs. In such cases, generating the first 3D UI may comprise an animation manager of the computing system generating a first 3D UI, by using at least one of predefined 3D animation control or physics based 3D animation control to animate 3D UI geometries in the first UI.
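As one concrete (but purely illustrative) example of physics-based 3D animation control, a damped spring is a common primitive for easing a UI geometry's position, rotation, or scale toward a target value; the disclosure does not mandate this or any other specific physics model.

```python
import math

def spring_step(value, velocity, target, dt, stiffness=120.0, damping=12.0):
    # One integration step of a damped spring; repeated each frame, it eases
    # the animated value toward the target while the damping term removes overshoot.
    acceleration = stiffness * (target - value) - damping * velocity
    velocity += acceleration * dt
    value += velocity * dt
    return value, velocity

# Example: ease a 3D UI panel's rotation toward 30 degrees over one second at 60 fps.
angle, angular_velocity = 0.0, 0.0
for _ in range(60):
    angle, angular_velocity = spring_step(angle, angular_velocity, math.radians(30.0), 1.0 / 60.0)
```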
[0046] In some embodiments, based on a determination that at least one first UI object within the generated first 3D UI contains transparent portions that are intended to be positioned in front of at least one second UI object, at least one of a dynamic geometry sorting system or a dynamic geometry subdivision system of a 3D rendering engine of the computing system may perform corresponding at least one of dynamic sorting or dynamic subdivision of the at least one first UI object and the at least one second UI object to render correct perspectives of the at least one first UI object and the at least one second UI object. In some cases, performing the corresponding at least one of dynamic sorting or dynamic subdivision of the at least one first UI object and the at least one second UI object may comprise: gathering, using the 3D rendering engine, UI geometries of UI objects among the at least one first UI object and the at least one second UI object; computing, using the 3D rendering engine, a bounding box for each UI object among the UI objects; sorting, using the dynamic geometry sorting system, the UI objects based on the computed bounding box for each UI object; and based on a determination that two or more bounding boxes overlap, subdividing, using the dynamic geometry subdivision system, UI geometries of overlapping UI objects corresponding to the two or more bounding boxes, and recomputing, using the 3D rendering engine, bounding boxes for each of the overlapping UI objects; and/or the like. In some instances, rendering the generated first 3D UI may comprise the computing system rendering the generated first 3D UI, based at least in part on one of the computed bounding box or the recomputed bounding box of each UI object.
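A minimal sketch of this gather/sort/subdivide loop follows. The helper routines (compute_bbox, subdivide) and the bounding-box attributes (depth, overlaps) are assumptions made for illustration; the disclosure does not prescribe their form.

```python
def sort_and_subdivide(ui_objects, compute_bbox, subdivide, max_passes=4):
    # Gather UI geometries and compute a bounding box for each UI object.
    boxes = {id(obj): compute_bbox(obj) for obj in ui_objects}
    for _ in range(max_passes):
        # Sort back-to-front by view-space depth so that transparent UI objects
        # composite correctly over the UI objects positioned behind them.
        ui_objects.sort(key=lambda obj: boxes[id(obj)].depth, reverse=True)
        overlapping = set()
        for i, first in enumerate(ui_objects):
            for second in ui_objects[i + 1:]:
                if boxes[id(first)].overlaps(boxes[id(second)]):
                    overlapping.update((id(first), id(second)))
        if not overlapping:
            break
        # Subdivide the UI geometries of overlapping objects and recompute their
        # bounding boxes before sorting again on the next pass.
        for obj in ui_objects:
            if id(obj) in overlapping:
                subdivide(obj)
                boxes[id(obj)] = compute_bbox(obj)
    return ui_objects
```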
[0047] According to some embodiments, the first 3D UI may comprise a plurality of UI objects. In some instances, two or more UI objects among the plurality of UI objects may have different z-axes or z-planes. In some cases, generating and rendering the first 3D UI may be based on at least one of orthographic projections or perspective projections of the two or more UI objects about the different z-axes or z-planes.
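For illustration, the standard orthographic and perspective projection matrices below show how UI objects lying on different z-planes can be projected; the OpenGL-style conventions used here are an assumption for the example rather than a requirement of the disclosure.

```python
import numpy as np

def orthographic(left, right, bottom, top, near, far):
    # Orthographic projection: a UI object keeps its on-screen size regardless of its z-plane.
    return np.array([
        [2.0 / (right - left), 0.0, 0.0, -(right + left) / (right - left)],
        [0.0, 2.0 / (top - bottom), 0.0, -(top + bottom) / (top - bottom)],
        [0.0, 0.0, -2.0 / (far - near), -(far + near) / (far - near)],
        [0.0, 0.0, 0.0, 1.0],
    ])

def perspective(fov_y, aspect, near, far):
    # Perspective projection: UI objects on nearer z-planes project larger, giving a depth cue.
    g = 1.0 / np.tan(fov_y / 2.0)
    return np.array([
        [g / aspect, 0.0, 0.0, 0.0],
        [0.0, g, 0.0, 0.0],
        [0.0, 0.0, (far + near) / (near - far), 2.0 * far * near / (near - far)],
        [0.0, 0.0, -1.0, 0.0],
    ])

# Two UI objects at the same x/y but on different z-planes (homogeneous coordinates).
near_object = np.array([0.2, 0.1, -2.0, 1.0])
far_object = np.array([0.2, 0.1, -5.0, 1.0])
proj = perspective(np.radians(60.0), 16.0 / 9.0, 0.1, 100.0)
for vertex in (near_object, far_object):
    clip = proj @ vertex
    ndc = clip[:3] / clip[3]  # after the divide, the nearer object appears larger on screen
```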
[0048] In some embodiments, the first 3D UI may comprise a second 3D UI that presents a plurality of 3D UI objects within the second 3D UI as an integrated 3D UI environment. In such cases, the plurality of 3D UI objects may be presented as objects consistent with the integrated 3D UI environment.
[0049] These and other functions of the system 100 (and its components) are described in greater detail below with respect to Figs. 2-4.
[0050] Fig. 2 is a schematic block flow diagram illustrating a non-limiting example 200 of a method for implementing 3D and animation support for UI controls, in accordance with various embodiments.
[0051] With reference to the non-limiting example 200 of Fig. 2, 3D UI renderer 110 (similar to 3D UI renderer 110 of Fig. 1, or the like) may include, but is not limited to, 3D scene manager 115 (similar to 3D scene manager 115 of Fig. 1, or the like), animation manager 120 (similar to animation manager 120 of Fig. 1, or the like), and rendering engine 125 (similar to rendering engine 125 of Fig. 1, or the like). The 3D scene manager 115 may include, without limitation, at least one of user input processor 205, 3D scene geometry manager 210, material manager 215, camera manager 220, or light manager 225, and/or the like. The animation manager 120 may include, but is not limited to, at least one of predefined 3D animation control 230 or physics-based 3D animation control 235, and/or the like. The rendering engine 125 may include, without limitation, at least one of dynamic geometry sorting system 240, dynamic geometry subdivision system 245, or render pipeline constructor 250, and/or the like.
[0052] In some embodiments, the user input processor 205 may receive and process user inputs from user device 135/185 (similar to user devices 135 and/or 185 of Fig. 1, or the like). In some instances, the one or more user inputs may include, but are not limited to, at least one of one or more selection inputs, one or more translation inputs, one or more rotation inputs, one or more tilt inputs, one or more zoom inputs (e.g., pinching inputs, i.e., pinching in to zoom in and pinching out to zoom out, or the like), one or more swipe inputs, one or more dragging inputs, one or more gesture inputs, one or more tracing inputs, one or more camera translation inputs, one or more rotation inputs, one or more accelerometer inputs, or one or more gyroscope inputs, and/or the like.
[0053] The 3D scene geometry manager 210 may convert a given UI control hierarchy from the system UI framework 150 into a corresponding 3D geometry representation based on the 3D properties defined in the UI. The 3D scene manager 115 may also manage the material manager 215, the camera manager 220, and/or the light manager 225. The material manager 215 may control rendering of material textures on models of each 3D object within the UI or of the 3D UI itself. The camera manager 220 may control rendering of changes to each 3D object within the UI or the 3D UI itself based on camera perspective changes (e.g., based on camera pan, tilt, and/or zoom functions, or the like). The light manager 225 may control rendering of each 3D object or the 3D UI itself based on lighting control (e.g., based on angle, color/tint, and/or focused/diffuse aspects of the light, or the like). The 3D system UI framework 150 may, in some cases, be a stand-alone rendering component of the UI framework, and may maintain separate settings for each application, and in some cases may be switched off from the system or the application settings.
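The sketch below suggests, by way of example only, how a UI control hierarchy might be walked and lifted into 3D geometry nodes. The control attributes (x, y, z, width, height, material, children) and the make_quad helper are hypothetical and do not correspond to any particular UI framework's API.

```python
import numpy as np

def make_quad(width, height):
    # Assumed helper: a flat rectangle (two triangles) standing in for a control's geometry.
    return [(0.0, 0.0, 0.0), (width, 0.0, 0.0), (width, height, 0.0),
            (0.0, 0.0, 0.0), (width, height, 0.0), (0.0, height, 0.0)]

def build_scene_node(control, parent_transform=None):
    # Walk the UI control hierarchy and lift each control's 2D layout rectangle
    # into a 3D geometry node using its declared (or default) depth.
    transform = np.eye(4) if parent_transform is None else parent_transform.copy()
    transform[0, 3] += control.x
    transform[1, 3] += control.y
    transform[2, 3] += getattr(control, "z", 0.0)
    return {
        "geometry": make_quad(control.width, control.height),
        "material": getattr(control, "material", None),  # handed to the material manager
        "transform": transform,
        "children": [build_scene_node(child, transform) for child in control.children],
    }
```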
[0054] The animation manager 120 may receive at least one of 3D geometry information, materials information, camera information, or light information, and/or the like, from the 3D scene manager 115. Predefined 3D animation control 230 and physics-based 3D animation control 235 may generate at least one of animated 3D geometry information, animated materials information, animated camera information, or animated light information, and/or the like, based on at least one of 3D geometry information, materials information, camera information, or light information, and/or the like.
[0055] The rendering engine 125 may receive the generated at least one of animated 3D geometry information, animated materials information, animated camera information, or animated light information, and/or the like, from the animation manager 120. The dynamic geometry sorting system 240 and dynamic geometry subdivision system 245 may perform dynamic sorting and subdivision, respectively, to ensure or maintain render correctness due to any transparency among the UI objects or controls. The rendering engine 125 may use at least one of the animated 3D geometry information (including, e.g., transformation of the UI geometry, or the like), the animated material information, the animated camera information, or the animated light information, and/or the like, to construct a UI-specific 3D render pipeline (using render pipeline constructor 250, or the like) to render the final frames of the 3D UI, the 3D objects within the UI, and/or the 3D UI itself.
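One way such a UI-specific render pipeline could be assembled from the animated scene data is sketched below; the pass structure, field names, and depth-ordering policy are illustrative assumptions only.

```python
def construct_ui_render_pipeline(animated_scene):
    # Split the animated UI geometry into opaque and transparent nodes.
    nodes = animated_scene["nodes"]
    opaque = [n for n in nodes if not (n.get("material") or {}).get("transparent", False)]
    transparent = [n for n in nodes if (n.get("material") or {}).get("transparent", False)]
    return {
        "camera": animated_scene["camera"],   # animated camera information
        "lights": animated_scene["lights"],   # animated light information
        "passes": [
            # Opaque UI geometry first, front-to-back, to benefit from depth testing.
            ("opaque", sorted(opaque, key=lambda n: n["depth"])),
            # Transparent UI geometry last, back-to-front, after dynamic sorting/subdivision.
            ("transparent", sorted(transparent, key=lambda n: n["depth"], reverse=True)),
        ],
    }
```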
[0056] System composer 145 (similar to system composer 145a and/or 145b, or the like) may send current back buffer data to the rendering engine 125, and may receive rendered back buffer data from the rendering engine 125 based on the rendering of each of at least one of the final frames of the 3D UI, the 3D objects within the UI, and/or the 3D UI itself. The system composer 145 may send display buffer data based on the rendered back buffer data (similar to rendered image(s) 195 of Fig. 1, or the like), and may cause the rendered at least one of the final frames of the 3D UI, the 3D objects within the UI, and/or the 3D UI itself to be displayed within a display screen of the user device 135/185.
[0057] 3D UI renderer 110, 3D scene manager 115, animation manager 120, rendering engine 125, user device 135/185, system composer 145, system UI framework 150, UI control hierarchy, and rendered back buffer data of system 200 in Fig. 2 may be similar to 3D UI renderer 110, 3D scene manager 115, animation manager 120, rendering engine 125, user device 135 and/or 185, system composer 145a and/or 145b, system UI framework 150a and/or 150b, UI layout data 190, and rendered image(s) 195, respectively, of system 100 in Fig. 1, and the descriptions of these components of system 100 (and their functions) are applicable to the corresponding components of system 200, respectively.
[0058] These and other functions of the system 200 (and its components) are described in greater detail below with respect to Figs. 1, 3, and 4.
[0059] Figs. 3A-3G (collectively, "Fig. 3") are schematic diagrams illustrating various non-limiting examples 300, 300', 300", 300"', and 300"" of 3D transformations and/or 3D animations that may be utilized for rendering 3D UI during implementation of 3D and animation support for UI controls, in accordance with various embodiments.
[0060] With reference to the non-limiting example 300 of Fig. 3A, a 2D object 305a (in this case, a 2D image of a globe, or the like) is shown. 2D transformation may be performed on the 2D object 305a. For example, as shown in Fig. 3A, 2D transformation may include (but is not limited to) rotation about the z-axis of the globe to produce a transformed 2D object 305b. 3D transformation may also be performed on the 2D object 305a or 305b, in this case, by generating a 3D object 305c based on the 2D object 305a or 305b, where the 3D object 305c is transformed (in this case, along the direction denoted by the curved arrow). The transformation of 2D UI objects (e.g., 2D objects 305a or 305b, or the like) relies on 3x3 matrices, while transformation of 3D UI objects (e.g., 3D object 305c, or the like) relies on 4x4 matrices (e.g., matrix 305d, or the like) to support affine and projective transformation in 3D. As shown in Fig. 3A, a 2D map UI control 305a is converted into a 3D globe control 305c that allows for intuitive 3D rotation. [0061] Referring to the non-limiting example 300' of Fig. 3B, a third dimension may be introduced to convert a 2D UI control into a 3D UI control (as depicted by objects 310a, or the like). In some embodiments, the depth values of the vertices of UI geometry do not need to be the same. That is, the UI objects need not always be on the same z-plane as in the 2D UI case. For example, as shown in Fig. 3B, 3D objects 310b, which includes 3D buttons and 3D stock graph objects, have different z-axes (in this case, a z-axis for the 3D buttons and a separate z-axis for the 3D stock graph objects, or the like). To achieve 3D rendering of each UI, instead of forcing the camera to be frontal parallel, the camera could support orthographic and/or perspective projections with any pose. Dynamic sorting and subdivision algorithms may be added to maintain render correctness due to transparency among the UI controls or objects 310b. As described above, 3D transformation of the UI objects may rely on 4x4 matrices (e.g., matrix 310c, or the like) to support affine and projective transformation in 3D.
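To make the 3x3 versus 4x4 distinction concrete, the short example below (using NumPy purely as an illustrative choice) applies a 3x3 homogeneous in-plane rotation to a 2D UI point and a 4x4 homogeneous out-of-plane rotation plus translation to a 3D UI point.

```python
import numpy as np

# 2D UI transforms use 3x3 homogeneous matrices: rotation within the screen plane.
theta = np.radians(30.0)
m2d = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                [np.sin(theta),  np.cos(theta), 0.0],
                [0.0,            0.0,           1.0]])

# 3D UI transforms use 4x4 homogeneous matrices, which support affine (and, with a
# projection row, projective) transformations such as the globe-style rotation of Fig. 3A.
phi = np.radians(45.0)
m3d = np.array([[ np.cos(phi), 0.0, np.sin(phi), 0.0],   # rotation about the y-axis
                [ 0.0,         1.0, 0.0,         0.2],   # plus a small y translation
                [-np.sin(phi), 0.0, np.cos(phi), 0.0],
                [ 0.0,         0.0, 0.0,         1.0]])

point_2d = np.array([1.0, 0.0, 1.0])        # homogeneous 2D point
point_3d = np.array([1.0, 0.0, 0.0, 1.0])   # homogeneous 3D point
rotated_in_plane = m2d @ point_2d           # stays in the screen plane
rotated_out_of_plane = m3d @ point_3d       # gains depth (non-zero z component)
```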
[0062] Turning to the non-limiting examples 300" of Figs. 3C and 3D, dynamic sorting and subdivision may be performed to ensure or maintain render correctness due to transparency among the UI objects or controls, while also providing relative positions when performing 3D transformation, or the like. As shown in side view panel 315a in Fig. 3C, a first (3D) object 320a may be partially blocked behind a second (3D) object 320b relative to a field of view ("FOV") 325 of a camera or a user's eye(s). In successive front view panels 315b and 315c, the first (3D) object 320a is shown being partially blocked by the second (3D) object 320b. Although the objects 320a and 320b are depicted as cubes in Fig. 3C, the various embodiments are not so limited, and the objects may be of any suitable shape or combination of shapes. For instance, as shown in side view panel 330a in Fig. 3D, a fourth (3D) object 335b may be partially blocked behind a third (3D) object 335a relative to a FOV 340 of a camera or a user's eye(s). In successive front view panels 330b and 330c, the fourth (3D) object 335b is shown being partially blocked by the third (3D) object 335a.
[0063] Referring to the non-limiting example 300'" of Fig. 3E, a first panel 345a depicts a first set of objects (in this case, objects in the form of mountains, or the like) being placed one in front of the other, while a second panel 345b depicts a second set of objects (in this case, objects in the form of topographically changing ground surfaces, or the like), and a third panel 345c depicts a third set of objects (in this case, objects in the form of various trees on the topographically changing ground surfaces, or the like). For dynamic sorting and subdivision, a zoomed-in view 350 of the objects in the third panel 345c is shown with a triangular overlay 355a, which is further subdivided into a modified triangular overlay 355b. As depicted in the modified triangular overlay 355b, subdivided triangles separate the various different objects or sets of objects within the panel 345c. In this manner, dynamic sorting and subdivision (as described in detail above) may be performed to ensure or maintain render correctness, regardless of transparency of objects, while also providing relative positions when performing 3D transformation, or the like.
[0064] With reference to the non-limiting examples 300"" of Figs. 3F and 3G, a coherent control experience driven by the 3D UI may be implemented. In particular, based on the 3D nature of the UIs of the applications or apps, the system could display all applications within one UI environment, instead of the conventional way of keeping each app in a silo separate from other silos containing other apps. For example, the applications controlling one or more Internet of Things ("IoT") devices may be displayed as one single 3D UI environment (such as the single 3D UI environment 365 of an example residence hall IoT environment 3D UI as displayed within the display screen of user device 360, or the like, as depicted in Fig. 3F). The user can oversee the status of each device in the overview display or can navigate to each device's application to perform detailed control, or the like. Turning to Fig. 3G, the intuitive nature of the UIs allows the user to navigate to the 3D object that represents the application, and to enable interaction between the user and the object in 3D (such as the navigated view 370 of the 3D object representing the application among the applications from the single 3D UI environment 365 of Fig. 3F, or the like).
[0065] Figs. 4A-4E (collectively, "Fig. 4") are flow diagrams illustrating a method 400 for implementing 3D and animation support for UI controls, in accordance with various embodiments. Method 400 of Fig. 4A continues onto Fig. 4B following the circular marker denoted, "A," and returns to Fig. 4A following the circular marker denoted, "B."

[0066] While the techniques and procedures are depicted and/or described in a certain order for purposes of illustration, it should be appreciated that certain procedures may be reordered and/or omitted within the scope of various embodiments. Moreover, while the method 400 illustrated by Fig. 4 can be implemented by or with (and, in some cases, is described below with respect to) the systems, examples, or embodiments 100, 200, 300, 300', 300", 300'", and 300"" of Figs. 1, 2, 3A, 3B, 3C-3D, 3E, and 3F-3G, respectively (or components thereof), such methods may also be implemented using any suitable hardware (or software) implementation. Similarly, while each of the systems, examples, or embodiments 100, 200, 300, 300', 300", 300'", and 300"" of Figs. 1, 2, 3A, 3B, 3C-3D, 3E, and 3F-3G, respectively (or components thereof), can operate according to the method 400 illustrated by Fig. 4 (e.g., by executing instructions embodied on a computer readable medium), the systems, examples, or embodiments 100, 200, 300, 300', 300", 300"', and 300"" of Figs. 1, 2, 3A, 3B, 3C-3D, 3E, and 3F-3G can each also operate according to other modes of operation and/or perform other suitable procedures.
[0067] In the non-limiting embodiment of Fig. 4A, method 400, at block 402, may comprise receiving, using a computing system, user interface ("UI") layout data of a first UI from a UI framework. Method 400 may further comprise at least one of: receiving, using a user input processor of a 3D scene manager of the computing system, one or more user inputs (block 404); determining, using at least one of a 3D scene geometry manager, a material manager, a camera manager, or a light manager of a 3D scene manager of the computing system, corresponding at least one of UI geometry, material, camera angle, or lighting of one or more UI objects within the first UI, based at least in part on the UI layout data (block 406); or receiving, using a hardware input processor of a 3D scene manager of the computing system, one or more hardware inputs (block 408), and updating, using at least one of a camera manager, a light manager, or a 3D scene geometry manager of the 3D scene manager of the computing system, corresponding at least one of camera angle, lighting, or scene geometry of one or more UI objects within the first UI, based at least in part on corresponding at least one of a combination of the one or more accelerometer inputs or the one or more gyroscope inputs, the one or more light sensor inputs, or the one or more hardware-based user inputs (block 410).
[0068] In some cases, the one or more user inputs may include, without limitation, at least one of one or more selection inputs, one or more translation inputs, one or more rotation inputs, one or more tilt inputs, one or more zoom inputs, one or more swipe inputs, one or more dragging inputs, one or more gesture inputs, one or more tracing inputs, one or more camera translation inputs, one or more rotation inputs, one or more accelerometer inputs, or one or more gyroscope inputs, and/or the like. In some instances, the one or more hardware inputs may include, but are not limited to, at least one of one or more accelerometer inputs, one or more gyroscope inputs, one or more light sensor inputs, or one or more hardware-based user inputs, wherein the one or more hardware-based user inputs comprise at least one of one or more finger gesture inputs, one or more facial image capture inputs, one or more eye image capture inputs, or one or more stylus inputs, and/or the like.

[0069] At block 412, method 400 may comprise generating, using the computing system, a first three-dimensional ("3D") UI. In some embodiments, generating the first 3D UI may comprise at least one of generating, using the computing system, a first three-dimensional ("3D") UI, by parsing two-dimensional ("2D") layout of the first UI, based at least in part on the UI layout data; generating, using the computing system, a first 3D UI, by parsing 2D layout of the first UI, based at least in part on the UI layout data and based at least in part on the one or more user inputs; generating, using the computing system, a first 3D UI, by parsing 2D layout of the first UI, based at least in part on the UI layout data and based at least in part on the determined at least one of UI geometry, material, camera angle, or lighting of the one or more UI objects within the first UI; or generating, using an animation manager of the computing system, a first 3D UI, by using at least one of predefined 3D animation control or physics based 3D animation control to animate 3D UI geometries in the first UI; and/or the like.
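Merely by way of illustration, the following non-limiting sketch suggests one way in which the parsing of a 2D layout into a 3D UI (block 412) might be realized, by lifting each 2D control rectangle into a quad with a per-control depth. The data structures, control names, and depth assignment are hypothetical and are not taken from this disclosure.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Control2D:
    name: str
    x: float
    y: float
    w: float
    h: float

@dataclass
class Mesh3D:
    name: str
    vertices: List[Tuple[float, float, float]]   # four (x, y, z) quad corners

def parse_2d_layout(controls: List[Control2D], depth_step: float = 0.1) -> List[Mesh3D]:
    """Lift each 2D rectangle into a 3D quad; each control receives its own depth,
    so the resulting UI objects need not share a z-plane."""
    meshes = []
    for i, c in enumerate(controls):
        z = i * depth_step                        # hypothetical per-control depth
        meshes.append(Mesh3D(c.name, [(c.x, c.y, z), (c.x + c.w, c.y, z),
                                      (c.x + c.w, c.y + c.h, z), (c.x, c.y + c.h, z)]))
    return meshes

layout = [Control2D("button", 0, 0, 2, 1), Control2D("stock_graph", 0, 2, 4, 3)]
print(parse_2d_layout(layout))
```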
[0070] Method 400, at block 414, may comprise rendering, using the computing system, the generated first 3D UI. In some embodiments, rendering the generated first 3D UI may comprise rendering, using the computing system, the generated first 3D UI, by using the determined at least one of UI geometry, material, camera angle, or lighting of the one or more UI objects within the first UI to construct at least one UI-specific 3D render pipeline and by using the constructed at least one UI-specific 3D render pipeline to render the generated first 3D UI. Method 400 may further comprise, at block 416, causing, using the computing system, the rendered first 3D UI to be displayed within a display screen of a user device associated with a user.
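By way of a further non-limiting illustration of block 414, the sketch below assembles the determined geometry, material, camera angle, and lighting of each UI object into a UI-specific render pipeline, ordering opaque passes before transparent ones so that blending occurs over already-rendered geometry. The class and field names are assumptions made for illustration only.

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class RenderPass:
    geometry: List          # vertex data for one UI object
    material: str           # e.g., "opaque" or "transparent"
    camera: Dict            # camera angle / projection parameters
    lighting: Dict          # light direction, intensity, and so on

def build_ui_render_pipeline(ui_objects: List[Dict], camera: Dict, lighting: Dict) -> List[RenderPass]:
    """Construct one render pass per UI object and order opaque passes first."""
    passes = [RenderPass(o["geometry"], o["material"], camera, lighting) for o in ui_objects]
    passes.sort(key=lambda p: p.material == "transparent")   # False (opaque) sorts first
    return passes

def execute_pipeline(passes: List[RenderPass]) -> None:
    for p in passes:
        # A real engine would bind shaders and buffers here; the sketch only logs the pass.
        print(f"draw {len(p.geometry)} vertices with material={p.material}")

pipeline = build_ui_render_pipeline(
    [{"geometry": [(0, 0, 0)], "material": "transparent"},
     {"geometry": [(1, 1, 0), (2, 2, 0)], "material": "opaque"}],
    camera={"fov": 60}, lighting={"intensity": 1.0})
execute_pipeline(pipeline)
```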
[0071] In some embodiments, the computing system may include, without limitation, at least one of a 3D UI renderer, a machine learning system, an artificial intelligence ("AI") system, a deep learning system, a neural network, a processor on the user device, one or more graphics processing units ("GPUs"), a server computer over a network, a cloud computing system, or a distributed computing system, and/or the like. Alternatively, or additionally, the computing system may include, but is not limited to, a 3D scene manager and a 3D rendering engine, and/or the like. In some cases, the 3D scene manager may receive the UI layout data and may generate the first 3D UI. In some instances, the 3D rendering engine may render the generated first 3D UI.
[0072] Method 400 may continue onto the process at block 418 in Fig. 4B following the circular marker denoted, "A."

[0073] At block 418 in Fig. 4B (following the circular marker denoted, "A"), method 400 may comprise, based on a determination that at least one first UI object within the generated first 3D UI contains transparent portions that are intended to be positioned in front of at least one second UI object, performing, using at least one of a dynamic geometry sorting system or a dynamic geometry subdivision system of a 3D rendering engine of the computing system, corresponding at least one of dynamic sorting or dynamic subdivision of the at least one first UI object and the at least one second UI object to render correct perspectives of the at least one first UI object and the at least one second UI object. In some embodiments, performing the corresponding at least one of dynamic sorting or dynamic subdivision of the at least one first UI object and the at least one second UI object (at block 418) may comprise: gathering, using the 3D rendering engine, UI geometries of UI objects among the at least one first UI object and the at least one second UI object (block 420); computing, using the 3D rendering engine, a bounding box for each UI object among the UI objects (block 422); sorting, using the dynamic geometry sorting system, the UI objects based on the computed bounding box for each UI object (block 424); and based on a determination that two or more bounding boxes overlap, subdividing, using the dynamic geometry subdivision system, UI geometries of overlapping UI objects corresponding to the two or more bounding boxes, and recomputing, using the 3D rendering engine, bounding boxes for each of the overlapping UI objects (block 426).
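Merely by way of illustration, the non-limiting sketch below mirrors the flow of blocks 420-426: gather UI geometries, compute a bounding box for each UI object, sort the objects by bounding box depth, and subdivide the geometries of overlapping objects before re-sorting. A production implementation would treat each subdivided piece as its own sortable unit with its own recomputed bounding box; the names and the fixed iteration cap are assumptions.

```python
from dataclasses import dataclass
from typing import List, Tuple

Tri = Tuple[Tuple[float, float, float], ...]      # a triangle: three (x, y, z) vertices

@dataclass
class UIObject:
    name: str
    triangles: List[Tri]

def bounding_box(triangles: List[Tri]):
    pts = [v for t in triangles for v in t]
    return (tuple(min(p[i] for p in pts) for i in range(3)),
            tuple(max(p[i] for p in pts) for i in range(3)))

def boxes_overlap(a, b) -> bool:
    (amin, amax), (bmin, bmax) = a, b
    return all(amin[i] <= bmax[i] and bmin[i] <= amax[i] for i in range(3))

def subdivide(tri: Tri) -> List[Tri]:
    """Split one triangle into four by its edge midpoints (block 426)."""
    a, b, c = tri
    mid = lambda p, q: tuple((pi + qi) / 2 for pi, qi in zip(p, q))
    ab, bc, ca = mid(a, b), mid(b, c), mid(c, a)
    return [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]

def sort_and_subdivide(objects: List[UIObject], max_rounds: int = 2) -> List[UIObject]:
    for _ in range(max_rounds):
        boxes = {o.name: bounding_box(o.triangles) for o in objects}       # block 422
        objects.sort(key=lambda o: boxes[o.name][1][2], reverse=True)      # block 424: far-to-near
        overlaps = [(i, j) for i in range(len(objects))
                    for j in range(i + 1, len(objects))
                    if boxes_overlap(boxes[objects[i].name], boxes[objects[j].name])]
        if not overlaps:
            break
        for i, j in overlaps:                                              # block 426
            for o in (objects[i], objects[j]):
                o.triangles = [s for t in o.triangles for s in subdivide(t)]
    return objects
```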
[0074] Method 400 may return onto the process at block 414 in Fig. 4A following the circular marker denoted, "B." In some cases, rendering the generated first 3D UI (at block 414) may comprise rendering, using the computing system, the generated first 3D UI, based at least in part on one of the computed bounding box or the recomputed bounding box of each UI object.
[0075] According to some embodiments, the first 3D UI may comprise a plurality of UI objects. In some instances, two or more UI objects among the plurality of UI objects may have different z-axes or z-planes, and generating and rendering the first 3D UI may be based on at least one of orthographic projections or perspective projections of the two or more UI objects about the different z-axes or z-planes.
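As a non-limiting numerical illustration of the preceding paragraph, the sketch below builds conventional orthographic and perspective projection matrices (standard OpenGL-style, column-vector conventions are assumed; the matrices are not specific to this disclosure) and shows that two UI objects on different z-planes keep the same screen-space size under the orthographic camera but not under the perspective camera.

```python
import numpy as np

def orthographic(left, right, bottom, top, near, far):
    """Standard orthographic projection matrix."""
    return np.array([
        [2 / (right - left), 0, 0, -(right + left) / (right - left)],
        [0, 2 / (top - bottom), 0, -(top + bottom) / (top - bottom)],
        [0, 0, -2 / (far - near), -(far + near) / (far - near)],
        [0, 0, 0, 1.0]])

def perspective(fov_y_deg, aspect, near, far):
    """Standard perspective projection matrix."""
    f = 1.0 / np.tan(np.radians(fov_y_deg) / 2)
    return np.array([
        [f / aspect, 0, 0, 0],
        [0, f, 0, 0],
        [0, 0, (far + near) / (near - far), 2 * far * near / (near - far)],
        [0, 0, -1.0, 0]])

# Two UI objects at different depths along the view direction.
near_obj, far_obj = np.array([1, 1, -2, 1.0]), np.array([1, 1, -8, 1.0])
for name, proj in (("ortho", orthographic(-4, 4, -4, 4, 0.1, 100)),
                   ("persp", perspective(60, 1.0, 0.1, 100))):
    a, b = proj @ near_obj, proj @ far_obj
    print(name, a[:2] / a[3], b[:2] / b[3])   # normalized screen-space positions
```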
[0076] In some embodiments, the first 3D UI may comprise a second 3D UI that presents a plurality of 3D UI objects within the second 3D UI as an integrated 3D UI environment. In such cases, the plurality of 3D UI objects may be presented as objects consistent with the integrated 3D UI environment.

[0077] In some embodiments, with reference to Fig. 4C, method 400 may comprise: starting a 3D UI-based application (block 428); loading a UI layout file (block 430); and determining whether 3D UI is enabled (block 432). If so, method 400 may continue onto the process at block 434. If not, method 400 may continue onto the process at block 448.
[0078] Based on a determination that 3D UI is enabled, method 400 may further comprise: parsing 2D layout data and constructing a 3D UI (block 434); constructing render pipeline (block 436); fetching current buffer data (block 438); updating scene geometry and camera/lighting data based on hardware inputs (block 440); rendering the 3D UI (block 442); and checking whether the app is running (or still running) (block 444). If so, method 400 may return to the process at block 438 in a render loop. If not, method 400 may proceed to the process at block 446, at which the app finishes running.
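Merely by way of illustration, the render loop of blocks 434 through 444 might be organized along the following lines; the object and method names are placeholders introduced for this sketch and do not correspond to a disclosed API.

```python
import time

def run_3d_ui_app(ui_layout, scene_manager, rendering_engine, composer, frame_budget_s=1 / 60):
    """Illustrative render loop: parse the 2D layout, build the pipeline, then repeatedly
    fetch the buffer, update the scene from hardware inputs, and render until the app exits."""
    scene = scene_manager.parse_2d_layout(ui_layout)           # block 434: construct the 3D UI
    pipeline = rendering_engine.build_pipeline(scene)          # block 436: construct render pipeline
    while scene_manager.app_is_running():                      # block 444: is the app still running?
        frame_start = time.monotonic()
        back_buffer = composer.fetch_current_buffer()          # block 438: fetch current buffer data
        scene_manager.update_from_hardware_inputs(scene)       # block 440: camera/lighting/geometry
        rendering_engine.render(pipeline, scene, back_buffer)  # block 442: render the 3D UI
        composer.present(back_buffer)
        # Sleep off any remaining frame budget to target roughly 60 frames per second.
        time.sleep(max(0.0, frame_budget_s - (time.monotonic() - frame_start)))
```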
[0079] Alternatively, based on a determination that 3D UI is not enabled, method 400 may further comprise: setting up a conventional 2D UI (block 448); performing conventional 2D render (block 450); and checking whether the app is running (or still running) (block 452). If so, method 400 may return to the process at block 450 in a render loop. If not, method 400 may proceed to the process at block 446, at which the app finishes running.

[0080] Turning to Fig. 4D, method 400 may perform updating the 3D UI, and may comprise: starting updating of a 3D UI scene (block 454); receiving hardware inputs (block 456); passing queued inputs across frames (block 458); and fetching the hardware inputs (block 460). Method 400 may further comprise: updating scene geometry data (block 462), in some cases, based on at least one of gesture input data (e.g., finger gesture data, hand gesture data, or the like), facial image capture input data, eye image capture input data, or stylus input data, and/or the like; updating camera data (block 464), in some cases, based on at least one of gyroscope input data or accelerometer input data, and/or the like; and/or updating lighting data (block 466), in some cases, based on ambient light sensor input data, or the like. Method 400 may further comprise performing UI animations (block 468), based on at least one of the updated scene geometry data (from block 462), the updated camera data (from block 464), and/or the updated lighting data (from block 466). Method 400 may further comprise finishing updating of the 3D UI scene (block 470).
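By way of a further non-limiting illustration, the per-frame scene-update pass of Fig. 4D (blocks 454-470) could drain a queue of hardware inputs and route each input type to the scene geometry, camera, or lighting data before running UI animations. The input keys and the scene dictionary below are assumptions made for this sketch.

```python
from collections import deque

input_queue = deque()     # hardware inputs queued across frames (block 458)

def update_3d_ui_scene(scene):
    while input_queue:                                        # block 460: fetch the hardware inputs
        event = input_queue.popleft()
        kind, data = event["kind"], event["data"]
        if kind in ("finger_gesture", "facial_capture", "eye_capture", "stylus"):
            scene["geometry_offset"] = data                   # block 462: update scene geometry data
        elif kind in ("gyroscope", "accelerometer"):
            scene["camera_pose"] = data                       # block 464: update camera data
        elif kind == "ambient_light":
            scene["light_intensity"] = data                   # block 466: update lighting data
    run_ui_animations(scene)                                  # block 468: perform UI animations

def run_ui_animations(scene):
    # Placeholder for predefined or physics-based animation of the updated scene (block 468).
    pass

scene = {"geometry_offset": 0.0, "camera_pose": None, "light_intensity": 1.0}
input_queue.extend([{"kind": "gyroscope", "data": (0.0, 0.1, 0.0)},
                    {"kind": "ambient_light", "data": 0.6}])
update_3d_ui_scene(scene)
print(scene)
```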
[0081] Referring to Fig. 4E, method 400 may perform dynamic sorting and subdivision, and may comprise: starting sorting (block 472); gathering UI geometries (block 474); computing bounding box of each UI (block 476); sorting remaining UIs based on bounding boxes (block 478); and checking if overlapping bounding boxes exist (block 480). If so, method 400 may continue to the process at block 482. If not, method 400 may continue to the process at block 484. At block 482, method 400 may comprise subdividing geometry and recomputing the bounding box. Method 400 may return to the process at block 478. At block 484, method 400 may comprise outputting the sorted UI queue. Method 400 may further comprise finishing sorting (block 486).
[0082] Examples of System and Hardware Implementation
[0083] Fig. 5 is a block diagram illustrating an example of computer or system hardware architecture, in accordance with various embodiments. Fig. 5 provides a schematic illustration of one embodiment of a computer system 500 of the service provider system hardware that can perform the methods provided by various other embodiments, as described herein, and/or can perform the functions of computer or hardware system (i.e., computing systems 105, 105a, and 105b, three-dimensional ("3D") user interface ("UI") renderer 110, central processing units and/or graphics processing units ("CPUs/GPUs") 130a-130n, user devices 135 and 185, user input devices 140a and 140b, system composers 145, 145a, and 145b, system UI framework 150, 150a, and 150b, etc.), as described above. It should be noted that Fig. 5 is meant only to provide a generalized illustration of various components, of which one or more (or none) of each may be utilized as appropriate. Fig. 5, therefore, broadly illustrates how individual system elements may be implemented in a relatively separated or relatively more integrated manner.
[0084] The computer or hardware system 500 - which might represent an embodiment of the computer or hardware system (i.e., computing systems 105, 105a, and 105b, 3D UI renderer 110, CPUs/GPUs 130a-130n, user devices 135 and 185, user input devices 140a and 140b, system composers 145, 145a, and 145b, system UI framework 150, 150a, and 150b, etc.), described above with respect to Figs. 1-4 - is shown comprising hardware elements that can be electrically coupled via a bus 505 (or may otherwise be in communication, as appropriate). The hardware elements may include one or more processors 510, including, without limitation, one or more general-purpose processors and/or one or more special-purpose processors (such as microprocessors, digital signal processing chips, graphics acceleration processors, and/or the like); one or more input devices 515, which can include, without limitation, a mouse, a keyboard, and/or the like; and one or more output devices 520, which can include, without limitation, a display device, a printer, and/or the like.
[0085] The computer or hardware system 500 may further include (and/or be in communication with) one or more storage devices 525, which can comprise, without limitation, local and/or network accessible storage, and/or can include, without limitation, a disk drive, a drive array, an optical storage device, solid-state storage device such as a random access memory ("RAM") and/or a read-only memory ("ROM"), which can be programmable, flash-updateable, and/or the like. Such storage devices may be configured to implement any appropriate data stores, including, without limitation, various file systems, database structures, and/or the like.
[0086] The computer or hardware system 500 might also include a communications subsystem 530, which can include, without limitation, a modem, a network card (wireless or wired), an infra-red communication device, a wireless communication device and/or chipset (such as a Bluetooth™ device, an 802.11 device, a WiFi device, a WiMax device, a WWAN device, cellular communication facilities, etc.), and/or the like. The communications subsystem 530 may permit data to be exchanged with a network (such as the network described below, to name one example), with other computer or hardware systems, and/or with any other devices described herein. In many embodiments, the computer or hardware system 500 will further comprise a working memory 535, which can include a RAM or ROM device, as described above.
[0087] The computer or hardware system 500 also may comprise software elements, shown as being currently located within the working memory 535, including an operating system 540, device drivers, executable libraries, and/or other code, such as one or more application programs 545, which may comprise computer programs provided by various embodiments (including, without limitation, hypervisors, VMs, and the like), and/or may be designed to implement methods, and/or configure systems, provided by other embodiments, as described herein. Merely by way of example, one or more procedures described with respect to the method(s) discussed above might be implemented as code and/or instructions executable by a computer (and/or a processor within a computer); in an aspect, then, such code and/or instructions can be used to configure and/or adapt a general purpose computer (or other device) to perform one or more operations in accordance with the described methods.
[0088] A set of these instructions and/or code might be encoded and/or stored on a non-transitory computer readable storage medium, such as the storage device(s) 525 described above. In some cases, the storage medium might be incorporated within a computer system, such as the system 500. In other embodiments, the storage medium might be separate from a computer system (i.e., a removable medium, such as a compact disc, etc.), and/or provided in an installation package, such that the storage medium can be used to program, configure, and/or adapt a general purpose computer with the instructions/code stored thereon. These instructions might take the form of executable code, which is executable by the computer or hardware system 500 and/or might take the form of source and/or installable code, which, upon compilation and/or installation on the computer or hardware system 500 (e.g., using any of a variety of generally available compilers, installation programs, compression/decompression utilities, etc.) then takes the form of executable code.
[0089] It will be apparent to those skilled in the art that substantial variations may be made in accordance with particular requirements. For example, customized hardware (such as programmable logic controllers, field-programmable gate arrays, application-specific integrated circuits, and/or the like) might also be used, and/or particular elements might be implemented in hardware, software (including portable software, such as applets, etc.), or both. Further, connection to other computing devices such as network input/output devices may be employed.
[0090] As mentioned above, in one aspect, some embodiments may employ a computer or hardware system (such as the computer or hardware system 500) to perform methods in accordance with various embodiments of the invention. According to a set of embodiments, some or all of the procedures of such methods are performed by the computer or hardware system 500 in response to processor 510 executing one or more sequences of one or more instructions (which might be incorporated into the operating system 540 and/or other code, such as an application program 545) contained in the working memory 535. Such instructions may be read into the working memory 535 from another computer readable medium, such as one or more of the storage device(s) 525. Merely by way of example, execution of the sequences of instructions contained in the working memory 535 might cause the processor(s) 510 to perform one or more procedures of the methods described herein.
[0091] The terms "machine readable medium" and "computer readable medium," as used herein, refer to any medium that participates in providing data that causes a machine to operate in some fashion. In an embodiment implemented using the computer or hardware system 500, various computer readable media might be involved in providing instructions/code to processor(s) 510 for execution and/or might be used to store and/or carry such instructions/code (e.g., as signals). In many implementations, a computer readable medium is a non-transitory, physical, and/or tangible storage medium. In some embodiments, a computer readable medium may take many forms, including, but not limited to, non-volatile media, volatile media, or the like. Non-volatile media includes, for example, optical and/or magnetic disks, such as the storage device(s) 525. Volatile media includes, without limitation, dynamic memory, such as the working memory 535. In some alternative embodiments, a computer readable medium may take the form of transmission media, which includes, without limitation, coaxial cables, copper wire, and fiber optics, including the wires that comprise the bus 505, as well as the various components of the communication subsystem 530 (and/or the media by which the communications subsystem 530 provides communication with other devices). In an alternative set of embodiments, transmission media can also take the form of waves (including without limitation radio, acoustic, and/or light waves, such as those generated during radio-wave and infra-red data communications).

[0092] Common forms of physical and/or tangible computer readable media include, for example, a floppy disk, a flexible disk, a hard disk, magnetic tape, or any other magnetic medium, a CD-ROM, any other optical medium, punch cards, paper tape, any other physical medium with patterns of holes, a RAM, a PROM, an EPROM, a FLASH-EPROM, any other memory chip or cartridge, a carrier wave as described hereinafter, or any other medium from which a computer can read instructions and/or code.
[0093] Various forms of computer readable media may be involved in carrying one or more sequences of one or more instructions to the processor(s) 510 for execution. Merely by way of example, the instructions may initially be carried on a magnetic disk and/or optical disc of a remote computer. A remote computer might load the instructions into its dynamic memory and send the instructions as signals over a transmission medium to be received and/or executed by the computer or hardware system 500. These signals, which might be in the form of electromagnetic signals, acoustic signals, optical signals, and/or the like, are all examples of carrier waves on which instructions can be encoded, in accordance with various embodiments of the invention.
[0094] The communications subsystem 530 (and/or components thereof) generally will receive the signals, and the bus 505 then might carry the signals (and/or the data, instructions, etc. carried by the signals) to the working memory 535, from which the processor(s) 510 retrieves and executes the instructions. The instructions received by the working memory 535 may optionally be stored on a storage device 525 either before or after execution by the processor(s) 510.
[0095] As noted above, a set of embodiments comprises methods and systems for implementing image rendering, and, more particularly, methods, systems, and apparatuses for implementing three-dimensional ("3D") and animation support for user interface ("UI") controls. Fig. 6 illustrates a schematic diagram of a system 600 that can be used in accordance with one set of embodiments. The system 600 can include one or more user computers, user devices, or customer devices 605. A user computer, user device, or customer device 605 can be a general purpose personal computer (including, merely by way of example, desktop computers, tablet computers, laptop computers, handheld computers, and the like, running any appropriate operating system, several of which are available from vendors such as Apple, Microsoft Corp., and the like), cloud computing devices, a server(s), and/or a workstation computer(s) running any of a variety of commercially-available UNIX™ or UNIX-like operating systems. A user computer, user device, or customer device 605 can also have any of a variety of applications, including one or more applications configured to perform methods provided by various embodiments (as described above, for example), as well as one or more office applications, database client and/or server applications, and/or web browser applications. Alternatively, a user computer, user device, or customer device 605 can be any other electronic device, such as a thin-client computer, Internet-enabled mobile telephone, and/or personal digital assistant, capable of communicating via a network (e.g., the network(s) 610 described below) and/or of displaying and navigating web pages or other types of electronic documents. Although the system 600 is shown with two user computers, user devices, or customer devices 605, any number of user computers, user devices, or customer devices can be supported.
[0096] Some embodiments operate in a networked environment, which can include a network(s) 610. The network(s) 610 can be any type of network familiar to those skilled in the art that can support data communications using any of a variety of commercially-available (and/or free or proprietary) protocols, including, without limitation, TCP/IP, SNA™, IPX™, AppleTalk™, and the like. Merely by way of example, the network(s) 610 (similar to network(s) 180 of Fig. 1, or the like) can each include a local area network ("LAN"), including, without limitation, a fiber network, an Ethernet network, a Token-Ring™ network, and/or the like; a wide-area network ("WAN"); a wireless wide area network ("WWAN"); a virtual network, such as a virtual private network ("VPN"); the Internet; an intranet; an extranet; a public switched telephone network ("PSTN"); an infra-red network; a wireless network, including, without limitation, a network operating under any of the IEEE 802.11 suite of protocols, the Bluetooth™ protocol known in the art, and/or any other wireless protocol; and/or any combination of these and/or other networks. In a particular embodiment, the network might include an access network of the service provider (e.g., an Internet service provider ("ISP")). In another embodiment, the network might include a core network of the service provider, and/or the Internet.
[0097] Embodiments can also include one or more server computers 615. Each of the server computers 615 may be configured with an operating system, including, without limitation, any of those discussed above, as well as any commercially (or freely) available server operating systems. Each of the servers 615 may also be running one or more applications, which can be configured to provide services to one or more clients 605 and/or other servers 615.
[0098] Merely by way of example, one of the servers 615 might be a data server, a web server, a cloud computing device(s), or the like, as described above. The data server might include (or be in communication with) a web server, which can be used, merely by way of example, to process requests for web pages or other electronic documents from user computers 605. The web server can also run a variety of server applications, including HTTP servers, FTP servers, CGI servers, database servers, Java servers, and the like. In some embodiments of the invention, the web server may be configured to serve web pages that can be operated within a web browser on one or more of the user computers 605 to perform methods of the invention.
[0099] The server computers 615, in some embodiments, might include one or more application servers, which can be configured with one or more applications accessible by a client running on one or more of the client computers 605 and/or other servers 615. Merely by way of example, the server(s) 615 can be one or more general purpose computers capable of executing programs or scripts in response to the user computers 605 and/or other servers 615, including, without limitation, web applications (which might, in some cases, be configured to perform methods provided by various embodiments). Merely by way of example, a web application can be implemented as one or more scripts or programs written in any suitable programming language, such as Java™, C, C#™ or C++, and/or any scripting language, such as Perl, Python, or TCL, as well as combinations of any programming and/or scripting languages. The application server(s) can also include database servers, including, without limitation, those commercially available from Oracle™, Microsoft™, Sybase™, IBM™, and the like, which can process requests from clients (including, depending on the configuration, dedicated database clients, API clients, web browsers, etc.) running on a user computer, user device, or customer device 605 and/or another server 615. In some embodiments, an application server can perform one or more of the processes for implementing image rendering, and, more particularly, for implementing 3D and animation support for UI controls, as described in detail above. Data provided by an application server may be formatted as one or more web pages (comprising HTML, JavaScript, etc., for example) and/or may be forwarded to a user computer 605 via a web server (as described above, for example). Similarly, a web server might receive web page requests and/or input data from a user computer 605 and/or forward the web page requests and/or input data to an application server. In some cases, a web server may be integrated with an application server.
[0100] In accordance with further embodiments, one or more servers 615 can function as a file server and/or can include one or more of the files (e.g., application code, data files, etc.) necessary to implement various disclosed methods, incorporated by an application running on a user computer 605 and/or another server 615. Alternatively, as those skilled in the art will appreciate, a file server can include all necessary files, allowing such an application to be invoked remotely by a user computer, user device, or customer device 605 and/or server 615.
[0101] It should be noted that the functions described with respect to various servers herein (e.g., application server, database server, web server, file server, etc.) can be performed by a single server and/or a plurality of specialized servers, depending on implementation-specific needs and parameters.
[0102] In some embodiments, the system can include one or more databases 620a-620n (collectively, "databases 620"). The location of each of the databases 620 is discretionary: merely by way of example, a database 620a might reside on a storage medium local to (and/or resident in) a server 615a (and/or a user computer, user device, or customer device 605). Alternatively, a database 620n can be remote from any or all of the computers 605, 615, so long as it can be in communication (e.g., via the network 610) with one or more of these. In a particular set of embodiments, a database 620 can reside in a storage-area network ("SAN") familiar to those skilled in the art. (Likewise, any necessary files for performing the functions attributed to the computers 605, 615 can be stored locally on the respective computer and/or remotely, as appropriate.) In one set of embodiments, the database 620 can be a relational database, such as an Oracle database, that is adapted to store, update, and retrieve data in response to SQL-formatted commands. The database might be controlled and/or maintained by a database server, as described above, for example.

[0103] According to some embodiments, system 600 may further comprise computing system 625 and corresponding database(s) 655 (similar to computing systems 105, 105a, and 105b and corresponding database(s) 155a and 155b of Fig. 1, or the like). Computing system 625 may comprise three-dimensional ("3D") user interface ("UI") renderer 630 (similar to 3D UI renderer 110 of Figs. 1 and 2, or the like), which may include 3D scene manager 635 (similar to 3D scene manager 115 of Figs. 1 and 2, or the like), animation manager 640 (similar to animation manager 120 of Figs. 1 and 2, or the like), and rendering engine 645 (similar to rendering engine 125 of Figs. 1 and 2, or the like), or the like, and one or more central processing units and/or graphics processing units ("CPUs/GPUs") 650a-650n (similar to CPUs/GPUs 130a-130n of Fig. 1, or the like). System 600 may further comprise system composer 660 (similar to system composers 145, 145a, and 145b of Figs. 1 and 2, or the like), and system UI framework 665 (similar to system UI framework 150, 150a, and 150b of Figs. 1 and 2, or the like) and corresponding database(s) 670 (similar to database(s) 175 of Fig. 1, or the like).
[0104] In operation, computing system 625 (also referred to simply as "computing system" or the like) may receive UI layout data (e.g., UI layout data 675, or the like) of a first UI from a UI framework (e.g., system UI framework 665, or the like). The computing system may generate a first 3D UI, in some cases, by parsing two-dimensional ("2D") layout of the first UI, based at least in part on the UI layout data. The computing system may render the generated first 3D UI, and may cause the rendered first 3D UI to be displayed within a display screen of a user device (e.g., user device 605a or 605b, or the like) associated with a user.
[0105] In some embodiments, a 3D scene manager (e.g., 3D scene manager 635, or the like) may receive the UI layout data (e.g., UI layout data 675, or the like), and may generate the first 3D UI. In some instances, a 3D rendering engine (e.g., 3D rendering engine 645, or the like) may render the generated first 3D UI (e.g., rendered image(s) 680, or the like).

[0106] According to some embodiments, a user input processor of the 3D scene manager (e.g., 3D scene manager 635, or the like) of the computing system may receive one or more user inputs. In some instances, the one or more user inputs may include, but are not limited to, at least one of one or more selection inputs, one or more translation inputs, one or more rotation inputs, one or more tilt inputs, one or more zoom inputs (e.g., pinching inputs, i.e., pinching in to zoom in and pinching out to zoom out, or the like), one or more swipe inputs, one or more dragging inputs, one or more gesture inputs, one or more tracing inputs, one or more camera translation inputs, one or more rotation inputs, one or more accelerometer inputs, or one or more gyroscope inputs, and/or the like. In some cases, generating the first 3D UI may comprise the computing system generating a first 3D UI, by parsing 2D layout of the first UI, based at least in part on the UI layout data and based at least in part on the one or more user inputs.
[0107] Alternatively, or additionally, at least one of a 3D scene geometry manager, a material manager, a camera manager, or a light manager, and/or the like, of a 3D scene manager of the computing system may determine corresponding at least one of UI geometry, material, camera angle, or lighting of one or more UI objects within the first UI, based at least in part on the UI layout data. In some instances, generating the first 3D UI may comprise the computing system generating a first 3D UI, by parsing 2D layout of the first UI, based at least in part on the UI layout data and based at least in part on the determined at least one of UI geometry, material, camera angle, or lighting of the one or more UI objects within the first UI. In some cases, rendering the generated first 3D UI may comprise the computing system rendering the generated first 3D UI, by using the determined at least one of UI geometry, material, camera angle, or lighting of the one or more UI objects within the first UI to construct at least one UI-specific 3D render pipeline and by using the constructed at least one UI-specific 3D render pipeline to render the generated first 3D UI.
[0108] Alternatively, or additionally, a hardware input processor of a 3D scene manager of the computing system may receive one or more hardware inputs. In some cases, the one or more hardware inputs may include, without limitation, at least one of one or more accelerometer inputs, one or more gyroscope inputs, one or more light sensor inputs, or one or more hardware-based user inputs, and/or the like. In some instances, the one or more hardware-based user inputs may include, but are not limited to, at least one of one or more finger gesture inputs, one or more facial image capture inputs, one or more eye image capture inputs, or one or more stylus inputs, and/or the like. At least one of a camera manager, a light manager, or a 3D scene geometry manager of the 3D scene manager of the computing system may update corresponding at least one of camera angle, lighting, or scene geometry of one or more UI objects within the first UI, based at least in part on corresponding at least one of a combination of the one or more accelerometer inputs or the one or more gyroscope inputs, the one or more light sensor inputs, or the one or more hardware-based user inputs. In such cases, generating the first 3D UI may comprise an animation manager of the computing system generating a first 3D UI, by using at least one of predefined 3D animation control or physics based 3D animation control to animate 3D UI geometries in the first UI.
[0109] In some embodiments, based on a determination that at least one first UI object within the generated first 3D UI contains transparent portions that are intended to be positioned in front of at least one second UI object, at least one of a dynamic geometry sorting system or a dynamic geometry subdivision system of a 3D rendering engine of the computing system may perform corresponding at least one of dynamic sorting or dynamic subdivision of the at least one first UI object and the at least one second UI object to render correct perspectives of the at least one first UI object and the at least one second UI object. In some cases, performing the corresponding at least one of dynamic sorting or dynamic subdivision of the at least one first UI object and the at least one second UI object may comprise: gathering, using the 3D rendering engine, UI geometries of UI objects among the at least one first UI object and the at least one second UI object; computing, using the 3D rendering engine, a bounding box for each UI object among the UI objects; sorting, using the dynamic geometry sorting system, the UI objects based on the computed bounding box for each UI object; and based on a determination that two or more bounding boxes overlap, subdividing, using the dynamic geometry subdivision system, UI geometries of overlapping UI objects corresponding to the two or more bounding boxes, and recomputing, using the 3D rendering engine, bounding boxes for each of the overlapping UI objects; and/or the like. In some instances, rendering the generated first 3D UI may comprise the computing system rendering the generated first 3D UI, based at least in part on one of the computed bounding box or the recomputed bounding box of each UI object.
[0110] According to some embodiments, the first 3D UI may comprise a plurality of UI objects. In some instances, two or more UI objects among the plurality of UI objects may have different z-axes or z-planes. In some cases, generating and rendering the first 3D UI may be based on at least one of orthographic projections or perspective projections of the two or more UI objects about the different z-axes or z-planes.
[0111] In some embodiments, the first 3D UI may comprise a second 3D UI that presents a plurality of 3D UI objects within the second 3D UI as an integrated 3D UI environment. In such cases, the plurality of 3D UI objects may be presented as objects consistent with the integrated 3D UI environment.
[0112] These and other functions of the system 600 (and its components) are described in greater detail above with respect to Figs. 1-4.

[0113] While particular features and aspects have been described with respect to some embodiments, one skilled in the art will recognize that numerous modifications are possible. For example, the methods and processes described herein may be implemented using hardware components, software components, and/or any combination thereof. Further, while various methods and processes described herein may be described with respect to particular structural and/or functional components for ease of description, methods provided by various embodiments are not limited to any particular structural and/or functional architecture but instead can be implemented on any suitable hardware, firmware and/or software configuration. Similarly, while particular functionality is ascribed to particular system components, unless the context dictates otherwise, this functionality need not be limited to such and can be distributed among various other system components in accordance with the several embodiments.
[0114] Moreover, while the procedures of the methods and processes described herein are described in a particular order for ease of description, unless the context dictates otherwise, various procedures may be reordered, added, and/or omitted in accordance with various embodiments. Moreover, the procedures described with respect to one method or process may be incorporated within other described methods or processes; likewise, system components described according to a particular structural architecture and/or with respect to one system may be organized in alternative structural architectures and/or incorporated within other described systems. Hence, while various embodiments are described with — or without — particular features for ease of description and to illustrate some aspects of those embodiments, the various components and/or features described herein with respect to a particular embodiment can be substituted, added and/or subtracted from among other described embodiments, unless the context dictates otherwise. Consequently, although several embodiments are described above, it will be appreciated that the invention is intended to cover all modifications and equivalents within the scope of the following claims.

Claims

WHAT IS CLAIMED IS:
1. A method, comprising: receiving, using a computing system, user interface ("UI") layout data of a first UI from a UI framework; generating, using the computing system, a first three-dimensional ("3D") UI, by parsing two-dimensional ("2D") layout of the first UI, based at least in part on the UI layout data; rendering, using the computing system, the generated first 3D UI; and causing, using the computing system, the rendered first 3D UI to be displayed within a display screen of a user device associated with a user.
2. The method of claim 1, wherein the computing system comprises at least one of a 3D UI renderer, a machine learning system, an artificial intelligence ("AI") system, a deep learning system, a neural network, a processor on the user device, one or more graphics processing units ("GPUs"), a server computer over a network, a cloud computing system, or a distributed computing system.
3. The method of claim 1 or 2, wherein the computing system comprises a 3D scene manager and a 3D rendering engine, wherein the 3D scene manager receives the UI layout data and generates the first 3D UI, wherein the 3D rendering engine renders the generated first 3D UI.
4. The method of any of claims 1-3, further comprising: receiving, using a user input processor of a 3D scene manager of the computing system, one or more user inputs, the one or more user inputs comprising at least one of one or more selection inputs, one or more translation inputs, one or more rotation inputs, one or more tilt inputs, one or more zoom inputs, one or more swipe inputs, one or more dragging inputs, one or more gesture inputs, one or more tracing inputs, one or more camera translation inputs, one or more rotation inputs, one or more accelerometer inputs, or one or more gyroscope inputs; wherein generating the first 3D UI comprises generating, using the computing system, a first 3D UI, by parsing 2D layout of the first UI, based at least in part on the UI layout data and based at least in part on the one or more user inputs.
5. The method of any of claims 1-4, further comprising:
determining, using at least one of a 3D scene geometry manager, a material manager, a camera manager, or a light manager of a 3D scene manager of the computing system, corresponding at least one of UI geometry, material, camera angle, or lighting of one or more UI objects within the first UI, based at least in part on the UI layout data; wherein generating the first 3D UI comprises generating, using the computing system, a first 3D UI, by parsing 2D layout of the first UI, based at least in part on the UI layout data and based at least in part on the determined at least one of UI geometry, material, camera angle, or lighting of the one or more UI objects within the first UI; and wherein rendering the generated first 3D UI comprises rendering, using the computing system, the generated first 3D UI, by using the determined at least one of UI geometry, material, camera angle, or lighting of the one or more UI objects within the first UI to construct at least one UI-specific 3D render pipeline and by using the constructed at least one UI-specific 3D render pipeline to render the generated first 3D UI.
6. The method of any of claims 1-5, further comprising: receiving, using a hardware input processor of a 3D scene manager of the computing system, one or more hardware inputs, the one or more hardware inputs comprising at least one of one or more accelerometer inputs, one or more gyroscope inputs, one or more light sensor inputs, or one or more hardware-based user inputs, wherein the one or more hardware-based user inputs comprise at least one of one or more finger gesture inputs, one or more facial image capture inputs, one or more eye image capture inputs, or one or more stylus inputs; updating, using at least one of a camera manager, a light manager, or a 3D scene geometry manager of the 3D scene manager of the computing system, corresponding at least one of camera angle, lighting, or scene geometry of one or more UI objects within the first UI, based at least in part on corresponding at least one of a combination of the one or more accelerometer inputs or the one or more gyroscope inputs, the one or more light sensor inputs, or the one or more hardware-based user inputs;
wherein generating the first 3D UI comprises generating, using an animation manager of the computing system, a first 3D UI, by using at least one of predefined 3D animation control or physics based 3D animation control to animate 3D UI geometries in the first UI.
7. The method of any of claims 1-6, further comprising: based on a determination that at least one first UI object within the generated first 3D UI contains transparent portions that are intended to be positioned in front of at least one second UI object, performing, using at least one of a dynamic geometry sorting system or a dynamic geometry subdivision system of a 3D rendering engine of the computing system, corresponding at least one of dynamic sorting or dynamic subdivision of the at least one first UI object and the at least one second UI object to render correct perspectives of the at least one first UI object and the at least one second UI object.
8. The method of claim 7, wherein performing the corresponding at least one of dynamic sorting or dynamic subdivision of the at least one first UI object and the at least one second UI object comprises: gathering, using the 3D rendering engine, UI geometries of UI objects among the at least one first UI object and the at least one second UI object; computing, using the 3D rendering engine, a bounding box for each UI object among the UI objects; sorting, using the dynamic geometry sorting system, the UI objects based on the computed bounding box for each UI object; and based on a determination that two or more bounding boxes overlap, subdividing, using the dynamic geometry subdivision system, UI geometries of overlapping UI objects corresponding to the two or more bounding boxes, and recomputing, using the 3D rendering engine, bounding boxes for each of the overlapping UI objects; wherein rendering the generated first 3D UI comprises rendering, using the computing system, the generated first 3D UI, based at least in part on one of the computed bounding box or the recomputed bounding box of each UI object.
9. The method of any of claims 1-8, wherein the first 3D UI comprises a plurality of UI objects, wherein two or more UI objects among the plurality of UI objects have different z-axes or z-planes, wherein generating and rendering the first 3D UI are based on at least one of orthographic projections or perspective projections of the two or more UI objects about the different z-axes or z-planes.
10. The method of any of claims 1-9, wherein the first 3D UI comprises a second 3D UI that presents a plurality of 3D UI objects within the second 3D UI as an integrated 3D UI environment, wherein the plurality of 3D UI objects are presented as objects consistent with the integrated 3D UI environment.
11. An apparatus, comprising: at least one processor; and a non-transitory computer readable medium communicatively coupled to the at least one processor, the non-transitory computer readable medium having stored thereon computer software comprising a set of instructions that, when executed by the at least one processor, causes the apparatus to: receive user interface ("UI") layout data of a first UI from a UI framework; generate a first three-dimensional ("3D") UI, by parsing two-dimensional
("2D") layout of the first UI, based at least in part on the UI layout data; render the generated first 3D UI; and cause the rendered first 3D UI to be displayed within a display screen of a user device associated with a user.
12. The apparatus of claim 11, wherein the apparatus comprises at least one of a 3D UI renderer, a machine learning system, an artificial intelligence ("AI") system, a deep learning system, a neural network, a processor on the user device, one or more graphics processing units ("GPUs"), a server computer over a network, a cloud computing system, or a distributed computing system.
13. A system, comprising: a computing system, comprising: a 3D scene manager; a 3D rendering engine; at least one first processor; and a first non-transitory computer readable medium communicatively coupled to the at least one first processor, the first non-transitory computer readable medium having stored thereon computer software comprising a first set of instructions that, when executed by the at least one first processor, causes the computing system to: receive, using the 3D scene manager, user interface ("UI") layout data of a first UI from a UI framework; generate, using the 3D scene manager, a first three-dimensional ("3D") UI, by parsing two-dimensional ("2D") layout of the first UI, based at least in part on the UI layout data; render, using the 3D rendering engine, the generated first 3D UI; and cause the rendered first 3D UI to be displayed within a display screen of a user device associated with a user.
14. The system of claim 13, wherein the computing system comprises at least one of a 3D UI renderer, a machine learning system, an artificial intelligence ("AI") system, a deep learning system, a neural network, a processor on the user device, one or more graphics processing units ("GPUs"), a server computer over a network, a cloud computing system, or a distributed computing system.
15. The system of claim 13 or 14, wherein the 3D scene manager comprises a user input processor, wherein the first set of instructions, when executed by the at least one first processor, further causes the computing system to: receive, using the user input processor of the 3D scene manager of the computing system, one or more user inputs, the one or more user inputs comprising at least one of one or more selection inputs, one or more translation inputs, one or more rotation inputs, one or more tilt inputs, one or more zoom inputs, one or more swipe inputs, one or more dragging inputs, one or more gesture inputs, one or more tracing inputs, one or more camera translation inputs, one or more rotation inputs, one or more accelerometer inputs, or one or more gyroscope inputs; wherein generating the first 3D UI comprises generating a first 3D UI, by parsing 2D layout of the first UI, based at least in part on the UI layout data and based at least in part on the one or more user inputs.
16. The system of any of claims 13-15, wherein the 3D scene manager comprises at least one of a 3D scene geometry manager, a material manager, a camera manager, or a light manager, wherein the first set of instructions, when executed by the at least one first processor, further causes the computing system to:
determine, using the at least one of the 3D scene geometry manager, the material manager, the camera manager, or the light manager of the 3D scene manager of the computing system, corresponding at least one of UI geometry, material, camera angle, or lighting of one or more UI objects within the first UI, based at least in part on the UI layout data; wherein generating the first 3D UI comprises generating a first 3D UI, by parsing 2D layout of the first UI, based at least in part on the UI layout data and based at least in part on the determined at least one of UI geometry, material, camera angle, or lighting of the one or more UI objects within the first UI; and wherein rendering the generated first 3D UI comprises rendering the generated first 3D UI, by using the determined at least one of UI geometry, material, camera angle, or lighting of the one or more UI objects within the first UI to construct at least one UI-specific 3D render pipeline and by using the constructed at least one UI-specific 3D render pipeline to render the generated first 3D UI.
17. The system of any of claims 13-16, wherein the 3D scene manager comprises a hardware input processor and at least one of a camera manager, a light manager, or a 3D scene geometry manager, wherein the computing system further comprises an animation manager, wherein the first set of instructions, when executed by the at least one first processor, further causes the computing system to: receive, using the hardware input processor of the 3D scene manager of the computing system, one or more hardware inputs, the one or more hardware inputs comprising at least one of one or more accelerometer inputs, one or more gyroscope inputs, one or more light sensor inputs, or one or more hardware-based user inputs, wherein the one or more hardware-based user inputs comprise at least one of one or more finger gesture inputs, one or more facial image capture inputs, one or more eye image capture inputs, or one or more stylus inputs; update, using the at least one of the camera manager, the light manager, or the 3D scene geometry manager of the 3D scene manager of the computing system, corresponding at least one of camera angle, lighting, or scene geometry of one or more UI objects within the first UI, based at least in part on corresponding at least one of a combination of the one or more accelerometer inputs or the one or more
gyroscope inputs, the one or more light sensor inputs, or the one or more hardware-based user inputs; wherein generating the first 3D UI comprises generating, using the animation manager of the computing system, a first 3D UI, by using at least one of predefined 3D animation control or physics based 3D animation control to animate 3D UI geometries in the first UI.
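
As one hedged example of the physics-based 3D animation control referred to above (the constants and the explicit-Euler integration scheme are arbitrary choices for this sketch), a damped-spring update that an animation manager could apply per frame to a UI object's depth:

def spring_step(position, velocity, target, dt, stiffness=120.0, damping=14.0):
    """One explicit-Euler step of a damped spring pulling `position` toward
    `target`; returns the updated (position, velocity) pair."""
    acceleration = stiffness * (target - position) - damping * velocity
    velocity += acceleration * dt
    position += velocity * dt
    return position, velocity

# Animate a UI card popping forward from z = 0.0 to z = 0.2 at 60 fps.
z, v = 0.0, 0.0
for frame in range(30):
    z, v = spring_step(z, v, target=0.2, dt=1.0 / 60.0)
print(round(z, 3))   # close to 0.2 after half a second
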
18. The system of any of claims 13-17, wherein the 3D rendering engine comprises at least one of a dynamic geometry sorting system or a dynamic geometry subdivision system, wherein the first set of instructions, when executed by the at least one first processor, further causes the computing system to: based on a determination that at least one first UI object within the generated first 3D UI contains transparent portions that are intended to be positioned in front of at least one second UI object, perform, using the at least one of the dynamic geometry sorting system or the dynamic geometry subdivision system of the 3D rendering engine of the computing system, corresponding at least one of dynamic sorting or dynamic subdivision of the at least one first UI object and the at least one second UI object to render correct perspectives of the at least one first UI object and the at least one second UI object.
19. The system of claim 18, wherein performing the corresponding at least one of dynamic sorting or dynamic subdivision of the at least one first UI object and the at least one second UI object comprises: gathering, using the 3D rendering engine, UI geometries of UI objects among the at least one first UI object and the at least one second UI object; computing, using the 3D rendering engine, a bounding box for each UI object among the UI objects; sorting, using the dynamic geometry sorting system, the UI objects based on the computed bounding box for each UI object; and based on a determination that two or more bounding boxes overlap, subdividing, using the dynamic geometry subdivision system, UI geometries of overlapping UI objects corresponding to the two or more bounding boxes, and recomputing, using the 3D rendering engine, bounding boxes for each of the overlapping UI objects;
wherein rendering the generated first 3D UI comprises rendering the generated first 3D UI, based at least in part on one of the computed bounding box or the recomputed bounding box of each UI object.
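
A simplified, self-contained sketch of the sort-then-subdivide loop described in claim 19, reduced to depth intervals: here an object's geometry is a list of depth spans, its bounding box spans all of them, and "subdivision" splits a multi-piece object into per-piece objects with tighter recomputed boxes. A production renderer would operate on full 3D meshes and axis-aligned boxes; the scene values are invented for this example.

from dataclasses import dataclass
from typing import List, Tuple

Quad = Tuple[float, float]             # (z_min, z_max) of one piece of UI geometry

@dataclass
class UIObject:
    name: str
    pieces: List[Quad]                 # UI geometry; the bounding box spans all pieces

    def bounds(self) -> Quad:          # "computing a bounding box for each UI object"
        return (min(z0 for z0, _ in self.pieces), max(z1 for _, z1 in self.pieces))

def overlaps(a: Quad, b: Quad) -> bool:
    return a[0] < b[1] and b[0] < a[1]

def sort_and_subdivide(objects: List[UIObject]) -> List[UIObject]:
    """Back-to-front sort by bounding box; while any two boxes overlap, split the
    multi-piece offender into per-piece objects, recompute boxes, and re-sort."""
    changed = True
    while changed:
        changed = False
        objects.sort(key=lambda o: o.bounds()[1], reverse=True)   # farthest first
        for i, a in enumerate(objects):
            for b in objects[i + 1:]:
                if overlaps(a.bounds(), b.bounds()) and len(a.pieces) > 1:
                    objects.remove(a)                              # "subdividing UI geometries"
                    objects.extend(UIObject(f"{a.name}.{k}", [p])
                                   for k, p in enumerate(a.pieces))
                    changed = True
                    break
            if changed:
                break
    return objects

# A tilted glass panel spans two depth ranges; an opaque icon sits between them.
scene = [UIObject("glass_panel", [(0.0, 0.1), (0.3, 0.4)]), UIObject("icon", [(0.25, 0.3)])]
print([(o.name, o.bounds()) for o in sort_and_subdivide(scene)])
# back-to-front draw order: glass_panel.1, icon, glass_panel.0
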
20. The system of any of claims 13-19, wherein the first 3D UI comprises a plurality of UI objects, wherein two or more UI objects among the plurality of UI objects have different z-axes or z-planes, wherein generating and rendering the first 3D UI are based on at least one of orthographic projections or perspective projections of the two or more UI objects about the different z-axes or z-planes.
21. The system of any of claims 13-20, wherein the first 3D UI comprises a second 3D UI that presents a plurality of 3D UI objects within the second 3D UI as an integrated 3D UI environment, wherein the plurality of 3D UI objects are presented as objects consistent with the integrated 3D UI environment.
PCT/US2021/054844 2021-10-13 2021-10-13 3d rendering and animation support for ui controls WO2022056499A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/US2021/054844 WO2022056499A1 (en) 2021-10-13 2021-10-13 3d rendering and animation support for ui controls

Publications (1)

Publication Number Publication Date
WO2022056499A1 true WO2022056499A1 (en) 2022-03-17

Family

ID=80629968

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140201656A1 (en) * 2005-02-12 2014-07-17 Mentor Graphics Corporation User interfaces
US20080122839A1 (en) * 2006-11-28 2008-05-29 Microsoft Corporation Interacting with 2D content on 3D surfaces
US20180349108A1 (en) * 2017-06-05 2018-12-06 Umajin Inc. Application system for generating 3d applications

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116009415A (en) * 2022-11-19 2023-04-25 纬创软件(北京)有限公司 Unity-based 2D model three-dimensional system, method and storage medium
CN116009415B (en) * 2022-11-19 2023-09-22 纬创软件(北京)有限公司 Unity-based 2D model three-dimensional system, method and storage medium
CN115640010A (en) * 2022-12-23 2023-01-24 北京沃德博创信息科技有限公司 List component for unlimited circular scrolling in Flutter

Legal Events

Date Code Title Description
121 EP: The EPO has been informed by WIPO that EP was designated in this application

Ref document number: 21867841

Country of ref document: EP

Kind code of ref document: A1