WO2014081420A1 - Arbitrary dimensional user interfaces - Google Patents


Info

Publication number
WO2014081420A1
WO2014081420A1 (PCT/US2012/066159)
Authority
WO
WIPO (PCT)
Prior art keywords
user interface
data
dimensional
partitions
arbitrary
Prior art date
Application number
PCT/US2012/066159
Other languages
English (en)
Inventor
Frank Edughom Ekpar
Original Assignee
Frank Edughom Ekpar
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Frank Edughom Ekpar filed Critical Frank Edughom Ekpar
Publication of WO2014081420A1


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/451Execution arrangements for user interfaces

Definitions

  • the present invention relates generally to the field of graphical user interfaces.
  • the invention relates to a graphical user interface system permitting the creation of arbitrary dimensional (N-dimensional) graphical user interfaces that can have any shape, that can dynamically be expanded to any size without distortion or loss in quality, and that comprise systems for automatic, semi-automatic or manual processing of the user interface.
  • Contemporary graphical user interfaces are limited in that when they allow arbitrary shapes, they are generally not expandable, and when they are expandable, they generally do not permit the use of arbitrary shapes. Furthermore, these graphical user interfaces are generally limited to flat 2-dimensional or at best simulated 3-dimensional structures.
  • Popular graphical user interface systems from software developers such as Microsoft and Apple, and consequently the computer systems and computer-implemented methods offered by these and other vendors in their products and services, suffer from these limitations.
  • Liu et al. (US 2005/0172239 A1) teach a graphical user interface that could be stretched or resized.
  • the characteristics or nature of designated areas (such as "border" or "resize" regions) could be used to provide hints (such as a change in the shape of the cursor) to the user that resizing or stretching is possible or occurring at a particular position.
  • Liu et al. fail to teach the use of adaptive or selective rendering of the areas of the graphical user interface to achieve the said resizing or stretching. In fact, Liu et al. fail to teach any specific way to achieve the resizing or stretching at all.
  • Liu et al. teach "creating one or more first regions" and “creating one or more second regions" for the user interface.
  • Liu et al. teach the use of at least two regions - at least one first region and at least one second region.
  • While Hamlet et al. teach the use of "optimized vector image data" to enable the display of a graphical user interface in any shape and at any size with minimal or no loss of original image quality, Hamlet et al. fail to provide the option of utilizing the nature (possibly related to the appearance or texture) of the original image. In fact, Hamlet et al. teach away from the use of the appearance of the original image.
  • the adaptive or selective application of appropriate rules to selected regions of the original image based on the nature (possibly related to the texture or appearance of same) can achieve "infinite resolution" and permit the user interface to assume any desired shape and size without easily noticeable distortion or loss of quality.
  • the "optimized vector image data" is generally created separately and could also be stored separately from the graphical user interface to which it is applied and that changes in certain attributes (such as size) of the interface may necessitate re-computation of vector data.
  • the present invention provides the option of directly using the original image and adaptively applying appropriate rules to selected regions of the image with suitable characteristics to facilitate user interfaces that can assume any size and shape.
  • the options provided by the present invention obviate the need to generate, access or otherwise compute or re-compute vector data, leading to savings in resources and allowing for faster and more responsive and richer user interfaces than permitted by the prior art.
  • Guo et al. (US 2006/0104511 A1) teach a user interface in which intended display or presentation characteristics of the user interface could be used to edit the user interface, thus allowing intended display or presentation characteristics to guide or inform design characteristics, the guidance being utilized via the editing of the user interface to better conform with the intended display or presentation characteristics. However, Guo et al. fail to teach the adaptive or selective application of appropriate rules to selected regions of the original image based on the nature (possibly related to the texture or appearance of same) to achieve "infinite resolution" and permit the user interface to assume any desired shape and size without easily noticeable distortion or loss of quality either in the representations presented or displayed or in the original background elements themselves.
  • characteristics of user interface elements such as texture, shape, size, and so on, can be determined or chosen or selected or specified or modified or edited at the time or during the process whereby the user interface elements are designed or created or modified or edited.
  • these characteristics are obviously design characteristics and consequently the term "design characteristics" does not require additional explanation in the specification.
  • Any characteristic of any user interface element such as texture, shape, size, and so on, that can be determined or chosen or selected or specified or modified at the time or during the process whereby the user interface element is designed or created or modified or edited is obviously a design characteristic.
  • characteristics of user interface elements such as texture, shape, size, and so on, can be determined or chosen or selected or specified or modified or edited at the time or during the process whereby the user interface elements are presented or displayed in any suitable representation based on the intent of the user or designer or creator.
  • these characteristics are obviously intended presentation characteristics and consequently the term "intended presentation characteristics" does not require additional explanation in the specification.
  • Any characteristic of any user interface element such as texture, shape, size, and so on, that can be determined or chosen or selected or specified or modified at the time or during the process whereby the user interface element is presented or displayed in any suitable representation based on the intent of the user or designer or creator is obviously an intended presentation characteristic.
  • the set of rules that each partition (or generally user interface element) is individually associated with and that defines attributes and behaviors of the partition can be selected or specified or formulated in a manner that guides or informs design decisions pertaining to the partition (or generally user interface element) at the time or during the process whereby the partition (or generally user interface element) is designed or created or modified or edited.
  • a partition that is intended to be displayed or presented in representations that permit the ability to resize the partition arbitrarily could be designed as simple partitions with a uniform texture for the elements comprising the partition.
  • a partition that is intended to be displayed or presented in representations that do not permit the ability to resize the partition arbitrarily but that limit the rendering or display or presentation to the original size of the partition could be designed as complex partitions with an arbitrarily complex texture for the elements comprising the partition since there is little chance of distortion of the partition as a result of stretching during resizing.
  • the rules can be specified based on the characteristics of the relevant partition or user interface element as currently designed or created.
  • In US 2005/0225572 A1, the present inventor discloses a versatile graphical user interface comprising one or more N-dimensional background elements, each of which is divided into one or more arbitrarily-shaped N-dimensional partitions, wherein each partition may contain one or more user interface elements and is associated with one or more sets of rules that can be based on the nature of the partitions and that define rendering, positioning, element placement and other relevant attributes and behaviors, wherein said rules can be specified in such a way as to enable said N-dimensional background to assume any desired arbitrary shape and to facilitate expansion to any desired arbitrary size without distortion or loss in quality.
  • N can be 1, 2, 3, 4 - for 3 spatial dimensions and 1 temporal dimension for instance, 5, or any number of dimensions.
  • Although the invention disclosed in US 2005/0225572 A1 remedies many of the limitations of the prior art and leads to the creation of much more versatile, more dynamic and richer user interfaces than are possible with the prior art, N, the number of dimensions, is not truly arbitrary but is associated with specific embodiments of the invention.
  • the invention disclosed in US 2005/0225572 A1 enables embodiments where N can be 1, 2, 3 or 4 but does not define embodiments in which N is 5 or higher.
  • the present invention discloses automatic, semi-automatic or manual processing systems for arbitrary dimensional graphical user interfaces based on the principles of the present invention.
  • Graphical user interfaces enabled by the present invention can have an arbitrary number of dimensions and can thus be used to resolve numerous critically important issues in science and engineering, medicine, law enforcement, economics and finance, and many other fields of human endeavor that require the management of arbitrary dimensional data.
  • FIG. 1 illustrates a representative concept for the preferred embodiment of the present invention.
  • FIG. 2 shows how instances of the user interfaces enabled by the present invention could be disposed on one or more surfaces.
  • FIG. 3 demonstrates tri-linear interpolation for a user interface instance bounded by eight neighboring user interface instances in a 3-dimensional spatial configuration.
  • FIG. 4 shows the partitioning of a background image according to the prior art.
  • FIG. 5 depicts a flowchart for the processing of the user interface for a preferred embodiment of the present invention.
  • FIG. 6 illustrates the tier-1 image representation used by the preferred embodiment of the present invention for the management of very large data sets.
  • FIG. 7 shows the partitioning or segmentation of the original image frame to form tier-2 of the image representation used by the preferred embodiment of the present invention for the management of very large data sets.
  • a computer system such as a personal computer system, workstation, server, tablet computer system, handheld or mobile computer system and any other suitable system could be used to embody the present invention.
  • Other suitable devices and systems providing means for or allowing the steps of the present invention to be carried out could be used.
  • user interaction with the user interface could be via a mouse or any other suitable means.
  • Well-known alternative means of interacting with user interfaces include gesture recognition systems, touch-based systems such as touch-screens and associated systems, brain-computer interfaces, speech recognition systems, and so on. These could all be employed in interacting with suitable representations of any aspect of the user interfaces enabled by the principles of the present invention.
  • Data for the interface could be stored or generally managed in computer memory and software running on the computer system could be used to allow editing and presentation of the user interface.
  • Suitably configured network systems such as the Internet could also be used to store or generally manage data associated with the user interfaces.
  • the user interface could be presented or rendered on a computer monitor or screen or any other suitable display or presentation system. Suitable computer network systems could be used to implement and/or present aspects of the user interface.
  • an instance of a graphical user interface is any display or presentation of any representation of any aspect of the user interface. Consequently, an instance of a graphical user interface could include, without limitation, any of the following elements or any combination of the following elements: representations of background elements, representations of partitions extracted from or defined in association with background elements, view or display areas for data managed using the user interface, user interface elements such as background elements, partitions, buttons, scrollbars, sliders, windows comprising any number or combination of user interface elements, and so on.
  • a background element could be a single background image or a collection of images considered as a background image.
  • Elements within the background image could be pixels or groups of pixels or fractions of pixels or groups of fractions of pixels (for sub-pixel precision) in the case of a digital image.
  • Partitions could be any selected, demarcated, defined or labeled aspect or region of a background element.
  • partitions could comprise labeled pixels or labeled groups of pixels contained either in a single image or distributed over a collection of images.
  • Parts of a background image could actually be cut out into a separate image to represent a partition - in which case the partition could be considered a literal partition.
  • partitions could be defined conceptually over any number of background elements including, but not limited to, pixels or groups of pixels or individual images without actually cutting out the elements into separate parts but by simply maintaining the data representing the partitions within the user interface. Such partitions could be considered conceptual partitions.
  • Any representation of any data displayed or rendered or presented using the user interface could be considered a part of the user interface and thus a component of an instance of the user interface.
  • Data could be represented as digital images comprising pixels or groups of pixels.
  • Other suitable representations of data could be utilized as required by any given application of the present invention.
  • Interaction with representations of data could be facilitated via interaction with the user interface as is well-known to one of ordinary skill in the art. Such interaction could involve clicking buttons, manipulating sliders, resizing or manipulating partitions, background elements or any suitable representations thereof.
  • instances of the user interface, indicated generally as UI11, UI12, UI21, UI22, ..., UIij, ..., are disposed on an arbitrarily-shaped and arbitrarily-sized surface.
  • the surface is depicted by the bounding box for the user interface instances in FIG. 1.
  • the individual user interface instances are shown in a manner reminiscent of entries in a 2-dimensional (2D) matrix.
  • the surface represented by the 2D matrix in FIG. 1 can be assigned an arbitrary size and may assume an arbitrary shape and is not limited to a 2-dimensional or 2D plane.
  • Suffixes i and j (as in UIij) refer to row and column numbers within the 2D matrix.
  • Entries in columns refer to specific instances of the user interface. Please note that the underlying user interface elements need not be distinct for each individual instance. For example, one or more instances may share the same user interface elements but may be used to present or display different views or perspectives or representations of data presented or displayed using the user interface. Thus, one or more entries in columns could represent different representations of data while sharing some or all underlying user interface elements.
  • Rows within the 2D matrix represent individual or separate dimensions. Consequently, row 1 could be used to represent a new or additional dimension for the user interface instances contained in the columns within row 1, namely, UI11, UI12, UI13, and so on. Consequently, irrespective of the actual number of dimensions contained within a specific user interface instance such as UI11 within row 1, the inclusion of UI11 as a column (column 1 in this case) within row 1 increases the number of dimensions for the specific user interface instance and its derivative representations by 1.
  • a derivative instance in this context refers to any new instance of the user interface derived from or based on an existing instance.
  • So entries within row 1 could be used to represent or display or present derivatives of UI11 and thus promote UI11 and its derivatives to a 3D user interface in the case where UI11 (and its derivatives) were originally conventional 2D user interface instances.
  • Adding a new dimension to the user interface simply involves adding a new row to the 2D matrix representing the user interface and representing, displaying, presenting or rendering derivative instances within the new row.
  • removing or disabling an unused or unwanted dimension from the user interface involves simply removing or disabling the affected row within the representative 2D matrix and disabling the representation, display, presentation or rendering of instances within the affected row.
  • the 2D matrix representation of the user interface depicted in FIG. 1 and described in the foregoing is sufficient to permit the user interface to manage arbitrary dimensional data without limitation.
  • the user simply adds a new row to the 2D matrix in order to add a new or additional dimension to the user interface.
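The 2D matrix representation described above can be sketched in code. This is a minimal, hypothetical illustration, not from the patent itself: the class name and methods are assumptions, showing only how adding or removing a row of instances adds or removes a dimension.

```python
# Hypothetical sketch: the user interface as a 2D matrix of instances.
# Rows represent dimensions; adding a row adds a dimension, and removing
# a row removes that dimension. All names here are illustrative.

class ArbitraryDimensionalUI:
    def __init__(self):
        self.rows = []  # each row is a list of UI instances (the columns)

    def add_dimension(self, instances):
        """Add a new dimension by appending a new row of instances."""
        self.rows.append(list(instances))

    def remove_dimension(self, row_index):
        """Remove an unused or unwanted dimension by deleting its row."""
        del self.rows[row_index]

    @property
    def dimensions(self):
        return len(self.rows)

ui = ArbitraryDimensionalUI()
ui.add_dimension(["UI11", "UI12"])   # row 1
ui.add_dimension(["UI21", "UI22"])   # row 2: one additional dimension
print(ui.dimensions)  # 2
ui.remove_dimension(1)
print(ui.dimensions)  # 1
```

In this sketch the instances are plain strings; in a real embodiment each entry would be whatever object represents a displayed or rendered instance.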
  • the individual user interface instances are themselves arbitrary dimensional. That is, each instance could be managed using a representative 2D matrix as described for the entire user interface.
  • each instance could be displayed as a separate window within the desktop of a typical window-based operating system such as those available from Microsoft (Microsoft Windows, Windows Mobile, and so on), Apple (Apple OS, iPhone OS, iPad OS, and so on), Google (Android, and so on) or any other window-based operating system.
  • the exact configuration or arrangement of instances within such a window or the exact configuration or arrangement of windows could be determined in accordance with user preferences or other relevant factors such as resource constraints. Suitable systems for managing the representations of the instances could also be adopted. For example, window swapping techniques allowing one or more instances to be viewed at a time or in a specific situation or providing means of selecting which specific instance to view at a specific time or in a specific situation could be adopted in resource-constrained environments. Alternatively, each instance could be displayed on a separate monitor or display device where the monitors or display devices are spread over any chosen geographical territory in any chosen spatial configuration or arrangement.
  • One simple spatial arrangement could involve disposing the monitors or display devices in a manner reminiscent of a 2D matrix to closely match the 2D matrix representation of the user interface.
  • The spatial configuration or arrangement could be 3-dimensional (constraining the instances to the familiar physical 3D spatial environment or configuration), 4-dimensional (for example comprising a spatial 3D configuration and a linear or 1D time dimension, as in time-varying spatial 3D configurations or arrangements) or arbitrary dimensional by adopting the principles of the present invention and possibly utilizing the 2D matrix paradigm introduced by the present invention.
  • the surfaces need not be literal or physical surfaces but could represent computer memory, rendering surfaces - such as device contexts in Microsoft Windows Software Development Kit parlance - or any other suitable representations of surfaces on which instances could be disposed.
  • viewport and window management techniques including, but not limited to, scrolling, panning, and so on, could be employed where appropriate to facilitate the display or presentation of the instances.
  • the conceptualization of the user interface of the present invention as a 2D matrix of arbitrary dimensional instances permitting new or additional dimensions to be added simply by adding new rows to the representative 2D matrix and permitting unused or unwanted dimensions to be removed or disabled simply by removing or disabling rows within the representative 2D matrix makes user interfaces enabled by the present invention amenable to straightforward mathematical characterization and analysis.
  • each instance could be configured to collect and store - or transmit for further analysis - user data associated with any selected aspect of the user experience and this data could be represented and analyzed using the 2D matrix notation - the representation of the user interface itself as a 2D matrix of instances making the collection, analysis and utilization of the data easier or more efficient.
  • Performance metrics analysis could be facilitated by carrying out automatic, semi-automatic or manual collection and analysis of relevant user experience data.
  • Automatic data collection could be implemented by any means or method that allows the tracking of relevant user actions such as mouse clicks, button activation, hits and misses on buttons, sliders or any other tracked user interface elements.
  • Such tracking of user actions could readily be programmed into the user interface by one of ordinary skill in the art using readily available programming languages such as C, C++, JAVA, Python, HTML, HTML5, VRML, and so on, in combination with suitable programming tools such as software development kits for any chosen computer system or application environment.
  • Semi-automatic data collection could involve the use of any of the automatic data collection methods described earlier (or any other suitable automatic data collection method) augmented by manual inspection and/or correction of errors in the automatically collected data.
  • Manual data collection could involve the explicit or implicit (where appropriate) use of user surveys, polls or queries to document or track the user experience.
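The automatic collection of user-experience data described above can be illustrated with a small sketch. This is an assumption-laden example, not the patent's implementation: the tracker class, event shape, and (row, column) keying by matrix position are all hypothetical.

```python
# Illustrative sketch (not from the patent): automatic collection of
# user-experience data by recording tracked actions per instance,
# keyed by the instance's (row, column) position in the 2D matrix.

from collections import defaultdict

class InteractionTracker:
    def __init__(self):
        # (row, column) -> list of (action, target) event records
        self.events = defaultdict(list)

    def record(self, row, col, action, target):
        """Record one tracked user action on a given instance."""
        self.events[(row, col)].append((action, target))

    def hit_count(self, row, col, target):
        """Number of recorded events on a given tracked element."""
        return sum(1 for _, t in self.events[(row, col)] if t == target)

tracker = InteractionTracker()
tracker.record(0, 0, "click", "ok_button")
tracker.record(0, 0, "click", "ok_button")
tracker.record(0, 0, "miss", "slider")
print(tracker.hit_count(0, 0, "ok_button"))  # 2
```

Because the records are keyed by matrix position, the collected data can itself be analyzed with the same 2D matrix notation used for the interface.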
  • Any other aspect of the user interface - such as design or presentation issues - could also be analyzed using matrix notation aided by the conceptualization of the user interface of the present invention as a 2D matrix of arbitrary dimensional instances.
  • FIG. 2 illustrates how instances of the user interfaces enabled by the present invention could be disposed on one or more arbitrarily-sized and arbitrarily-shaped surfaces along the lines of the familiar 3D spatial configuration.
  • In FIG. 2, three separate surfaces are shown as an example. Any number of surfaces could be used in practice. Each surface is arbitrarily-sized and arbitrarily-shaped and, based on the principles of the present invention, is arbitrary dimensional as already explained for the 2D matrix notation. The instances disposed on the surfaces are labeled accordingly.
  • the matrix could be extended to a 3-dimensional or 3D form.
  • techniques for the mathematical characterization and analysis of 3D matrices could be applied to this 3D matrix formulation for the representative matrix.
  • any specific dimension could be extracted from a specific row of the 2D matrix form and interpreted as an additional or third matrix dimension (which must be distinguished from an actual user interface dimension) to form a 3D matrix representation. Furthermore, this process could be repeated to create arbitrary dimensional matrix representations. Conversely, a specific matrix dimension in any matrix formulation with three or higher dimensions could be extracted from the higher dimensional matrix and interpreted as a new or additional row for the 2D matrix form and in so doing reduce the number of dimensions for that specific matrix representation.
  • arbitrary dimensional matrices could be used to represent the user interface in the manner described and these matrices could be transformed from one dimensionality to the other in a process involving dimension expansion or reduction to facilitate mathematical characterization and analysis in any preferred formulation.
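The dimension expansion and reduction described above can be sketched with NumPy. The shapes here are illustrative assumptions: a row of the 2D matrix form is reinterpreted as an extra matrix dimension, and the process is reversible.

```python
# A minimal NumPy sketch of the dimension expansion/reduction described
# above. Shapes are illustrative; the data itself is unchanged by the
# transformations, only the matrix formulation differs.

import numpy as np

# 2D matrix form: 3 rows (dimensions) x 4 columns (instances per row)
m2d = np.arange(12).reshape(3, 4)

# Expansion: reinterpret the columns of each row as a 2x2 grid, giving
# a 3D matrix representation (3 x 2 x 2) of the same data.
m3d = m2d.reshape(3, 2, 2)

# Reduction: flatten the extra matrix dimension back into columns,
# recovering the original 2D form.
recovered = m3d.reshape(3, 4)

print(m3d.shape)                       # (3, 2, 2)
print(np.array_equal(m2d, recovered))  # True
```

This mirrors the point made in the text: the higher dimensional matrix formulations carry no extra information and exist only for convenience of mathematical characterization and analysis.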
  • the 2D matrix form is sufficient for any number of actual dimensions for the user interface.
  • the use of higher dimensional matrices is for convenience in mathematical characterization and analysis only and is not required for adequate representation of the arbitrary dimensional user interfaces of the present invention.
  • each surface could be considered a page in a book representing the user interface.
  • Each individual user interface instance is disposed on or contained in one of the arbitrarily-sized and arbitrarily-shaped pages of the book.
  • the pages of the book could be flipped to reveal additional pages or more generally to navigate the book by moving from one page to the other.
  • This page flipping or page navigation could be implemented via the use of appropriate buttons or any other user interface elements on any of the instances that could be activated to initiate a page flip and reveal additional instances disposed on pages separate from the current page.
  • a user could step through the user interface - gaining access to instances disposed on successive pages - by utilizing page flipping commands or any other suitably configured elements on any of the instances within the current page.
  • Display or presentation of instances could be carried out as previously described for the 2D matrix representation. Appropriate transition effects could be added to enhance user perception of the page flipping process.
  • New instances of the user interface could be synthesized or created by utilizing existing instances of the user interface.
  • Interpolation techniques requiring neighborhood relationships or associations between the existing instance or instances and the new or synthesized instances could be employed.
  • Such interpolation techniques include, but are not limited to, bilinear interpolation (especially for 2D matrix representations), cubic interpolation, spline interpolation, tri-linear interpolation (especially for 3D matrix formulations) and so on.
  • the synthesis of new instances could be viewed as a means of improving the resolution of the user interface by permitting the creation or synthesis or prediction of new instances that may improve the value of the user interface.
  • tri-linear interpolation could be used to synthesize or compute new instances of the user interface by applying the data from neighboring instances.
  • FIG. 3 depicts tri-linear interpolation for a user interface instance bounded by eight neighboring user interface instances in a 3-dimensional spatial configuration and illustrates how tri-linear interpolation could be implemented.
  • Consider a target point T(x, y, z) located at 3D coordinates (x, y, z) from an arbitrarily chosen origin and for which data is not directly available.
  • data for the neighboring points labeled V000, V100, V101, V001, V010, V110, V111 and V011 could be utilized as follows and tri-linear interpolation applied.
  • T(x, y, z) = ((V000*(1-x) + V100*x) * (1-y) + (V010*(1-x) + V110*x) * y) * (1-z) + ((V001*(1-x) + V101*x) * (1-y) + (V011*(1-x) + V111*x) * y) * z
  • L0, L1, L3, L2 and B0, B1 are intersection points between adjacent faces of the cube formed by the neighboring points (namely V000, V100, V101, V001, V010, V110, V111 and V011) for which data is available and the target point, T(x, y, z), for which no data is available.
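The tri-linear interpolation above can be written as a short, self-contained function. The function name and argument order are illustrative; (x, y, z) are the target point's fractional coordinates within the unit cube formed by the eight neighbors.

```python
# A sketch of the tri-linear interpolation described above. The eight
# corner values V000..V111 carry data; (x, y, z) lie in [0, 1] within
# the cube of neighboring instances.

def trilinear(v000, v100, v010, v110, v001, v101, v011, v111, x, y, z):
    # Interpolate along x on four parallel cube edges
    c00 = v000 * (1 - x) + v100 * x
    c10 = v010 * (1 - x) + v110 * x
    c01 = v001 * (1 - x) + v101 * x
    c11 = v011 * (1 - x) + v111 * x
    # Interpolate along y on the two faces
    c0 = c00 * (1 - y) + c10 * y
    c1 = c01 * (1 - y) + c11 * y
    # Interpolate along z between the faces
    return c0 * (1 - z) + c1 * z

# At the cube's center the result is the average of the eight corners.
print(trilinear(0, 8, 0, 8, 0, 8, 0, 8, 0.5, 0.5, 0.5))  # 4.0
```

For a 2D matrix representation the analogous bilinear form drops the z terms; for image data the same function would be applied per color channel (e.g. each of the RGBA values).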
  • data within an instance could be represented as an image with elements comprising pixels or fractions of pixels (for sub-pixel precision) with the typical red, green, blue, alpha (RGBA) quad values for the associated red, green and blue color channels and alpha transparency channel, together with associated 3D coordinates x, y, z, and an optional time component that could be used in the case of video or time-varying image data.
  • In FIG. 4, an illustration of a preferred embodiment of the present invention, the arbitrarily-sized and arbitrarily-shaped background is indicated generally as B.
  • P1 through Pk are arbitrary dimensional partitions and k can be any number.
  • the background and partitions in FIG. 4 are depicted in 2-dimensional or 2D form.
  • the partitions are contiguous. In practice, however, the background and partitions are arbitrary dimensional according to the principles of the present invention and the partitions need not be contiguous.
  • partitions need not be literal - in which case a background comprising a 2-dimensional or 2D image would need to be broken up into a plurality of images to support a plurality of partitions - but could be logical or conceptual only - in which case said background image could remain monolithic and the partitions represented in any suitable data format that corresponds to the conceptual form.
  • Each partition may contain any number of user interface elements.
  • the partition itself is a user interface element.
  • a partition could be deemed to contain no user interface element if the partition does not contain any other additional user interface element apart from the partition itself.
  • each partition is arbitrary dimensional and has an arbitrary shape and an arbitrary size and is associated with a set of rules that define rendering, positioning, element placement and other relevant behaviors and attributes.
  • Generally, attributes and behaviors or characteristics or aspects of the user interface are chosen on the basis of usefulness or relevance in a given embodiment.
  • These rules can be specified in such a way that the arbitrary dimensional background-based graphical user interface can assume any arbitrary desired shape and can be expanded to any arbitrary desired size without distortion or loss in quality.
  • the background comprises a single, arbitrarily-shaped digital image and the user interface built from said background is to be rendered on a computer screen
  • said background can be divided into a number of partitions based on the nature of the background and the rendering of each partition can in turn be carried out on the basis of the nature of the partition.
  • a partition defined on a uniformly textured region of the background can be stretched without noticeable distortion or loss in quality.
  • a partition defined on a non-uniform region of the background may be rendered in its original size and shape to prevent distortion and loss in quality.
  • the entire background can be made to assume an arbitrary shape and an arbitrary size without distortion or loss in quality. Consequently, user interfaces based on the principles of the present invention are more versatile, more dynamic and allow a much richer user experience than is possible with the prior art. Most significantly, user interfaces based on the principles of the present invention are arbitrary dimensional and can be used to manage arbitrary dimensional data without the limitations inherent in the prior art.
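The adaptive rule selection described above can be sketched as follows. This is a hedged illustration, not the patent's method: the use of pixel-value variance as the uniformity measure, the threshold value, and the rule names are all assumptions standing in for whatever texture criterion an embodiment would actually use.

```python
# Hedged sketch of adaptive rule selection: a partition with a
# near-uniform texture may be stretched freely, while a non-uniform
# partition is rendered at its original size to avoid visible
# distortion. Variance threshold and rule names are illustrative.

def variance(values):
    m = sum(values) / len(values)
    return sum((v - m) ** 2 for v in values) / len(values)

def select_rendering_rule(partition_pixels, uniform_threshold=1.0):
    """Return 'stretch' for uniformly textured partitions, otherwise
    'original-size' to prevent distortion or loss in quality."""
    if variance(partition_pixels) <= uniform_threshold:
        return "stretch"
    return "original-size"

print(select_rendering_rule([128, 128, 129, 128]))  # stretch
print(select_rendering_rule([10, 200, 35, 250]))    # original-size
```

By applying such a rule per partition, the background as a whole can be resized while only its uniform regions are actually stretched.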
  • FIG. 5 shows a flowchart for automatic processing of the user interface.
  • the user interface could comprise a background image that is to be partitioned. Partitioning could be based on the nature of the image. For example, the texture of the image could be used.
  • the image could be prepared for processing.
  • preparation could involve storing the image in memory and providing means of accessing the data representing the image. It could also comprise - in the case of a network-based system - the streaming or transmission of the data representing the image for further manipulation. If required, preparation could also involve pre-processing steps such as filtering and de-noising of the image or the application of any combination of any required pre-processing steps as is well-known in the art.
  • an automatic process for the identification of distinct partitions or regions within the image could be carried out.
  • the identification could be based on the texture of the image.
  • Image segmentation techniques, including those popular in the literature, could be used for this purpose.
  • Any other suitable process for automatically identifying partitions could also be used.
  • the process could be completely automated - in which case the results of an automatic image processing step such as image segmentation are used as the basis for partitioning the image.
  • a semi-automatic process could be used - in which case the automatic partition identification process could be augmented (via user or designer inspection) to manually correct any misidentifications or to more closely conform to user taste.
  • the step of identifying partitions in the user interface comprises assigning a label to each element in the image such that elements with the same label share common characteristics.
  • each such element would be a pixel.
  • the texture of the image could be chosen as the characteristic on which the labeling of elements is based. It should be noted that any other suitable characteristic (including, but not limited to, shape) could be chosen as the basis of the partitioning.
  • Typical labels could be SIMPLE (for elements with a simple or uniform texture), COMPLEX (for elements with a relatively more complex texture), HORIZONTAL (for elements with a texture that appears horizontal) and VERTICAL (for elements with a texture that appears vertical). Other suitable labels could be used.
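As a hedged illustration of how such labels might be assigned automatically, the sketch below classifies a 2D block of grayscale pixel values using crude texture measures. The block granularity, the variance threshold and the gradient ratios are assumptions made for illustration, not part of the specification:

```python
def label_block(block):
    """Assign SIMPLE, COMPLEX, HORIZONTAL or VERTICAL to a 2D block of
    grayscale pixel values based on crude texture measures (assumed heuristics)."""
    h, w = len(block), len(block[0])
    pixels = [p for row in block for p in row]
    mean = sum(pixels) / len(pixels)
    variance = sum((p - mean) ** 2 for p in pixels) / len(pixels)

    # Sums of absolute horizontal (dx) and vertical (dy) pixel differences.
    dx = sum(abs(block[y][x + 1] - block[y][x])
             for y in range(h) for x in range(w - 1))
    dy = sum(abs(block[y + 1][x] - block[y][x])
             for y in range(h - 1) for x in range(w))

    if variance < 1.0:
        return "SIMPLE"        # near-uniform texture
    if dy > 2 * dx:
        return "HORIZONTAL"    # variation mostly across rows: horizontal stripes
    if dx > 2 * dy:
        return "VERTICAL"      # variation mostly across columns: vertical stripes
    return "COMPLEX"
```

A uniform block yields SIMPLE, alternating rows yield HORIZONTAL, alternating columns yield VERTICAL, and a checkerboard (no dominant direction) yields COMPLEX.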
  • partitioning could be carried out manually on the basis of a visual inspection of the user interface.
  • Manual identification of partitions could be accomplished on a computer system via suitable instructions (possibly embodied in application software) that permit the designer or user to identify and/or label partitions. This could be accomplished by clicking and dragging a computer mouse over the background image to demarcate or identify and/or label partitions.
  • the labels for example SIMPLE, COMPLEX, HORIZONTAL, VERTICAL, and so on mentioned for automatic and/or semi-automatic partition identification could also be applied to manual partition identification.
  • each partition identified in step 120 could be associated with a set of rules defining the characteristics of the partition. For example, based on the texture of the identified partition, a specific partition could be designated for vertical tiling during rendering or presentation.
  • a partition labeled SIMPLE could be assigned a rendering or presentation rule that effectively causes the partition to be stretched to fit its destination. In the case of a two-dimensional digital image, this could be accomplished via simple two-dimensional (horizontal and vertical) pixel replication as is well known in the field.
  • a partition labeled COMPLEX could be assigned a rendering or presentation rule that effectively causes the partition to be rendered at its actual size - in which case any destination region allocated to the partition could be constrained to the same size and shape as the original partition.
  • a partition labeled HORIZONTAL could be assigned a rendering or presentation rule that effectively causes the partition to be tiled horizontally, while a partition labeled VERTICAL could be assigned a rendering or presentation rule that effectively causes the partition to be tiled vertically during rendering or presentation on a destination surface or device.
  • a mapping or table could be used to associate rules or sets of rules for rendering (or any other chosen characteristic) with a label or sets of labels identifying partitions within the interface.
  • assigned rules can be applied in step 140 to the associated partitions in the storage, presentation and/or more generally further manipulation of the user interface.
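The label-specific rendering rules described above could be sketched as follows. The function names and the nearest-neighbour "stretch" are illustrative assumptions; the specification only requires that SIMPLE stretches, COMPLEX keeps its original size, and HORIZONTAL / VERTICAL tile along their respective axes:

```python
def stretch(block, dst_w, dst_h):
    """Nearest-neighbour pixel replication to the destination size."""
    src_h, src_w = len(block), len(block[0])
    return [[block[y * src_h // dst_h][x * src_w // dst_w]
             for x in range(dst_w)] for y in range(dst_h)]

def tile_horizontal(block, dst_w):
    """Repeat the block left-to-right until the destination width is filled."""
    src_w = len(block[0])
    return [[row[x % src_w] for x in range(dst_w)] for row in block]

def tile_vertical(block, dst_h):
    """Repeat the block top-to-bottom until the destination height is filled."""
    src_h = len(block)
    return [block[y % src_h][:] for y in range(dst_h)]

def render(block, label, dst_w, dst_h):
    """Apply the rendering rule associated with a partition's label."""
    if label == "SIMPLE":
        return stretch(block, dst_w, dst_h)
    if label == "HORIZONTAL":
        return tile_horizontal(block, dst_w)
    if label == "VERTICAL":
        return tile_vertical(block, dst_h)
    return [row[:] for row in block]   # COMPLEX: render at actual size
```

The `render` dispatcher plays the role of the label-to-rule mapping or table described above.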
  • identification of partitions and/or assignment of rules to identified partitions could also be based on a selected user or designer profile.
  • a profile could be built up automatically on the basis of prior user interaction with the user interface, user preferences or other relevant data gathered about the user or designer. For example, a specific user or designer could prefer that SIMPLE partitions be tiled vertically while another could prefer that such partitions be simply stretched via pixel replication or an equivalent process.
  • Via appropriate program code or computer software, the user or designer could be permitted to edit such a profile or build a new profile from scratch.
  • user or designer profiles could be built explicitly on the basis of user or designer input.
  • Another option is to use a semi-automatic approach in which a user or designer profile could first be built automatically (possibly on the basis of the known behavior and/or preferences of a wide spectrum of users or designers or via some other suitable means) and the automatically synthesized profile subjected to optional editing by users or designers.
  • the background, partitions, associated user interface elements, user or designer profiles and any required configuration information - and more generally any aspect of the user interface - could be managed as elements in a universal file format.
  • a universal file format would specify a header identifying the file type and containing information as to the number, types, locations and sizes of the elements it contains.
  • Each element in the file is in turn described by a header specifying the type of the element, its size and any relevant data or attributes and the types, locations and sizes of any additional elements it contains.
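A minimal sketch of such a self-describing container is shown below. The magic number, field widths and type codes are invented for illustration and are not specified by the invention:

```python
import struct

MAGIC = b"ADUI"                      # hypothetical file-type identifier
FILE_HEADER = struct.Struct("<4sI")  # magic, number of top-level elements
ELEM_HEADER = struct.Struct("<III")  # element type, payload size, child count

def write_container(elements):
    """elements: list of (type_code, payload_bytes, child_count) tuples."""
    out = FILE_HEADER.pack(MAGIC, len(elements))
    for type_code, payload, children in elements:
        out += ELEM_HEADER.pack(type_code, len(payload), children) + payload
    return out

def read_container(data):
    """Parse the container back into (type_code, payload, child_count) tuples."""
    magic, count = FILE_HEADER.unpack_from(data, 0)
    assert magic == MAGIC, "not a recognized container"
    offset, elements = FILE_HEADER.size, []
    for _ in range(count):
        type_code, size, children = ELEM_HEADER.unpack_from(data, offset)
        offset += ELEM_HEADER.size
        elements.append((type_code, data[offset:offset + size], children))
        offset += size
    return elements
```

A real implementation would recurse into the child elements announced by each header; this sketch only walks one level.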
  • the data comprised 8-dimensional or 8D representations of miscellaneous colonies of bacteria that were studied under a microscope to monitor their development under a specific set of conditions. Some species in the colonies exhibited luminescence or emitted detectable light when exposed to a specific chemical or reagent.
  • 3-dimensional or 3D video micrographs effectively comprising 4-dimensional or 4D image data were captured under controlled conditions. Each video was of the same duration.
  • the primary data for the example is the 4D image data (referred to as 4D_IMAGE for the purpose of this illustration) that the 3D video micrographs represent.
  • One of the parameters used in the capture of the data was the wavelength of the light used. Three separate wavelengths representing the Red, Blue and Green colors on the color spectrum were used.
  • via the WAVELENGTH dimension, the original population of the colonies at the beginning of each recording could be represented.
  • the sixth dimension for the data was the environment (labeled ENVIRONMENT for the purpose of this illustration) under which the bacteria were monitored. Two separate environments were utilized. In one environment, the bacteria were exposed to a hostile fungus that caused a certain portion of the colony to die off. In the other environment, no hostile fungus was present and conditions were favorable for the development of the colonies.
  • the ENVIRONMENT dimension could be adapted to represent the percentage change in the population at the end of the recording as compared with the original population at the beginning of the recording.
  • the first four dimensions are tied up with the 3D video data - constituting a 4D image data set (referred to as 4D_IMAGE for the purpose of this illustration) with pixel elements.
  • the wavelength of the light at which the 3D video data was captured has been utilized.
  • This dimension is labeled WAVELENGTH for the purpose of this illustration and permits three separate values, namely RED (for the red light wavelength), GREEN (for the green light wavelength) and BLUE (for the blue light wavelength). Instances associated with the WAVELENGTH variable could be configured to display the 3D video data as captured under RED, GREEN or BLUE light in the appropriate columns.
  • the sixth dimension (the second row or row 2 of the 2D face of the 3D representative matrix) is tied up with the environment (labeled ENVIRONMENT for the purpose of this illustration) under which the bacterial colonies develop.
  • FAVORABLE is a value of the ENVIRONMENT parameter indicating a favorable environment marked by the absence of hostile fungi
  • HOSTILE is a value of the ENVIRONMENT parameter indicating a hostile environment marked by the presence of hostile fungi.
  • Instances associated with the ENVIRONMENT parameter could be configured to display only those bacterial populations that remain alive under HOSTILE or FAVORABLE environmental conditions as well as information - possibly in the form of text or textual elements - indicating the percentage of population of remaining bacteria compared with the original population of the colony.
  • the ENVIRONMENT information unique to the sixth dimension could be combined with the associated wavelength information from the fifth dimension.
  • column entries within the second row on any surface on which instances are disposed could display the 3D video in RED, GREEN or BLUE light (depending on the column number - for example RED for column 1, GREEN for column 2 and BLUE for column 3) filtered to display only those bacterial populations that remain alive in the presence of a HOSTILE or FAVORABLE environment depending on the specific value of the ENVIRONMENT parameter specified for the affected instance.
  • Luminescence in the presence of a certain chemical or reagent constitutes the subject of the seventh dimension (the third row or row 3 of the 2D face of the 3D representative matrix) and instances associated with this parameter could be configured to display the 3D video with a filter permitting only the population of luminescent bacterial species to be displayed and possibly with information - in the form of text or textual elements - indicating the level of light emitted from the bacterial population.
  • the PHOTOLUMINESCENCE information unique to the seventh dimension could be combined with the associated wavelength information from the fifth dimension as well as the associated environment information from the sixth dimension.
  • column entries within the third row on any surface on which instances are disposed could display the 3D video in RED, GREEN or BLUE light (depending on the column number - for example RED for column 1, GREEN for column 2 and BLUE for column 3) filtered to display only those bacterial populations that BOTH exhibit luminescence AND remain alive in the presence of a HOSTILE or FAVORABLE environment depending on the specific value of the ENVIRONMENT parameter specified for the affected instance.
  • the day of the week (labeled DAY for the purpose of this illustration) on which data was captured constitutes the subject of the eighth dimension (the third dimension of the 3D representative matrix) and instances associated with this parameter could be disposed on a separate surface for a specific value of the parameter.
  • data was not available for Tuesday, Thursday and Friday, the principles of the present invention permit the synthesis of data or instances for each of these days for which data was not available.
  • data could be synthesized for sub-day points, that is, for any hour or in fact for any time (even down to the millisecond or down to any arbitrary desired time precision, limited only by the precision of the hardware and/or software systems on which the embodiment of the present invention is implemented) whatsoever between the first day - Monday - and the last day - Sunday - for which data was available.
  • the present invention permits seamless navigation of the arbitrary dimensional data (8D data in this specific application) even for points or days for which data was not available or for which data was not collected or captured.
  • the synthesis or creation of instances for Tuesday on the basis of the principles of the present invention will be illustrated shortly.
  • the following table shows the associated column entries or instances for Monday.
  • the following table depicts the associated column entries or instances for Friday.
  • the following table illustrates the associated column entries or instances for Sunday.
  • Each instance could contain user interface elements such as buttons, sliders, text, and so on for data navigation (including 3D video navigation in this example) and for page flipping to enable display of data for other days (and in fact for any arbitrarily selected day, time or point between the first and the last days for which data was collected) apart from the specific day of the week which the instance represents.
  • This feature could be made common to all instances.
  • user interface elements enabling arbitrary selection of data for any desired day of the week for display could be provided in each instance.
  • [Flattened table: three columns display the 3D video in RED, GREEN and BLUE light respectively; each column contains user interface elements for selection of the desired environment variable.]
  • the principles of the present invention permit the creation or synthesis of new instances for days (in the case of this specific MEDICAL / SCIENTIFIC application) or more generally for points or times for which data was not originally available.
  • any desired instance for any desired day or hour or minute or second or any arbitrarily selected time between the first day - Monday - and the last day - Sunday - for which instance data was available could be synthesized or created.
  • this instance could be treated as located at (x, y, z) coordinates (0.5, 0.5, 0.5) by placing the origin of the coordinate system at one of the eight neighboring corner instances of the Monday and Wednesday 3 x 3 representative matrices and assuming a normalized 3D coordinate system with unit length (1.0) as the distance between the two neighboring matrices. An interpolating function T(0.5, 0.5, 0.5) evaluated over the eight neighboring instances could then be interpreted as the new Tuesday instance.
  • any desired new instance could be synthesized, predicted, computed or created using existing neighboring instances.
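One plausible realization of this synthesis, assuming instance data can be treated as equal-length numeric vectors (an assumption made here for illustration), is trilinear interpolation over the eight neighboring corner instances:

```python
def trilinear(corners, x, y, z):
    """Interpolate between eight corner instances.

    corners: dict keyed by (i, j, k) in {0, 1}^3 mapping to equal-length
    numeric vectors; (x, y, z) in [0, 1]^3 is the normalized position of the
    instance to synthesize (e.g. (0.5, 0.5, 0.5) for Tuesday between the
    Monday and Wednesday matrices)."""
    n = len(corners[(0, 0, 0)])
    result = [0.0] * n
    for (i, j, k), vec in corners.items():
        # Standard trilinear weight for this corner.
        w = (x if i else 1 - x) * (y if j else 1 - y) * (z if k else 1 - z)
        for d in range(n):
            result[d] += w * vec[d]
    return result
```

At (0, 0, 0) the result reproduces the origin corner exactly; at (0.5, 0.5, 0.5) each corner contributes with weight 1/8.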
  • One common way to represent data associated with user interface instances is to interpret the data as an image comprising groups of pixels or groups of fractions of pixels for sub-pixel precision. Such pixels could be interpreted as representing suitably formatted colors such as the typical red-green-blue-alpha (RGBA) color quad as well as associated 2D or 3D coordinates. Other interpretations suited to different applications are possible.
  • Efficient data management techniques for such applications, including predictive loading of relevant data based on a dynamic prediction of the user's point of view within the data stream and subsequent presentation on a display window, computer monitor or any other suitable device or system, could be applied to achieve practical implementation and acceptable performance for very large data sets on off-the-shelf personal computer systems.
  • the first level contains a virtual view of an entire image frame as a single continuous set of pixels.
  • FIG. 6 illustrates the first level for a two-dimensional image frame of width p_w and height p_h pixels. Coordinate axes are labeled X-axis and Y-axis in FIG. 6.
  • the region of interest or view window is indicated as V in FIG. 6. Since a single image frame can be very large (typically corresponding to tens of gigabytes or even terabytes or more of physical memory for certain applications), it is generally impractical to attempt to load the entire frame into memory at once. Consequently, the second level comprises a segmentation or partitioning of each image frame into distinct image blocks of a size and color depth that facilitates straightforward manipulation on an average personal computer. This partitioning scheme is shown in FIG. 7, where the image of FIG. 6 is divided into distinct image blocks.
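The block bookkeeping implied by this second tier can be sketched as follows; the fixed square block size is an assumed tuning parameter, not mandated by the specification:

```python
def blocks_for_view(view_x, view_y, view_w, view_h, block_size):
    """Return the set of (block_col, block_row) indices that the view window
    covers, i.e. the image blocks that must be resident in memory."""
    first_col = view_x // block_size
    last_col = (view_x + view_w - 1) // block_size
    first_row = view_y // block_size
    last_row = (view_y + view_h - 1) // block_size
    return {(c, r)
            for r in range(first_row, last_row + 1)
            for c in range(first_col, last_col + 1)}
```

Only the returned blocks need to be loaded (or decompressed) to render the current view; the remainder of the frame can stay on disk or on the server.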
  • the use of a two-tier image representation scheme permits alternate views of the image data that make further manipulation easier.
  • the simplicity of the first level permits the application of a multi-resolution pyramid representation of the image data, such as that described by Peter J. Burt et al. in "The Laplacian Pyramid as a Compact Image Code", IEEE Transactions on Communications, 1983, pp. 532-540, for efficient compression, storage and transmission and optionally for adaptive rendering that maintains a constant frame rate.
  • a thumbnail of the entire image could also be generated at the first level. Such a thumbnail could be used to display a lower resolution version of the view window while waiting for image data to be retrieved and/or decompressed.
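A single level of such a multi-resolution reduction could be sketched as a plain 2x2 box average (Burt's Laplacian pyramid refines this idea with proper low-pass filtering); applying it repeatedly yields progressively smaller thumbnails:

```python
def downsample(image):
    """Halve a 2D grayscale image (even dimensions assumed) by averaging
    each non-overlapping 2x2 block of pixels."""
    h, w = len(image), len(image[0])
    return [[(image[2 * y][2 * x] + image[2 * y][2 * x + 1] +
              image[2 * y + 1][2 * x] + image[2 * y + 1][2 * x + 1]) / 4
             for x in range(w // 2)] for y in range(h // 2)]
```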
  • the dynamic view prediction and on-demand loading algorithms described hereinafter are readily applicable to the second tier's image block representation.
  • a view window V is specified as illustrated in FIG. 6.
  • the view window represents the segment of the current image frame that is indicated by the view parameters.
  • three view parameters such as the pan angle, the tilt angle or azimuth and the zoom or scale factor could be used to control the view.
  • Other relevant view parameters or factors could be considered as appropriate for any given application.
  • User input could be received via the keyboard and/or mouse clicks within the view window.
  • Suitable gesture recognition interfaces or touch-based interfaces or brain-computer interfaces or any other suitable interface could be used to receive input, provide feedback or generally enable user interaction.
  • a head-mounted display and orientation sensing mechanism could also be used.
  • Views could be generated based on view window size and received input.
  • the rate of change of each of the view parameters with respect to time could be computed dynamically. The computed rate of change could then be used to predict the value of the parameter at any desired time in the past or future.
  • the predicted value could be computed using a linear model of the form P = n + (a × K × T), where P is the predicted value of the parameter at time T, n is the current value of the parameter, a is the dynamically computed rate of change of the parameter with respect to time, and K is a scale factor, usually 1.
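Assuming the linear model implied by these parameters, with the rate of change estimated by a finite difference of the last two samples (an assumption; any rate estimator could be substituted), the prediction could be sketched as:

```python
def predict_parameter(current, previous, dt, horizon, scale=1.0):
    """Predict a view parameter (pan, tilt or zoom) `horizon` time units
    ahead (negative values predict into the past) from its last two samples."""
    rate = (current - previous) / dt          # a: dynamically computed rate of change
    return current + rate * scale * horizon   # P = n + a * K * T
```

The predicted pan, tilt and zoom values would then feed the block-selection step to decide which image blocks to prefetch.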
  • the values of the parameters predicted by the foregoing equation could be used to determine which specific image blocks need to be loaded into memory at any given time.
  • a computer software implementation using a background thread dedicated to loading those image blocks that are covered by the current view as well as any additional image blocks that might be needed for rendering the view in the future or past, that is, a number of future or past time steps, could be used.
  • the exact value or duration (or optimum value for applications in which variations in the time step are acceptable) of the time step depends on the requirements of a given application.
  • the image data could be distributed from a server over the Internet or other network or accessed from local storage on a host computer. Any other alternative source and method of distribution could be used where appropriate.
  • any graphical user interface including, but not limited to, contemporary user interfaces such as those available in widely used operating systems such as Microsoft Windows, Apple OS (iPhone OS, iPad OS, and so on), Google Android OS and so on, could readily be converted into a much more versatile, useful, responsive and efficient arbitrary dimensional graphical user interface by applying the principles of the present invention.
  • Any graphical user interface could be converted into an arbitrary dimensional (a-dimensional) user interface according to the principles of the present invention by converting one or more of its instances into arbitrary dimensional instances, as disclosed in the foregoing specification and suggested by equivalents and alternatives thereto, and disposing them on one or more arbitrarily-sized and arbitrarily-shaped surfaces.
  • the foregoing embodiments illustrate how this conversion could be carried out.

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The present invention concerns an arbitrary dimensional graphical user interface permitting the display or presentation of one or more instances of the user interface on one or more surfaces of arbitrary size and shape, each instance comprising one or more arbitrary dimensional background elements, each of which is divided into one or more arbitrary dimensional partitions of arbitrary size and shape, each partition capable of containing one or more user interface elements and associated with one or more sets of rules defining rendering, positioning, element placement and other relevant attributes and behaviors, said rules being specifiable in such a way as to allow said arbitrary dimensional background to assume any desired arbitrary shape and to facilitate expansion to any desired arbitrary size without distortion or loss of quality. In this context, an instance of a graphical user interface corresponds to a display or presentation of any representation of any aspect of the user interface. The present invention further concerns systems for automatic, semi-automatic or manual processing of arbitrary dimensional graphical user interfaces based on the principles of the present invention.
PCT/US2012/066159 2012-11-20 2012-11-20 Interfaces utilisateurs dimensionnelles arbitraires WO2014081420A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201213681424A 2012-11-20 2012-11-20
US13/681,424 2012-11-20

Publications (1)

Publication Number Publication Date
WO2014081420A1 true WO2014081420A1 (fr) 2014-05-30

Family

ID=47594976

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2012/066159 WO2014081420A1 (fr) 2012-11-20 2012-11-20 Interfaces utilisateurs dimensionnelles arbitraires

Country Status (1)

Country Link
WO (1) WO2014081420A1 (fr)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050225572A1 (en) * 2004-04-03 2005-10-13 Ekpar Frank E Versatile user interface
US20100054578A1 (en) * 2008-08-26 2010-03-04 Frank Edughom Ekpar Method and apparatus for interactive visualization and distribution of very large image data sets
US20120042268A1 (en) * 2004-04-03 2012-02-16 Frank Edughom Ekpar Processing user interfaces


Similar Documents

Publication Publication Date Title
US10031928B2 (en) Display, visualization, and management of images based on content analytics
US6990637B2 (en) Graphical user interface for 3-dimensional view of a data collection based on an attribute of the data
JP4341408B2 (ja) 画像表示方法及び装置
US8564623B2 (en) Integrated data visualization for multi-dimensional microscopy
US7719548B2 (en) Viewing digital images using a floating controller
US9436673B2 (en) Automatic application of templates to content
US20090307618A1 (en) Annotate at multiple levels
US20120120086A1 (en) Interactive and Scalable Treemap as a Visualization Service
US20130167079A1 (en) Smart and flexible layout context manager
US20090144653A1 (en) Method and Apparatus for Dynamically Resizing Windows
US20120189221A1 (en) Image File Generation Device, Image Processing Device, Image File Generation Method, And Image Processing Method.
Ward et al. Interaction spaces in data and information visualization.
JP2004213631A (ja) グラフィカルユーザインタフェース装置、ディジタル画像を編成してユーザへ表示する方法、及び該方法をプロセッサに実行させるプログラム
US10606455B2 (en) Method for processing information
US8456471B2 (en) Point-cloud clip filter
US20100042938A1 (en) Interactive Navigation of a Dataflow Process Image
TW200426623A (en) Systems, methods, and computer program products to modify the graphical display of data entities and relational database structures
Perkins et al. Scalable desktop visualisation of very large radio astronomy data cubes
Huang et al. A space-filling multidimensional visualization (SFMDVis) for exploratory data analysis
US7991225B2 (en) Methods and systems for dynamic color equalization
US20120042268A1 (en) Processing user interfaces
Ekpar A novel system for processing user interfaces
WO2014081420A1 (fr) Interfaces utilisateurs dimensionnelles arbitraires
US20220206676A1 (en) Modifying drawing characteristics of digital raster images utilizing stroke properties
Pietriga et al. Exploratory visualization of astronomical data on ultra-high-resolution wall displays

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 12816544

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205N DATED 02.11.2015)

122 Ep: pct application non-entry in european phase

Ref document number: 12816544

Country of ref document: EP

Kind code of ref document: A1