EP3014387A1 - Methods and systems for generating dynamic user interface - Google Patents
Methods and systems for generating dynamic user interface
- Publication number
- EP3014387A1 (application EP14834567.1A)
- Authority
- EP
- European Patent Office
- Prior art keywords
- applications
- application
- output
- hardware
- client
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/90—Details of database functions independent of the retrieved data types
- G06F16/95—Retrieval from the web
- G06F16/951—Indexing; Web crawling techniques
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/90—Details of database functions independent of the retrieved data types
- G06F16/95—Retrieval from the web
- G06F16/958—Organisation or management of web site content, e.g. publishing, maintaining pages or automatic linking
- G06F16/972—Access to data in other repository systems, e.g. legacy data or dynamic Web page generation
Definitions
- the present invention relates to methods and systems for generating user interfaces on electronic devices.
- the present invention relates to methods and systems for generating user interfaces on electronic devices that provide dynamic outputs of items presented on the interfaces to users.
- online search results and mobile app markets typically show static text and/or images for each item presented in the interface, whether the item is a webpage, a software application, an online PC game, a mobile app, etc.
- the text and/or images are static because they are usually predetermined by developers who created the items. This static information offers useful but limited information to users because the static information presented may be stale as the items may have recently changed.
- static text and images provide little information regarding look and feel for each item.
- to gain a sense of an item's look and feel, users would need to download, execute and/or install the item. For example, if the item is a web page, users would need to visit that website; if the item is a mobile app, users would need to download, install and execute the app.
- there is, therefore, a need for an improved user interface that can provide more than just static, pre-determined text and/or images, so that users can receive robust, up-to-date information for the items presented on the interface, gain a good idea of each item's look and feel, and even interact with the items directly on the interface without having to download, install or execute the items on their own devices.
- the present invention provides methods and systems for generating user interfaces on electronic devices that provide dynamic outputs of items presented on the interfaces to users.
- the invention provides a system for generating a dynamic user interface, comprising one or more clients; one or more servers networked with the one or more clients; one or more applications residing in the one or more servers; and one or more robots residing in the one or more servers, wherein the one or more robots are configured to execute the one or more applications and the one or more servers are configured to provide output of the one or more applications to the one or more clients as the one or more applications are being operated by the one or more robots.
- the present invention provides a system for generating a dynamic user interface comprising: one or more clients; one or more servers networked with the one or more clients; one or more applications residing in the one or more servers; and one or more client applications residing in the one or more clients; wherein the one or more client applications are coupled with the one or more applications residing in the server in order to enable user interaction with the one or more applications via the one or more clients.
- the system further comprises a supervisor residing in the one or more servers.
- the output of the system comprises text, snapshots or a series of snapshots, or partial or full time-lapsed visual and/or audio data taken from output of the one or more applications.
- the one or more servers may receive one or more requests from the one or more clients.
- the request may be related to an online search, a request for one or more applications for application rental purposes or a request for one or more applications for application purchase purposes.
- the output of the one or more applications is transmitted to the client from applications relevant to the one or more requests, and the relevance may be determined using in-app data.
- the system may comprise a database for storing in-app data, which includes the output from the one or more applications.
- the source of the output to the one or more clients comprises output stored in the database.
- the system may further comprise a client application residing in the client configured to display output of the one or more applications transmitted from the server via the media output.
- the system further comprises a media output residing in the client.
- the system also comprises a client application residing in the client configured to display output of the one or more applications transmitted from the server via the media output.
- the output of the applications shown by the client application displayed on the media output may be configured to allow user interaction with the one or more applications via the client application.
- the output of the one or more applications shown by the client application displayed on the media output may also be coupled with the one or more corresponding applications, wherein the coupling comprises communication of a coordinate and event tag pair.
- the system may further comprise a means for simulating physical motion required for interacting with the one or more applications, based on user interaction with the output of the one or more applications displayed by the client applications on the media output.
- the client further comprises one or more hardware devices.
- the one or more applications may be coupled with the one or more hardware devices, wherein the coupling comprises communication of hardware values.
- the one or more applications may be configured to receive the hardware values from at least one of a driver on the client corresponding to the one or more hardware devices, a pseudo driver configured to receive the hardware values, a hardware abstraction layer (HAL) and a library coupled with the one or more applications.
- the system may further comprise one or more virtual machines to assist one or more applications that cannot run natively on the one or more servers.
- an instance of the one or more applications is created to facilitate user interaction.
- the instance of the application created for the user interaction may begin at the beginning of the application, begin at the place at which the application was executing when the user interaction with the application was initiated or begin at the place in the application that is most relevant to the user request.
- the invention provides a method for generating a dynamic user interface comprising the steps of: executing one or more applications on one or more servers using one or more robots; and transmitting output of the one or more applications to one or more clients.
- the transmitted output of the one or more applications includes text, snapshots or a series of snapshots, partial or full time-lapsed visual and/or audio output of the one or more applications.
- the method further comprises the step of receiving one or more requests from the one or more clients.
- the one or more requests may comprise one or more online search queries, or one or more requests for one or more applications for application rental purposes or for application purchase purposes.
- the step of transmitting output comprises output of the one or more applications relevant to the one or more requests, wherein relevance of the one or more applications is determined using in-app data.
- the step of transmitting output of the one or more applications may be done live or near live as the output is being generated by the one or more applications.
- the method further comprises the step of storing in-app data, including output from the one or more applications in the one or more databases.
- the step of transmitting output of the one or more applications may be done by transmitting output stored in the one or more databases.
- the method may also comprise the step of supporting the execution of the one or more applications using one or more application servers.
- the method further comprises the step of storing in-app data, which includes data exchanged between the one or more applications and the one or more corresponding application servers, in one or more databases.
- the method further comprises the step of displaying output of the one or more applications on one or more media outputs.
- the method may further comprise the step of displaying output of the one or more applications on one or more media outputs using one or more client applications, according to certain examples of the invention. In other examples, the method further comprises the step of allowing user interaction with the one or more applications via output of the one or more applications displayed by the client applications on the media output, and the step of coupling output of the one or more applications shown by the one or more client applications displayed on the one or more media outputs to the one or more corresponding applications. According to one example, the coupling step comprises communicating one or more coordinate and event tag pairs.
- the method may further comprise the step of simulating physical motion required for interacting with the one or more applications, based on user interaction with output of the one or more applications displayed by the one or more client applications on the one or more media outputs.
- the method comprises the step of creating an instance of the application to enable user interaction with the instance of the application.
- the present invention provides a method for generating a dynamic user interface comprising the steps of: creating and initiating instances of an application on one or more servers; and coupling the instances of applications on one or more servers with one or more clients located remotely with respect to the one or more servers to enable user interaction with the one or more applications using the one or more clients.
- FIG. 1 illustrates an output of an embodiment of the dynamic user interface of the present invention.
- FIG. 2 illustrates an output of a second embodiment of the dynamic user interface of the present invention in which user interaction with the interface is possible.
- FIG. 3 illustrates a preferred embodiment of the dynamic user interface system of the present invention.
- FIG. 4 illustrates a software and related hardware architecture of a preferred embodiment of the system of the dynamic user interface of the present invention.
- FIGs. 5a and 5b illustrate process flows of a preferred embodiment of the method of the dynamic user interface of the present invention.
- FIGs. 5c and 5d illustrate process flows of a second preferred embodiment of the method of the dynamic user interface of the present invention where no robots are required to execute applications.
- FIG. 6 illustrates a preferred embodiment of the output of dynamic user interface showing output of an application related to taking photos using cameras.
- An exemplary dynamic user interface of the present invention preferably comprises a method and system for generating an interface that displays information related to one or more applications, wherein, for an application, the dynamic user interface preferably displays output of the application including text, one or more images, one or more snapshots, part or full time-lapsed visual and/or audio of the applications as the applications are being executed without requiring users to download, install or execute the application.
- FIG. 1 illustrates an output of a preferred embodiment of the dynamic user interface of the present invention where at least a part of the output is preferably streamed from a remote server (e.g., such as a server 10 described and illustrated with reference to FIG. 3) on which the applications are being executed by robots.
- the dynamic user interface of the present invention provides information regarding the applications, including look and feel of the applications.
- the dynamic user interface is configured to allow users to interact with the applications.
- the applications running on remote servers may be coupled with hardware devices (e.g., a sensor or an input device) located within user devices such that hardware values may be passed to the applications.
- the dynamic user interface of the present invention is preferably capable of handling a variety of applications.
- an application comprises a mobile app
- the dynamic user interface of the present invention preferably displays text, one or more snapshots or part or full time-lapsed visual and/or audio output of the mobile app as the app is being executed.
- where the mobile app comprises a mobile game, the dynamic user interface of the present invention preferably displays text, one or more snapshots or part or full time-lapsed visual and/or audio output generated by the application as one or more robots operate the game as if it were being played by a user on a smart device; a user looking at the dynamic user interface can, therefore, be watching the mobile game as it is being played by the robot.
- an application comprises a website with multiple web pages
- the dynamic user interface of the present invention preferably displays text, one or more snapshots or part or full time-lapsed visual and/or audio output of the website as if a user is clicking through various web pages of the website.
- the present invention can be applied to any application whose output comprises text, moving images and/or audio such as videos, animations or multiple static images such as pictures or webpages.
- the robots are not necessary for implementing the dynamic user interface of the present invention since some of the applications may already come with a subprogram or an activity that runs at least a part of the applications to illustrate to users how the applications are executed/operated (i.e., these kinds of applications can generate dynamic outputs automatically after the subprogram/activity is initiated).
- the system of the present invention only needs to initiate/activate this type of application and stream the output to the corresponding part of the dynamic user interface without requiring one or more robots to operate the application.
- the dynamic user interface of the present invention not only provides output of the applications but also allows users to interact with the applications via the dynamic user interface without requiring the users to download, install or execute the applications.
- FIG. 2 illustrates a preferred embodiment of the present invention that is configured for user interaction.
- the interface preferably allows users to input text messages in box 912 and submit them to the application without the need to download the application 900 locally.
- users can control map application 800 via the dynamic user interface of the present invention by clicking home button 918 or map button 920 directly on the interface.
- the dynamic user interface of the present invention not only displays output of applications, but also provides users with the opportunity to interact with the applications via the interface.
- the dynamic user interface is configured to allow users to interact with applications displayed on the interface that require physical motion to control them, such as rotating, tilting or shaking user devices.
- these physical motions can preferably be simulated with user interaction with output of the applications displayed on the interface.
- a user can interact with an application displayed on the dynamic user interface by dragging visual output of the application in one or more directions in order to simulate physical motion.
- the application is a sports car driving game that allows a user to control direction of the car in the game by physically tilting a game device left or right
- dragging the output of the application to the right or left on the dynamic user interface allows the user to simulate this tilting motion.
- a user can interact with the application by dragging the search result left and right in quick succession to simulate a shaking motion.
- the dynamic user interface of the present invention is configured to couple with hardware devices of the device on which the interface of the present invention is displayed and use hardware values generated by the hardware devices as input for the application in question.
- the application in question is preferably capable of receiving actual or simulated coordinates (or geographic location) from the user device on which the interface of the present invention is displayed via a GPS module of the device, or even simulated location information via such means as an IP2Location function or other functions capable of finding geo-locations from IP addresses.
- the application is capable of gathering and receiving changes in orientation of the interface of the present invention, or of the device on which the interface of the present invention resides, and rotating accordingly to help users better control/operate the application.
- the user interface can be configured to focus on a specific application by, for example, highlighting that application's UI or output while concealing those of the other applications shown on the dynamic user interface.
- the dynamic user interface of the present invention is equipped with the ability to communicate intelligently with the device in an attempt to provide users with the same experience as if the search results were installed locally on the device.
- the dynamic user interface of the present invention is particularly useful in the context of providing search results.
- search results provided by popular search engines such as Google and Yahoo are limited to static data and images.
- the dynamic user interface of the present invention is capable of providing text, one or more snapshots or part or full time-lapsed visual and/or audio output of the search results as the search results are being executed by robots.
- because the present invention has access to data generated by the applications while the applications are running (such dynamic data is otherwise termed "in-app data"), the search may be based on the in-app data in addition to the static description of a search result.
- the dynamic user interface of the present invention allows users to interact with application outputs displayed on the search result interface such as illustrated in FIGs. 1 and 2.
- the present invention is thus able to provide more accurate searching over more information, as well as a much better idea of the look and feel of the search results, than conventional search results, without requiring users to download, install or execute the applications.
- the dynamic user interface of the present invention is also potentially valuable for application purchase purposes or application rental purposes.
- the present invention can be applied to mobile app marketplaces, such as Android app stores, where the present invention allows users to experience the applications before purchasing and/or downloading them.
- the present invention can also be useful for businesses that wish to rent out applications rather than sell them; rather than downloading the applications, users can use the rented applications via the dynamic user interface of the present invention.
- the present invention can be used in a plethora of business contexts.
- FIG. 3 depicts a preferred embodiment of a system for generating the dynamic user interface of the present invention.
- the system preferably comprises one or more servers 10, one or more clients 20 and/or one or more application servers 30.
- Each server 10 preferably further comprises a supervisor 100, one or more robots 110, one or more applications 120, one or more processors 130, one or more network interfaces 140, one or more databases 150 and/or one or more virtual machines 160.
- the supervisor 100 preferably comprises a software program that acts as a controller for the present invention.
- Each robot 110 preferably comprises a software program configured to run the one or more applications 120.
- Each application 120 preferably comprises a software application, an online PC game, a mobile app, a web browser, etc.
- the processor 130 is preferably configured to process data for components of the server 10 such as one of the supervisor 100, the robot 110, the applications 120, the network interface 140, a database 150 or one or more virtual machines 160.
- the database 150 preferably stores data when required by the system.
- the system of the present invention preferably further comprises the one or more virtual machines 160, which are capable of assisting the applications 120 to run on the server 10 if any of the applications 120 are unable to run natively on an operating system of the server 10.
- the server 10 preferably comprises multiple virtual machines 160 so that the server 10 is capable of emulating a diversity of operating systems such as Android, Windows, iOS, UNIX, etc.
- the application servers 30 preferably comprise servers that are configured to communicate with and/or support execution of corresponding application 120.
- an application 120 may comprise an online PC game; in this case, corresponding application server 30 preferably comprises a server that hosts the online PC game 120 and performs tasks such as but not limited to receiving data, providing data and/or processing data for the online PC game 120.
- application 120 may comprise a mobile app; in this case, the corresponding application server 30 preferably comprises a server that hosts the mobile app 120 and performs tasks such as but not limited to receiving data, providing data and/or processing data for the corresponding mobile app 120.
- application 120 may comprise a web browser capable of displaying websites; in this case, the corresponding application server 30 preferably comprises a web server that hosts websites and performs tasks such as but not limited to receiving data, providing data and/or processing data for the web browser application 120.
- application 120 may be a stand-alone software application that does not require any application server 30 to operate so that no corresponding application servers 30 are required in the system of this example.
- the one or more clients 20 comprise one or more input modules 210, one or more network interfaces 220, one or more processors 230, one or more media outputs 240, one or more hardware devices 250 and/or one or more client applications 260.
- the one or more input modules 210 are configured to receive inputs such as user requests.
- the input module 210 may comprise an onscreen keyboard, a physical keyboard, a handwriting input module, a voice input such as a microphone or a combination thereof.
- the one or more network interfaces 220 allow communications between the server 10 and the client 20.
- the media output 240 is preferably capable of outputting text, visual and/or audio information such as output received from the server 10 related to the one or more applications 120.
- the media output 240 is also capable of receiving input from a user.
- the media output 240 is capable of detecting the position of a pointing device such as a cursor or, in the case of a touch sensitive screen, the position where a user makes physical contact with the media output 240, such as with the tip of a stylus or a finger.
- the client 20 preferably further comprises one or more hardware devices 250 comprising (but not limited to) a camera, a microphone, a GPS, an accelerometer, a gyroscope, a light sensor, a thermometer, a magnetometer, a barometer, a proximity sensor, a hygrometer, an NFC module, a loudspeaker or an image sensor.
- the hardware devices 250 are preferably configured to sense environmental values such as images, sounds, acceleration, ambient temperature, rate of rotation, ambient light level, geomagnetic field, ambient air pressure, proximity of an object relative to the view screen of the device, the relative ambient humidity, coordinates (GPS/AGPS module), etc.
- the system of the present invention is preferably configured to allow communication between the hardware devices 250 and the applications 120 including these environmental values. A process flow for such communication is explained below in connection with FIG. 5b.
- the client application 260 can preferably comprise or be configured to couple with a software application, such as a web browser or a customized software application (app) including or coupled with the media output 240, capable of displaying output of the dynamic user interface of the present invention.
- FIG. 4 illustrates a preferred software and related hardware architecture of the server 10 and the client 20 in a preferred embodiment of the present invention in which users are able to interact with applications displayed in the dynamic user interface.
- the architecture of the server 10 preferably comprises the virtual machine 160 (if required by the application 120), a kernel 410, a hardware abstraction layer 420, one or more libraries 430, an application framework 440, applications 450 and/or a pseudo driver 460.
- An architecture of the client 20 preferably comprises the hardware devices 250, a device driver 520, a memory 530, a hardware abstraction layer 540, one or more libraries 550, an application framework 560 and/or an application 260.
- the hardware abstraction layers (HAL) 420 and 540 preferably comprise pieces of software that provide access for applications 450 to hardware resources. It should be noted that, although an HAL is a standard part of the Android operating system software architecture, an HAL may not exist in exactly the same form, or exist at all, in other operating systems such as iOS and Windows. Therefore, alternative embodiments of the present invention in which client 20 runs on a non-Android operating system preferably comprise a similar software architecture for handling hardware control and communication, such as the Windows Driver Model. In another preferred embodiment, no HAL is needed.
- the applications 450 preferably comprise the supervisor 100, the robots 110 and/or the applications 120. In one preferred embodiment, the application 260 comprises a web browser.
- the libraries 430 and 550 as well as the application frameworks 440 and 560 preferably provide the software platform on which the applications 450 and 260 may run. As mentioned before, the virtual machine 160 may not be required if the application 120 is able to run natively on the server 10.
- the memory 530 preferably comprises random access memory on which hardware values generated by the hardware devices 250 may be stored.
- the pseudo driver 460 is preferably software that converts hardware values received from client 20 into data that the application 120 or an API of application 120 can understand, and transmits the converted data to the application 120 or the API.
- a pseudo driver can be configured to work with one set of hardware values (e.g., it could be configured to handle only GPS coordinates and pass the values to the API related to locations).
- the pseudo driver 460 can preferably be configured to handle multiple hardware devices 250 for one application 120 (i.e., it can be configured to convert/pass different kinds of hardware values from various hardware devices), thereby facilitating coupling between the application 120 and more than one hardware device. Pseudo drivers are described in further detail below in connection with FIG. 5b.
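- as a concrete (non-patent) illustration of this conversion role, the following is a minimal Java sketch of a pseudo driver for one set of hardware values (GPS coordinates), assuming raw readings arrive from the client as "lat,lon" strings; all class and method names are hypothetical:

```java
// Minimal sketch of a pseudo driver in the spirit of element 460: it accepts
// raw hardware values forwarded from client 20 and converts them into a form
// a server-side location API could consume. All names are hypothetical.
import java.util.function.Consumer;

public class GpsPseudoDriver {
    private final Consumer<double[]> locationApi; // stands in for application 120's location API

    public GpsPseudoDriver(Consumer<double[]> locationApi) {
        this.locationApi = locationApi;
    }

    // Parse and validate a raw reading before handing it to the application.
    public void onHardwareValue(String rawValue) {
        String[] parts = rawValue.split(",");
        if (parts.length != 2) return; // discard malformed readings
        double lat = Double.parseDouble(parts[0].trim());
        double lon = Double.parseDouble(parts[1].trim());
        if (lat < -90 || lat > 90 || lon < -180 || lon > 180) return;
        locationApi.accept(new double[] {lat, lon});
    }

    public static void main(String[] args) {
        GpsPseudoDriver driver = new GpsPseudoDriver(
                c -> System.out.printf("app received lat=%f lon=%f%n", c[0], c[1]));
        driver.onHardwareValue("25.0330, 121.5654");
    }
}
```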
- various components of the system may be combined into fewer servers or even one single computer.
- the server 10 and any of the servers 30 may both reside on one machine.
- various components of the server 10, the client 20 and/or the application server 30 do not need to necessarily reside within one single server or client, but may be located in separate servers and/or clients.
- the database 150 of the server 10 can be located in its own database server that is separate but networked with the server 10.
- the media output 240 of the client 20 may not need to be a built-in screen but may be a separate stand-alone monitor networked to the client 20.
- Figure 5a is a flowchart illustrating a preferred embodiment of the method for generating the dynamic user interface of the present invention.
- the supervisor 100 preferably initiates the one or more robots 110 to run the one or more applications 120.
- Robots 110 are preferably programmed to mimic user behavior to automatically execute the applications.
- robots 110 can be configured to randomly operate applications 120 by randomly probing the UIs of the applications 120. This type of robot 110 is suitable for operating a wide variety of applications 120. Examples of various embodiments of robot(s) 110 are described in U.S. Patent Application No. 13/960,779.
- the robot 110 comprises a software program that uses preprogrammed logic to run the applications 120.
- the robot 110 comprises a software program that uses an OCR logic to control the applications 120.
- the robot 110 comprises a software program that operates the applications 120 according to pre-recorded human manipulation of the applications 120, including using logic learnt from human manipulation of the applications 120.
- user behavior (e.g., a click on the interface that simulates a "touch/tap" event or a drag movement that simulates a "sliding/moving" event) is preferably detected by the client application 260 and/or the one or more hardware devices 250 and transmitted back to the supervisor 100 and/or the robot 110 to be recorded to form a script (or a part of a programming code) that helps robots operate/control the same application later using the script.
- the robots 110 preferably become more "intelligent" by learning to behave more like humans. Accordingly, output of the applications 120 shown on the dynamic user interface will be more meaningful to users as the robots 110 become more "human-like."
- the robot 110 comprises a software program that operates the applications 120 according to a combination of two or more of the four types of logic described.
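- to make the script-forming behavior described above concrete, here is a hypothetical Java sketch of recording detected user events into a replay script; the event format follows the coordinate and event tag convention ("TOUCH[x, y]") used later in this description, and all names are illustrative:

```java
// Hypothetical sketch of recording user events into a replay script for a
// robot 110. Each recorded entry pairs an event tag with screen coordinates.
import java.util.ArrayList;
import java.util.List;

public class RobotScriptRecorder {
    private final List<String> script = new ArrayList<>();

    // Called whenever the client application detects a user event.
    public void record(String eventTag, int x, int y) {
        script.add(String.format("%s[%d, %d]", eventTag, x, y));
    }

    // The finished script is, in effect, a fragment of programming code the
    // robot can replay step by step against the same application later.
    public List<String> finish() {
        return List.copyOf(script);
    }

    public static void main(String[] args) {
        RobotScriptRecorder rec = new RobotScriptRecorder();
        rec.record("TOUCH", 33, 66);
        rec.record("DRAG", 120, 66);
        System.out.println(rec.finish()); // [TOUCH[33, 66], DRAG[120, 66]]
    }
}
```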
- in step 1010, the supervisor 100 or the robot 110 preferably determines whether each of the applications 120 is capable of running natively on the operating system of the server 10 or needs to be executed on the virtual machine 160 in step 1020. In a preferred embodiment, if it is already known that the applications 120 can be run natively on a specific OS other than the original OS run on the server 10, one can skip step 1010 and go directly to step 1020 to allow the applications 120 to be executed on the corresponding OS on the virtual machine 160. In step 1030, if needed, the applications 120 connect to the one or more corresponding application servers 30 in order to run properly in step 1040. In step 1050, the applications 120 output data, including but not limited to text, visual and/or audio output. In step 1060, the supervisor 100 determines whether or not to store the data output from the applications 120 in the database 150. If storage is required, the data output by the applications 120 is preferably stored in the database 150 in step 1070.
- data stored within the database 150 comprises output of the one or more applications 120 as they are being executed by the robot 110.
- the output preferably comprises text, visual and/or audio output.
- data stored within the database 150 comprises data transmitted between the application 120 and its corresponding application server 30.
- data stored within the database 150 comprises both types of data described.
- the supervisor 100 decides to store all output from the applications 120, as well as communication between the applications 120 and the corresponding application servers 30, in their entirety in the database 150. In another preferred embodiment of the present invention, the supervisor 100 may decide to store only partial data. This may preferably be done for reasons such as conserving server 10 resources, including processing power and/or storage space. For example, if one of the applications 120 in question comprises a full length movie, rather than storing the entire movie, the system of the present invention stores only a series of snapshots of the movie, a short snippet of the movie, a series of short snippets of the movie or a full length version of the movie at a lower resolution.
- steps 1000 to 1070 are repeated continuously. In another preferred embodiment, steps 1000 to 1070 are repeated only periodically or on an as-needed basis. For example, the present invention would run only if there is a user, if a user requests information that is not available in the database 150 or if there are adequate system resources, etc.
- a user of the system of the present invention can generate a request via input module 210.
- the input module 210 comprises a physical keyboard
- requests may be generated by typing certain instruction(s)/keyword(s).
- the input module comprises a voice input device such as a microphone
- requests may be generated after receiving an audio form of the instruction(s)/keyword(s) and recognizing the audio form to generate the request.
- the client application 260 transmits the request to the supervisor 100 in step 2010 via network interfaces 220 and 140.
- the supervisor 100 receives that request in step 2020.
- the supervisor 100 identifies the applications 120 that are relevant to the request.
- data stored within the database 150 may be used by the supervisor 100 to determine relevance of one of the applications 120 to a particular request.
- data stored within the database 150 may comprise in-app data which preferably comprises text, visual and/or audio output of the one of the applications 120 as the one of the applications 120 is running as well as data communicated between the one of the applications 120 and its corresponding application server 30. Determination of relevance may be performed using a variety of algorithms, the simplest of which comprises matching words of the request to the underlying search data.
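- as an illustrative sketch of that simplest word-matching approach (the class name and scoring rule are assumptions, not the patented algorithm): score an application by counting how many words of the request occur in its stored in-app data, then rank applications by score:

```java
// Minimal sketch of word-matching relevance: count how many words of a
// request appear in the in-app data stored for an application.
import java.util.Arrays;
import java.util.Locale;
import java.util.Set;
import java.util.stream.Collectors;

public class RelevanceScorer {
    public static long score(String request, String inAppData) {
        Set<String> dataWords = Arrays
                .stream(inAppData.toLowerCase(Locale.ROOT).split("\\W+"))
                .collect(Collectors.toSet());
        return Arrays.stream(request.toLowerCase(Locale.ROOT).split("\\W+"))
                .filter(dataWords::contains)
                .count();
    }

    public static void main(String[] args) {
        System.out.println(score("restaurant X location",
                "menu and location information for restaurant X")); // prints 3
    }
}
```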
- in step 2040, the supervisor 100 decides to transmit output of the relevant applications 120.
- the supervisor 100 transmits output from the one of the applications 120 in its entirety. In another preferred embodiment of the present invention, the supervisor 100 is configured to transmit only partial output. For example, in a preferred embodiment, the supervisor 100 is capable of limiting transmissions to the client 20 in order to conserve system resources such as processing power and/or storage space. Specifically, if the one of the applications 120 in question comprises a full length movie, rather than transmitting the whole movie, the supervisor 100 preferably limits transmission of output to the client application 260 to only a series of snapshots of the movie, a short snippet of the movie or a series of short snippets of the movie.
- upon receiving output of the applications 120 relevant to the request in question in step 2050, the client application 260 displays the output on the media output 240 in step 2060.
- the media output 240 is configured to display one or more outputs from the one or more relevant applications 120.
- the output preferably comprises text, audio, one or more snapshots and/or part or full time-lapsed visual and/or audio output of applications 120 as the applications 120 are being executed, in real or near real time.
- the data streamed from server 10 is not live or near live but, rather, sourced from data stored previously in database 150.
- FIG. 5b illustrates a preferred method of the present invention where a user is able to interact with one or more applications 120 from the dynamic user interface.
- the supervisor 100 preferably couples to client application output displayed on the media output 240 and/or the input module 210 so that the supervisor 100 is able to detect if a user initiates interaction with a particular one of the applications 120.
- This preferably comprises mapping output of the client application 260 displayed on the media output 240 using coordinates.
- Cartesian coordinates [33, 88] preferably indicate a location 33 pixels across and 88 pixels down from the top-left pixel of the window displaying an activity of the application 120.
- output of the client application 260 displayed on the media output 240 may be coupled to the supervisor 100.
- coordinate systems other than Cartesian coordinate systems may be used as required.
- coupling output of the client application 260 displayed on the media output 240 preferably further comprises use of one or more event tags to indicate an event associated with the coordinates.
- "TOUCH[33, 66]" represents a user clicking on or touching the screen location addressed [33, 66] in pixels on the media output 240.
- the coordinates and/or event tags may require proper translation or conversion for the supervisor 100, for such reasons as the supervisor 100 being configured for a different resolution than that of the media output 240, or only part of the media output 240 being used for displaying the dynamic user interface.
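- purely for illustration, a small Java sketch of such a translation: a coordinate and event tag pair such as "TOUCH[33, 66]" is parsed and its coordinates linearly rescaled between the resolution of the media output 240 and the resolution the supervisor 100 is configured for (the regex and the linear scaling rule are assumptions):

```java
// Illustrative translation of a coordinate/event tag pair between the
// client's display resolution and the supervisor's configured resolution.
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class EventTranslator {
    private static final Pattern EVENT = Pattern.compile("(\\w+)\\[(\\d+),\\s*(\\d+)\\]");

    public static String translate(String event,
                                   int clientW, int clientH,
                                   int supervisorW, int supervisorH) {
        Matcher m = EVENT.matcher(event);
        if (!m.matches()) {
            throw new IllegalArgumentException("unrecognized event: " + event);
        }
        // Linear rescaling of pixel coordinates between the two resolutions.
        int x = Integer.parseInt(m.group(2)) * supervisorW / clientW;
        int y = Integer.parseInt(m.group(3)) * supervisorH / clientH;
        return String.format("%s[%d, %d]", m.group(1), x, y);
    }

    public static void main(String[] args) {
        // A touch at [33, 66] on a 360x640 client maps to [99, 198] at 1080x1920.
        System.out.println(translate("TOUCH[33, 66]", 360, 640, 1080, 1920));
    }
}
```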
- the supervisor 100 is preferably able to detect when user action indicates that the user wishes to initiate interaction with a particular one of the applications 120 displayed on the dynamic user interface of the present invention.
- A user preferably initiates interaction with the applications 120 in step 3010, which can be done in a variety of ways.
- the media output 240 comprises a touch sensitive screen
- a user preferably makes physical contact with output of a particular one of the applications 120 displayed on media output 240, such as with the tip of a finger.
- a user can interact with the one of the applications 120 with an onscreen cursor by aiming and clicking a mouse on the output of the particular one of the applications 120 displayed on the media output 240.
- a set of coordinates and touch event tags are sent to the supervisor 100 to indicate that a user wishes to interact with the one of the applications 120 corresponding to location of the coordinates.
- the input module comprises a keyboard
- users can simply hit specific keys, such as the number 9, in order to initiate interaction with the one of the applications 120 corresponding to number 9.
- supervisor 100 creates a new instance of application 120 specifically to interact solely with that user. Since this new instance of the application is preferably manually operated by the user, there is no need to connect it to a robot 110. If there is more than one user wanting to interact with one application 120, multiple instances of that application 120 can be created on the server 10 such that each user can interact with the instance of the application 120 independently. It should be noted that, if application 120 requires a virtual machine 160 to run, in one preferred embodiment, the new instance of application 120 can be created within the same virtual machine 160 as other instances of application 120. In another preferred embodiment, the new instance of application 120 can be created running within its own new virtual machine 160 that is not shared with any other instances of application 120.
- the new instance of the application 120 is preferably initialized from the beginning as if the user just started the application. For example, if the application in question is a game, the new instance of the application 120 can start running at the very beginning of the game so that the user can experience the game from the beginning. In another example, if the application 120 is a mobile app that offers the users the ability to listen to streamed audio, the new instance of mobile app can start running at the first default page where users can browse through different categories of music.
- the new instance of the application 120 can run from exactly or approximately where the user requested interaction with the application 120.
- the application 120 in question is a mobile app that displays an animation or video
- the new instance of mobile app can start running at exactly or approximately where the animation played to when user requested interaction with that application 120.
- for example, where the request concerns a new car, the new instance of application 120 can bring the user directly to the particular page of the multi-page mobile app that shows the location information of a reseller of the new car.
- the new instance of the application 120 can start running at the part of the application 120 that is most relevant to the associated request.
- application 120 in question is a multi-webpage website for restaurant X and the request is regarding location of restaurant X
- the robot can bring the new instance of the webpage directly to the particular webpage of the multi-webpage website that shows the location information of restaurant X.
- application 120 in question is a multiscreen mobile app for restaurant X and the request is regarding location of restaurant X
- the robot can bring the new instance of the app to the particular screen of the multi-screen mobile app that shows the location information of restaurant X.
- client 20 is preferably coupled with application 120 running on server 10. This preferably comprises coupling media output 240 as well as hardware devices 250 to application 120.
- in step 3030, as with coupling to supervisor 100, coupling output of client application 260 displayed on media output 240 to application 120 preferably comprises mapping using coordinates and/or event tags.
- the coordinates may require proper translation or conversion for application 120, for reasons such as application 120 being configured for a different resolution than that of media output 240 and/or output of application 120 occupying only a portion of the output of client application 260 on media output 240.
- the dynamic user interface of the present invention can be configured to allow users to interact with an application that requires physical motion to control the application.
- a user can preferably interact with or operate and control an application displayed on the dynamic user interface by dragging visual output of application 120 in one or more directions in order to simulate physical motion.
- the dragging motion preferably causes a series of "DRAG[X, Y]" coordinate and event tag pairs to be generated, where changes in the X and Y values can be interpreted by the corresponding application 120 as a particular direction.
- client application 260 may convert the series of "DRAG[X, Y]" coordinate and event tag pairs to data that application 120 can interpret as a direction, for properly interacting with application 120.
- the application is a sports car driving game that allows a user to control direction of the car in the game by physically tilting a game device left or right
- dragging the output of the application to the right or left on the dynamic user interface allows the user to simulate this tilting motion in application 120.
- the game allows a user to utilize a shaking motion to interact with the game
- a user can interact with the search result by dragging it left and right in quick succession to simulate a shaking motion for interacting with application 120.
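- a hedged sketch of this motion simulation: a recent series of DRAG[X, Y] pairs is reduced to either a net horizontal displacement (interpreted as tilt) or a count of direction reversals (interpreted as shake); the thresholds are invented for illustration:

```java
// Sketch of reducing a series of DRAG[X, Y] pairs to a simulated motion that
// application 120 could interpret. Thresholds are illustrative assumptions.
import java.util.List;

public class MotionSimulator {
    public static String interpret(List<int[]> dragXY) {
        if (dragXY.size() < 2) return "NONE";
        int reversals = 0;
        int lastSign = 0;
        for (int i = 1; i < dragXY.size(); i++) {
            int step = dragXY.get(i)[0] - dragXY.get(i - 1)[0];
            int sign = Integer.signum(step);
            if (sign != 0 && lastSign != 0 && sign != lastSign) reversals++;
            if (sign != 0) lastSign = sign;
        }
        if (reversals >= 3) return "SHAKE"; // quick left/right alternation
        int netDx = dragXY.get(dragXY.size() - 1)[0] - dragXY.get(0)[0];
        if (netDx > 20) return "TILT_RIGHT";
        if (netDx < -20) return "TILT_LEFT";
        return "NONE";
    }

    public static void main(String[] args) {
        System.out.println(interpret(List.of(
                new int[] {10, 50}, new int[] {40, 50}, new int[] {80, 50}))); // TILT_RIGHT
    }
}
```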
- hardware devices 250 of client 20 can also be coupled with application 120. Coupling hardware devices 250 to application 120 preferably comprises step 3040, where client 20 obtains the hardware settings required by application 120.
- A hardware device setting is preferably a set of values used to configure the hardware devices 250 required by application 120.
- the hardware setting can preferably take the form of eight digits (each having a value of "0" or "1") to represent application 120's hardware value requirements.
- a hardware setting of [0, 1, 1, 0, 0, 0, 0, 0] may be used to indicate that the 2nd and 3rd driver/hardware devices are required and should be redirected from client 20 to application 120.
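- a minimal sketch of decoding such an eight-digit hardware setting; the device ordering is an illustrative assumption, since this description does not specify one:

```java
// Decode an eight-digit hardware setting: a 1 in position i means the i-th
// driver/hardware device should be initiated and redirected from client 20
// to application 120. The device ordering below is assumed for illustration.
public class HardwareSetting {
    private static final String[] DEVICES = {
        "camera", "gps", "accelerometer", "gyroscope",
        "microphone", "magnetometer", "barometer", "light sensor"
    };

    public static void apply(int[] setting) {
        for (int i = 0; i < setting.length && i < DEVICES.length; i++) {
            if (setting[i] == 1) {
                System.out.println("redirect " + DEVICES[i] + " to application 120");
            }
        }
    }

    public static void main(String[] args) {
        // [0, 1, 1, 0, 0, 0, 0, 0]: the 2nd and 3rd devices are required.
        apply(new int[] {0, 1, 1, 0, 0, 0, 0, 0});
    }
}
```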
- the hardware setting of application 120 can be obtained by analyzing application 120.
- for an Android app, the AndroidManifest.xml file indicates how many activities, intent filters, etc. are needed for executing the app and therefore also provides the hardware requirements of the app.
- each app executed on the virtual machine can have at least one hardware setting.
- upon receiving the hardware settings, supervisor 100 preferably initiates and couples the required hardware devices 250 to application 120 in step 3050. This step preferably involves use of pseudo driver 460 and driver 520.
- application 120 can alternatively be coupled with a plurality of drivers regardless of application 120's hardware setting, with the hardware values transmitted to the server-side environment from the driver selected by the hardware setting.
- the driver(s) do not need to run all the time. They can be configured to initiate after receiving the hardware setting, and can be stopped when they are no longer needed by application 120, or client 20 can turn them off because the user switches to another application 120.
- memory 530 receives one or more hardware values from the driver 520 in step 3060.
- Client 20 then transmits the hardware values to application 120 by way of HAL 420 of server 10 in step 3070.
- Pseudo driver 460 receives the hardware values, converts the hardware values into a format appropriate for application 120 and transmits the converted hardware values to application 120 for processing.
- Hardware values can be transmitted to HAL 420 and passed to application 120 directly.
- the navigation app is actually running without directly coupling to a real GPS/AGPS module.
- the GPS signal generated on client 20 cannot be transmitted to the navigation app 120 because the dynamic user interface itself is an application and may not be configured to receive hardware values (e.g., for an Android app, the programmer is required to write a line in the program code for loading a class called "GpsSatellite" in the android.location package to get the coordinates before packing the application package file (e.g., an .apk file in Android)).
- the dynamic user interface may by default load every class for servicing.
- the dynamic user interface can dynamically configure itself to load specific kinds of classes or to couple with particular hardware after receiving the relevant hardware values (i.e., a program implemented in the dynamic user interface loads the corresponding classes in response to the hardware values).
- drivers corresponding to hardware devices 250 can be coupled with application 120 continuously so that corresponding hardware value(s)/hardware-generated file(s) can be sent to application 120 whenever required via step 3050.
- hardware values can be buffered in memory 530 when there is no network service and only transferred to application 120 when the network service resumes.
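- a small sketch of that buffering behavior, with the forwarding step left as a hypothetical placeholder:

```java
// Buffer hardware values while there is no network service and flush them
// toward application 120 when service resumes.
import java.util.ArrayDeque;
import java.util.Deque;

public class HardwareValueBuffer {
    private final Deque<String> pending = new ArrayDeque<>();
    private boolean networkUp = false;

    public synchronized void submit(String value) {
        pending.addLast(value);
        if (networkUp) flush();
    }

    public synchronized void setNetworkUp(boolean up) {
        networkUp = up;
        if (up) flush();
    }

    private void flush() {
        while (!pending.isEmpty()) {
            // hypothetical transport toward HAL 420 / pseudo driver 460 on server 10
            System.out.println("sent: " + pending.removeFirst());
        }
    }

    public static void main(String[] args) {
        HardwareValueBuffer buf = new HardwareValueBuffer();
        buf.submit("25.0330,121.5654"); // buffered while offline
        buf.setNetworkUp(true);          // flushed when service resumes
    }
}
```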
- hardware values comprise images, sounds, acceleration, ambient temperature, rate of rotation, ambient light level, geomagnetic field, ambient air pressure, proximity of an object relative to the view screen of the device, relative ambient humidity, coordinates (GPS/AGPS module), etc., from hardware such as cameras, microphones, accelerometers, thermometers, gyroscopes, magnetometers, barometers, proximity sensors, hygrometers, etc. of client 20.
- the supervisor 100 performs many tasks. It should be noted that the supervisor 100 can be structured as one single software module, or the supervisor 100 can comprise several modules divided up by the various functions the supervisor 100 performs. For example, there can be a module of the supervisor 100 specifically for transmitting output of the applications 120, a different module to control database storage, and yet another module for creating instances of applications, etc. In addition, in alternative embodiments where certain tasks performed by supervisor 100 are not required, those modules of supervisor 100 are not included in the method and system of the present invention. For example, if outputs of the application 120 are always transmitted to client 20 in their entirety, then there is no need for a module of the supervisor 100 for deciding what to send to client 20.
- in the extreme, if most tasks to be performed by supervisor 100 are not required, the remaining functions of the supervisor 100 may be folded into other components of the method and system of the present invention, such as robot 110. For example, if the request from a particular user always refers to one and only one application, the output is always the entirety of the application and there is no need for user interaction, then the control and decision-making functions of supervisor 100 are not needed. Instead, robot 110 can handle receipt of the request in step 2020 and transmission of the requested application 120 in steps 2030 and 2040.
- FIGs. 4, 5b and 6 may be used to describe a preferred embodiment where a user uses a camera as a hardware device 250 to take a photo for application 120.
- the dashed lines represent function calls or instructions (calling/instructing the hardware or the corresponding API(s)); the solid lines represent actual data transfer, e.g., the picture or any other kind of hardware values.
- supervisor 100 preferably couples to output of client application 260 on media output 240 and listens for user action.
- a user preferably selects application 120 displayed on the dynamic user interface of the present invention which sends coordinates and user events to supervisor 100.
- upon receiving the coordinates and touch event, supervisor 100 initiates application 120.
- supervisor 100 creates a new instance of application 120 specifically for the user.
- supervisor 100 couples output of client application 260 displayed on media output 240 to application 120 using coordinates and event tag.
- supervisor 100 receives the hardware settings of application 120, in which one of the hardware devices 250 required would be a camera.
- supervisor 100 couples the camera's driver to application 120.
- in step 3060, hardware 250 is triggered. This may involve a user hitting a button corresponding to a camera in the visual image of the application displayed on the dynamic user interface, such as button 610 of FIG. 6. After receiving the touch event, supervisor 100 recognizes the touch event and applies it to application 120. After any proper conversion for resolution differences, application 120 recognizes that the user wishes to take a picture, since the touch event coordinates indicate that the touch event occurred within the camera trigger button. In an alternative embodiment of the present invention, application 120 may send a set of coordinates to configure an area 620 of media output 240 that corresponds with a button for triggering the camera, as shown in FIG. 6.
- the step of transmitting a set of coordinates back to application 120 is not required because client application 260 may be configured to recognize the location where the user touched the screen to generate the initial touch event.
- a view corresponding to a button can be generated locally by the dynamic output interface, and thus its resolution can be fixed locally.
- the rest of the screen of the dynamic user interface is for displaying output from application 120, and its resolution can be configured to adjust to the bandwidth of the network (e.g., becoming 1080P when the bandwidth is high and 360P when the bandwidth is low).
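- a tiny sketch of such a bandwidth-to-resolution rule; the intermediate tier and the threshold values are assumptions beyond the 1080P/360P endpoints named above:

```java
// Pick a streaming resolution for the application-output region of the
// dynamic user interface based on measured bandwidth. Thresholds assumed.
public class StreamResolution {
    public static String pick(double bandwidthMbps) {
        if (bandwidthMbps >= 8.0) return "1080p";
        if (bandwidthMbps >= 3.0) return "720p";
        return "360p";
    }

    public static void main(String[] args) {
        System.out.println(pick(12.0)); // 1080p on a fast link
        System.out.println(pick(1.5));  // 360p on a slow link
    }
}
```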
- hardware value is sent to memory 530 in step 3070.
- Hardware value preferably comprises one or more images taken.
- this data is then sent to application 120 via HALs 420 and 540.
- in step 4000, a user enters a request via input module 210.
- Client 20 then transmits the request to server 10 in step 4010.
- supervisor 100 determines the application 120 relevant to the request in step 4030.
- supervisor 100 then creates a new instance of the application 120. In creating this new instance of the application 120, supervisor 100 determines whether the application 120 requires a virtual machine as well as an application server 30 to run properly in steps 4050-4080. Once the new instance of the application 120 has been created, the application 120 initiates and begins to generate output in step 4090. It should be emphasized that the output of the application 120 at this point is related to normal initiation of the application 120, such as showing the starting screen of the application 120, and is not caused by execution of robots 110.
- supervisor 100 transmits the output to client 20 in step 4100 which client 20 receives in step 4110.
- client 20 displays the output of the application 120 transmitted from server 10 via client application 260 displayed on media output 240.
- the supervisor couples output of client application 260 displayed on media output 240 to application 120 using XY coordinates and/or event tags to allow interaction between the user and the application 120 via output of the application 120 shown by client application 260 displayed on media output 240.
- client 20 obtains hardware settings from application 120.
- supervisor 100 couples the required hardware device 250 to the application 120 using driver 520, HAL 540, pseudo driver 460 and HAL 420. Once the application 120 and hardware devices 250 are coupled, it is then possible to trigger hardware device 250 in step 4160.
- the hardware device 250 generates hardware values which can be passed back to the application 120 via pseudo driver 460, which converts the hardware values to a form that can be processed by the application 120. With the application 120 fully coupled to device 20, the user can now use the application 120 via client 20 as if the application 120 were running on client 20, without actually having to download, install and/or execute the application 120 on device 20.
- the specification may have presented the method and/or process of the present invention as a particular sequence of steps. However, to the extent that the method or process does not rely on the particular order of steps set forth herein, the method or process should not be limited to the particular sequence of steps described. As one of ordinary skill in the art would appreciate, other sequences of steps may be possible. Therefore, the particular order of the steps set forth in the specification should not be construed as limitations on the claims. In addition, the claims directed to the method and/or process of the present invention should not be limited to the performance of their steps in the order written, and one skilled in the art can readily appreciate that the sequences may be varied and still remain within the spirit and scope of the present invention.
Landscapes
- Engineering & Computer Science (AREA)
- Databases & Information Systems (AREA)
- Theoretical Computer Science (AREA)
- Data Mining & Analysis (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- User Interface Of Digital Computer (AREA)
- Information Transfer Between Computers (AREA)
Applications Claiming Priority (5)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201361862967P | 2013-08-07 | 2013-08-07 | |
US201461922860P | 2014-01-01 | 2014-01-01 | |
US201461951548P | 2014-03-12 | 2014-03-12 | |
US201461971029P | 2014-03-27 | 2014-03-27 | |
PCT/US2014/050248 WO2015021341A1 (en) | 2013-08-07 | 2014-08-07 | Methods and systems for generating dynamic user interface |
Publications (2)
Publication Number | Publication Date |
---|---|
EP3014387A1 true EP3014387A1 (en) | 2016-05-04 |
EP3014387A4 EP3014387A4 (en) | 2017-01-04 |
Family
ID=52461949
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP14834567.1A Withdrawn EP3014387A4 (en) | 2013-08-07 | 2014-08-07 | Methods and systems for generating dynamic user interface |
Country Status (4)
Country | Link |
---|---|
EP (1) | EP3014387A4 (en) |
JP (1) | JP6145577B2 (en) |
CN (1) | CN106062663A (en) |
WO (1) | WO2015021341A1 (en) |
Family Cites Families (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5910903A (en) * | 1997-07-31 | 1999-06-08 | Prc Inc. | Method and apparatus for verifying, analyzing and optimizing a distributed simulation |
US9003461B2 (en) * | 2002-12-10 | 2015-04-07 | Ol2, Inc. | Streaming interactive video integrated with recorded video segments |
JP2006079292A (en) * | 2004-09-08 | 2006-03-23 | Aruze Corp | Content trial system |
US8100766B2 (en) * | 2007-03-23 | 2012-01-24 | Electrified Games, Inc. | Method and system for personalized digital game creation |
KR20100028974A (en) * | 2008-09-05 | 2010-03-15 | 엔에이치엔(주) | Method and system for managing cooperative online game |
US8866701B2 (en) * | 2011-03-03 | 2014-10-21 | Citrix Systems, Inc. | Transparent user interface integration between local and remote computing environments |
JP2012236327A (en) * | 2011-05-11 | 2012-12-06 | Canon Inc | Printing apparatus, method of controlling the same, and program |
US9672355B2 (en) * | 2011-09-16 | 2017-06-06 | Veracode, Inc. | Automated behavioral and static analysis using an instrumented sandbox and machine learning classification for mobile security |
EP2611207A1 (en) * | 2011-12-29 | 2013-07-03 | Gface GmbH | Cloud-rendered high-quality advertisement frame |
-
2014
- 2014-08-07 EP EP14834567.1A patent/EP3014387A4/en not_active Withdrawn
- 2014-08-07 CN CN201480040011.0A patent/CN106062663A/en active Pending
- 2014-08-07 WO PCT/US2014/050248 patent/WO2015021341A1/en active Application Filing
- 2014-08-07 JP JP2016527158A patent/JP6145577B2/en not_active Expired - Fee Related
Also Published As
Publication number | Publication date |
---|---|
JP6145577B2 (en) | 2017-06-14 |
EP3014387A4 (en) | 2017-01-04 |
CN106062663A (en) | 2016-10-26 |
WO2015021341A1 (en) | 2015-02-12 |
JP2016533576A (en) | 2016-10-27 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11385760B2 (en) | Augmentable and spatially manipulable 3D modeling | |
US8352903B1 (en) | Interaction with partially constructed mobile device applications | |
CN111198730B (en) | Method, device, terminal and computer storage medium for starting sub-application program | |
CN102362251B (en) | For the user interface providing the enhancing of application programs to control | |
US8239840B1 (en) | Sensor simulation for mobile device applications | |
EP2184668B1 (en) | Method, system and graphical user interface for enabling a user to access enterprise data on a portable electronic device | |
AU2011358860B2 (en) | Operating method of terminal based on multiple inputs and portable terminal supporting the same | |
US20130041938A1 (en) | Dynamic Mobile Interaction Using Customized Interfaces | |
US10768881B2 (en) | Multi-screen interaction method and system in augmented reality scene | |
US20190391715A1 (en) | Digital supplement association and retrieval for visual search | |
KR20160141838A (en) | Expandable application representation | |
JP7104242B2 (en) | Methods for sharing personal information, devices, terminal equipment and storage media | |
EP2553561A2 (en) | Interacting with remote applications displayed within a virtual desktop of a tablet computing device | |
KR20140147095A (en) | Instantiable gesture objects | |
US20240320269A1 (en) | Digital supplement association and retrieval for visual search | |
KR20160140932A (en) | Expandable application representation and sending content | |
WO2019157870A1 (en) | Method and device for accessing webpage application, storage medium, and electronic apparatus | |
JP2024112912A (en) | Digital supplemental association and retrieval for visual search | |
KR101710667B1 (en) | Device and method for providing service application using robot | |
Helal et al. | Mobile platforms and development environments | |
WO2022083554A1 (en) | User interface layout and interaction method, and three-dimensional display device | |
JP2002169640A (en) | Information processing equipment, method and recording medium | |
US10290151B2 (en) | AR/VR device virtualisation | |
JP6145577B2 (en) | Method and system for generating a dynamic user interface | |
US10845953B1 (en) | Identifying actionable content for navigation |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PUAI | Public reference made under article 153(3) EPC to a published international application that has entered the European phase | Free format text: ORIGINAL CODE: 0009012
| 17P | Request for examination filed | Effective date: 20160128
| AK | Designated contracting states | Kind code of ref document: A1; Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR
| AX | Request for extension of the european patent | Extension state: BA ME
| DAX | Request for extension of the european patent (deleted) |
| A4 | Supplementary search report drawn up and despatched | Effective date: 20161206
| RIC1 | Information provided on ipc code assigned before grant | Ipc: G06F 17/30 20060101AFI20161130BHEP
| STAA | Information on the status of an ep patent application or granted ep patent | Free format text: STATUS: THE APPLICATION HAS BEEN WITHDRAWN
| 18W | Application withdrawn | Effective date: 20170519