EP3014387A1 - Methods and systems for generating dynamic user interface - Google Patents

Methods and systems for generating dynamic user interface

Info

Publication number
EP3014387A1
Authority
EP
European Patent Office
Prior art keywords
applications
application
output
hardware
client
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
EP14834567.1A
Other languages
German (de)
French (fr)
Other versions
EP3014387A4 (en)
Inventor
Rei-Cheng HSU
Hsiu-Ping Lin
Chi-Jen WU
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Fiiser Inc
Original Assignee
Fiiser Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Fiiser Inc filed Critical Fiiser Inc
Publication of EP3014387A1
Publication of EP3014387A4

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90Details of database functions independent of the retrieved data types
    • G06F16/95Retrieval from the web
    • G06F16/951Indexing; Web crawling techniques
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90Details of database functions independent of the retrieved data types
    • G06F16/95Retrieval from the web
    • G06F16/958Organisation or management of web site content, e.g. publishing, maintaining pages or automatic linking
    • G06F16/972Access to data in other repository systems, e.g. legacy data or dynamic Web page generation

Definitions

  • the present invention relates to methods and systems for generating user interfaces on electronic devices.
  • the present invention relates to methods and systems for generating user interfaces on electronic devices that provide dynamic outputs of items presented on the interfaces to users.
  • online search results and mobile app markets typically show static text and/or images for each item presented in the interface, whether the item is a webpage, a software application, an online PC game, a mobile app, etc....
  • the text and/or images are static because they are usually predetermined by developers who created the items. This static information offers useful but limited information to users because the static information presented may be stale as the items may have recently changed.
  • static text and images provide little information regarding look and feel for each item.
  • users would need to download, execute and/or install those items. For example, if the item is a web page, users would need to enter that website, or, if the item is a mobile app, users would need to download, install and execute the app.
  • what is needed is an improved user interface that can provide more than just static pre-determined text and/or images, so that users can receive robust, up-to-date information for the items presented on the interface, gain a good idea of the look and feel of each item, and even interact with the items directly on the interface without having to download, install or execute the items on their own devices.
  • the present invention provides methods and systems for generating user interfaces on electronic devices that provide dynamic outputs of items presented on the interfaces to users.
  • the invention provides a system for generating a dynamic user interface, comprising one or more clients; one or more servers networked with the one or more clients; one or more applications residing in the one or more servers; and one or more robots residing in the one or more servers, wherein the one or more robots are configured to execute the one or more applications and the one or more servers are configured to provide output of the one or more applications to the one or more clients as the one or more applications are being operated by the one or more robots.
  • the present invention provides a system for generating a dynamic user interface comprising: one or more clients; one or more servers networked with the one or more clients; one or more applications residing in the one or more servers; and one or more client applications residing in the one or more clients; wherein the one or more client applications are coupled with the one or more applications residing in the server in order to enable user interaction with the one or more applications via the one or more clients.
  • the system further comprises a supervisor residing in the one or more servers.
  • the output of the system comprises text, snapshots or a series of snapshots, or partial or full time-lapsed visual and/or audio data taken from output of the one or more applications.
  • the one or more servers may receive one or more requests from the one or more clients.
  • the request may be related to an online search, a request for one or more applications for application rental purposes, or a request for one or more applications for application purchase purposes.
  • output of the one or more applications relevant to the one or more requests is transmitted to the client, and the relevance may be determined using in-app data.
  • the system may comprise a database for storing in-app data, which includes the output from the one or more applications.
  • the source of the output to the one or more clients comprises output stored in the database.
  • the system may further comprise a client application residing in the client configured to display output of the one or more applications transmitted from the server via the media output.
  • the system further comprises a media output residing in the client.
  • the system also comprises a client application residing in the client configured to display output of the one or more applications transmitted from the server via the media output.
  • the output of the applications shown by the client application displayed on the media output may be configured to allow user interaction with the one or more applications via the client application.
  • the output of the one or more applications shown by the client application displayed on the media output may also be coupled with the one or more corresponding applications, wherein the coupling comprises communication of a coordinate and event tag pair.
  • the system may further comprises a means for simulating physical motion that is required for interacting with the one or more applications by simulation based on user interaction with the output of the one or more applications displayed by the client applications on the media output.
  • the client further comprises one or more hardware devices.
  • the one or more applications may be coupled with the one or more hardware devices, wherein the coupling comprises communication of hardware values.
  • the one or more applications may be configured to receive the hardware values from at least one of: a driver on the client corresponding to the one or more hardware devices, a pseudo driver configured to receive the hardware values, an HAL, and a library coupled with the one or more applications.
  • the system may further comprise one or more virtual machines to assist the one or more applications that cannot run natively on the one or more servers.
  • an instance of the one or more applications is created to facilitate user interaction.
  • the instance of the application created for the user interaction may begin at the beginning of the application, begin at the place at which the application was executing when the user interaction with the application was initiated or begin at the place in the application that is most relevant to the user request.
  • the invention provides a method for generating a dynamic user interface comprising the steps of: executing one or more applications on one or more servers using one or more robots; and transmitting output of the one or more applications to one or more clients.
  • the transmitted output of the one or more applications includes text, snapshots or a series of snapshots, partial or full time-lapsed visual and/or audio output of the one or more applications.
  • the method further comprises the step of receiving one or more requests from the one or more clients.
  • the one or more requests may comprise one or more online search queries, or one or more requests for one or more applications for application rental purposes or application purchase purposes.
  • the step of transmitting output comprises output of the one or more applications relevant to the one or more requests, wherein relevance of the one or more applications is determined using in-app data.
  • the step of transmitting output of the one or more applications may be done live or near live as the output is being generated by the one or more applications.
  • the method further comprises the step of storing in-app data, including output from the one or more applications in the one or more databases.
  • the step of transmitting output of the one or more applications may be done by transmitting output stored in the one or more databases.
  • the method may also comprise the step of supporting the execution of the one or more applications using one or more application servers.
  • the method further comprises the step of storing in-app data, which includes data exchanged between the one or more applications and the one or more corresponding application servers, in one or more databases.
  • the method further comprises the step of displaying output of the one or more applications on one or more media outputs.
  • the method may further comprise the step of displaying output of the one or more applications on one or more media outputs using one or more client applications, according to certain examples of the invention. In other examples, the method further comprises the step of allowing user interaction with the one or more applications via output of the one or more applications displayed by the client applications on the media output, and the step of coupling output of the one or more applications shown by the one or more client applications displayed on the one or more media outputs to the one or more corresponding applications. According to one example, this coupling step comprises communicating one or more coordinate and event tag pairs.
  • the method may further comprise the step of simulating physical motion required for interacting with the one or more applications, based on user interaction with output of the one or more applications displayed by the one or more client applications on the one or more media outputs.
  • the method comprises the step of creating an instance of the application to enable user interaction with the instance of the application.
  • the present invention provides a method for generating a dynamic user interface comprising the steps of: creating and initiating instances of an application on one or more servers; and coupling the instances of applications on one or more servers with one or more clients located remotely with respect to the one or more servers to enable user interaction with the one or more applications using the one or more clients.
  • FIG. 1 illustrates an output of an embodiment of the dynamic user interface of the present invention.
  • FIG. 2 illustrates an output of a second embodiment of the dynamic user interface of the present invention in which user interaction with the interface is possible.
  • FIG. 3 illustrates a preferred embodiment of the dynamic user interface system of the present invention.
  • FIG. 4 illustrates a software and related hardware architecture of a preferred embodiment of the system of the dynamic user interface of the present invention.
  • FIGs. 5a and 5b illustrate process flows of a preferred embodiment of the method of the dynamic user interface of the present invention.
  • FIGs. 5c and 5d illustrate process flows of a second preferred embodiment of the method of the dynamic user interface of the present invention where no robots are required to execute applications.
  • FIG. 6 illustrates a preferred embodiment of the output of dynamic user interface showing output of an application related to taking photos using cameras.
  • An exemplary dynamic user interface of the present invention preferably comprises a method and system for generating an interface that displays information related to one or more applications, wherein, for an application, the dynamic user interface preferably displays output of the application including text, one or more images, one or more snapshots, part or full time-lapsed visual and/or audio of the applications as the applications are being executed without requiring users to download, install or execute the application.
  • FIG. 1 illustrates an output of a preferred embodiment of the dynamic user interface of the present invention where at least a part of the output is preferably streamed from a remote server (e.g., such as a server 10 described and illustrated with reference to FIG. 3) on which the applications are being executed by robots.
  • the dynamic user interface of the present invention provides information regarding the applications, including look and feel of the applications.
  • the dynamic user interface is configured to allow users to interact with the applications.
  • the applications running on remote servers may be coupled with hardware devices (e.g., a sensor or an input device) located within user devices such that hardware values may be passed to the applications.
  • the dynamic user interface of the present invention is preferably capable of handling a variety of applications.
  • an application comprises a mobile app
  • the dynamic user interface of the present invention preferably displays text, one or more snapshots or part or full time-lapsed visual and/or audio output of the mobile app as the app is being executed.
  • the dynamic user interface of the present invention preferably displays text, one or more snapshots or part or full time-lapsed visual and/or audio output generated by the application when one or more robots operate the game as if the game was being played by a user on a smart device, and a user looking at the dynamic user interface can, therefore, be watching the mobile game as it is being played by the robot.
  • an application comprises a website with multiple web pages
  • the dynamic user interface of the present invention preferably displays text, one or more snapshots or part or full time-lapsed visual and/or audio output of the website as if a user is clicking through various web pages of the website.
  • the present invention can be applied to any application whose output comprises text, moving images and/or audio such as videos, animations or multiple static images such as pictures or webpages.
  • the robots are not necessary for implementing the dynamic user interface of the present invention, since some of the applications may already come with a subprogram or an activity that runs at least a part of the application to illustrate to users how the application is executed/operated (i.e., these kinds of applications can generate dynamic outputs automatically after the subprogram/activity is initiated).
  • the system of the present invention only needs to initiate/activate this type of application and stream the output to the corresponding part of the dynamic user interface without requiring one or more robots to operate the application.
  • the dynamic user interface of the present invention not only provides output of the applications but also allows users to interact with the applications via the dynamic user interface without requiring the users to download, install or execute the applications.
  • FIG. 2 illustrates a preferred embodiment of the present invention that is configured for user interaction.
  • the interface preferably allows users to input text messages in box 912 and submit that text message to the application without the need to download the application 900 locally.
  • users can control map application 800 via the dynamic user interface of the present invention by clicking home button 918 or map button 920 directly on the interface.
  • the dynamic user interface of the present invention not only displays output of applications, but also provides users with the opportunity to interact with the applications via the dynamic user interface.
  • the dynamic user interface is configured to allow users to interact with applications displayed on the interface that require physical motion to control them, such as rotating, tilting or shaking user devices.
  • these physical motions can preferably be simulated with user interaction with output of the applications displayed on the interface.
  • a user can interact with an application displayed on the dynamic user interface by dragging visual output of the application in one or more directions in order to simulate physical motion.
  • the application is a sports car driving game that allows a user to control direction of the car in the game by physically tilting a game device left or right
  • dragging the output of the application to the right or left on the dynamic user interface allows the user to simulate this tilting motion.
  • a user can interact with the application by dragging the search result left and right in quick succession in order to simulate a shaking motion.
  • the dynamic user interface of the present invention is configured to couple with hardware devices of the device on which the interface of the present invention is displayed and use hardware values generated by the hardware devices as input for the application in question.
  • the application in question is preferably capable of receiving actual coordinates (or geographic location) from the user device on which the interface of the present invention is displayed via a GPS module of the device, or even simulated location information via such means as an IP2Location function or other functions capable of finding geo-locations from IP addresses.
  • the application is capable of gathering and receiving changes in orientation of the interface of the present invention, or of the device on which the interface of the present invention resides, and rotating accordingly to help users better control/operate the application.
  • the user interface can be configured to focus on a specific application by, for example, highlighting the applications/the UI or output of the application while concealing those of other applications shown on the dynamic user interface.
  • the dynamic user interface of the present invention is equipped with the ability to communicate intelligently with the device in an attempt to provide users with the same experience as if the search results were installed locally on the device.
  • the dynamic user interface of the present invention is particularly useful in the context of providing search results.
  • search results provided by popular search engines such as Google and Yahoo are limited to static data and images.
  • the dynamic user interface of the present invention is capable of providing text, one or more snapshots or part or full time-lapsed visual and/or audio output of the search results as the search results are being executed by robots.
  • because the present invention has access to data generated by the applications while the applications are running (such dynamic data is otherwise termed "in-app data"), the search may be based on the in-app data in addition to the static description of a search result.
  • the dynamic user interface of the present invention allows users to interact with application outputs displayed on the search result interface, such as illustrated in FIGs. 1 and 2.
  • the present invention is able to provide greater search accuracy, more information and a much better idea of the look and feel of the search results to the user than conventional search results, without requiring users to download, install or execute the applications.
  • the dynamic user interface of the present invention is also potentially valuable for application purchase purposes or application rental purposes.
  • the present invention can be applied to mobile app marketplaces, such as Android app stores, where the present invention allows users to experience the applications before purchasing and/or downloading them.
  • the present invention can also be useful for businesses that wish to rent out applications rather than sell applications where, rather than downloading applications, users can use the rented applications via the dynamic user interface of the present invention.
  • the present invention can be used in a plethora of business contexts.
  • FIG. 3 depicts a preferred embodiment of a system for generating the dynamic user interface of the present invention.
  • the system preferably comprises one or more servers 10, one or more clients 20 and/or one or more application servers 30.
  • Each server 10 preferably further comprises a supervisor 100, one or more robots 110, one or more applications 120, a processor 130, a network interface 140, a database 150 and/or one or more virtual machines 160.
  • the supervisor 100 preferably comprises a software program that acts as a controller for the present invention.
  • Each robot 110 preferably comprises a software program configured to run the one or more applications 120.
  • Each application 120 preferably comprises a software application, an online PC game, a mobile app, a web browser, etc.
  • the processor 130 is preferably configured to process data for components of the server 10 such as one of the supervisor 100, the robot 110, the applications 120, the network interface 140, a database 150 or one or more virtual machines 160.
  • the database 150 preferably stores data when required by the system.
  • the system of the present invention preferably further comprises the one or more virtual machines 160, which are capable of assisting the applications 120 to run on the server 10 if any of the applications 120 are unable to run natively on an operating system of the server 10.
  • the server 10 preferably comprises multiple virtual machines 160 so that the server 10 is capable of emulating a diversity of operating systems such as Android, Windows, iOS, UNIX, etc.
  • the application servers 30 preferably comprise servers that are configured to communicate with and/or support execution of the corresponding applications 120.
  • an application 120 may comprise an online PC game; in this case, corresponding application server 30 preferably comprises a server that hosts the online PC game 120 and performs tasks such as but not limited to receiving data, providing data and/or processing data for the online PC game 120.
  • application 120 may comprise a mobile app; in this case, the corresponding application server 30 preferably comprises a server that hosts the mobile app 120 and performs tasks such as but not limited to receiving data, providing data and/or processing data for the corresponding mobile app 120.
  • application 120 may comprise a web browser capable of displaying websites; in this case, the corresponding application server 30 preferably comprises a web server that hosts websites and performs tasks such as but not limited to receiving data, providing data and/or processing data for the web browser application 120.
  • application 120 may be a stand-alone software application that does not require any application server 30 to operate so that no corresponding application servers 30 are required in the system of this example.
  • the one or more clients 20 comprise one or more input modules 210, one or more network interfaces 220, one or more processors 230, one or more media outputs 240, one or more hardware devices 250 and/or one or more client applications 260.
  • the one or more input modules 210 are configured to receive inputs such as user requests.
  • the input module 210 may comprise an onscreen keyboard, a physical keyboard, a handwriting input module, a voice input device such as a microphone, or a combination thereof.
  • the one or more network interfaces 220 allow communications between the server 10 and the client 20.
  • the media output 240 is preferably capable of outputting text, visual and/or audio information such as output received from the server 10 related to the one or more applications 120.
  • the media output 240 is also capable of receiving input from a user.
  • the media output 240 is capable of detecting the position of a pointing device such as a cursor or, in the case of a touch-sensitive screen, the position where a user makes physical contact with the media output 240, such as the tip of a stylus or a finger.
  • the client 20 preferably further comprises one or more hardware devices 250 comprising (but not limited to) a camera, a microphone, a GPS module, an accelerometer, a gyroscope, a light sensor, a thermometer, a magnetometer, a barometer, a proximity sensor, a hygrometer, an NFC module, a loudspeaker or an image sensor.
  • the hardware devices 250 are preferably configured to sense environmental values such as images, sounds, acceleration, ambient temperature, rate of rotation, ambient light level, geomagnetic field, ambient air pressure, proximity of an object relative to the view screen of the device, the relative ambient humidity, coordinates (GPS/AGPS module), etc....
  • the system of the present invention is preferably configured to allow communication between the hardware devices 250 and the applications 120 including these environmental values. A process flow for such communication is explained below in connection with FIG. 5b.
  • the client application 260 can preferably comprise or be configured to couple with a software application, such as a web browser or a customized software application (app) including or coupled with the media output 240, capable of displaying output of the dynamic user interface of the present invention.
  • FIG. 4 illustrates a preferred software and related hardware architecture of the server 10 and the client 20 in a preferred embodiment of the present invention in which users are able to interact with applications displayed in the dynamic user interface.
  • the architecture of the server 10 preferably comprises the virtual machine 160 (if required by the application 120), a kernel 410, a hardware abstraction layer 420, one or more libraries 430, an application framework 440, applications 450 and/or a pseudo driver 460.
  • An architecture of the client 20 preferably comprises the hardware devices 250, a device driver 520, a memory 530, a hardware abstraction layer 540, one or more libraries 550, an application framework 560 and/or a client application 260.
  • the hardware abstraction layers (HAL) 420 and 540 preferably comprise pieces of software that provide applications 450 with access to hardware resources. It should be noted that, although an HAL is a standard part of the Android operating system software architecture, an HAL may not exist in exactly the same form, or exist at all, in other operating systems such as iOS and Windows. Therefore, alternative embodiments of the present invention in which the client 20 runs on a non-Android operating system preferably comprise a similar software architecture for handling hardware control and communication, such as the Windows Driver Model. In another preferred embodiment, no HAL is needed.
  • the applications 450 preferably comprise the supervisor 100, the robots 110 and/or the applications 120. In one preferred embodiment, the application 260 comprises a web browser.
  • the libraries 430 and 550, as well as the application frameworks 440 and 560, preferably provide a software platform on which the applications 450 and 260 may run. As mentioned before, the virtual machine 160 may not be required if the application 120 is able to run natively on the server 10.
  • the memory 530 preferably comprises random access memory on which hardware values generated by the hardware devices 250 may be stored.
  • the pseudo driver 460 is preferably software that converts hardware values received from the client 20 into data that the application 120 or an API of the application 120 can understand, and transmits the converted data to the application 120 or the API.
  • a pseudo driver can be configured to work with one set of hardware values (e.g., it could be configured to handle only GPS coordinates and pass the values to the API related to locations).
  • the pseudo driver 460 can preferably be configured to handle multiple hardware devices 250 for one application 120 (i.e., it can be configured to convert/pass different kinds of hardware values from various hardware devices), therefore facilitating coupling between the application 120 and more than one hardware device. Pseudo drivers are described in further detail below in connection with FIG. 5b.
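  • For illustration only, the following minimal Python sketch (not part of the patent; the names PseudoDriver, deliver and DemoApp are hypothetical) shows the kind of conversion a pseudo driver performs, turning raw client hardware values into the format an application's API is assumed to expect:

```python
# Hypothetical sketch of a pseudo driver (element 460): it receives raw
# hardware values from the client and converts them into the format the
# application 120 (or its API) expects. Names and formats are assumptions.

class PseudoDriver:
    """Converts raw client hardware values for one or more devices."""

    def __init__(self, converters):
        # converters maps a device name to a function that reformats
        # that device's raw values for the application's API.
        self.converters = converters

    def deliver(self, device, raw_value, app):
        if device not in self.converters:
            raise ValueError(f"no converter registered for {device}")
        converted = self.converters[device](raw_value)
        app.receive(device, converted)   # hand the value to the application

class DemoApp:
    def receive(self, device, value):
        print(f"app received {device}: {value}")

# Example: raw GPS arrives as a (lat, lon) tuple; the app's location API
# is assumed here to want a dict with named fields.
driver = PseudoDriver({
    "gps": lambda v: {"latitude": v[0], "longitude": v[1]},
    "accelerometer": lambda v: {"x": v[0], "y": v[1], "z": v[2]},
})
driver.deliver("gps", (25.03, 121.56), DemoApp())
```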
  • various components of the system may be combined into fewer servers or even one single computer.
  • the server 10 and any of the servers 30 may both reside on one machine.
  • various components of the server 10, the client 20 and/or the application server 30 do not need to necessarily reside within one single server or client, but may be located in separate servers and/or clients.
  • the database 150 of the server 10 can be located in its own database server that is separate but networked with the server 10.
  • the media output 240 of the client 20 may not need to be a built-in screen but may be a separate stand-alone monitor networked to the client 20.
  • Figure 5a is a flowchart illustrating a preferred embodiment of the method for generating the dynamic user interface of the present invention.
  • the supervisor 100 preferably initiates the one or more robots 110 to run the one or more applications 120.
  • Robots 110 are preferably programmed to mimic user behavior to automatically execute the applications.
  • robots 110 can be configured to randomly operate applications 120 by randomly probing the UIs of the applications 120. This type of robot 110 is suitable for operating a wide variety of applications 120. Examples of various embodiments of the robot(s) 110 are described in U.S. Patent App. No. 13960779.
  • the robot 110 comprises a software program that uses preprogrammed logic to run the applications 120.
  • the robot 110 comprises a software program that uses an OCR logic to control the applications 120.
  • the robot 110 comprises a software program that operates the applications 120 according to pre-recorded human manipulation of the applications 120, including using logic learnt from human manipulation of the applications 120.
  • the user behavior (e.g., a click on the interface that simulates a "touch/tap" event or a drag movement that simulates a "sliding/moving" event) is preferably detected by the client application 260 and/or the one or more hardware devices 250 and transmitted back to the supervisor 100 and/or the robot 110 to be recorded, forming a script (or a part of a programming code) that helps robots operate/control the same application later.
  • the robots 110 preferably become more "intelligent" by learning to behave more like humans. Accordingly, output of the applications 120 shown on the dynamic user interface will be more meaningful to users as the robots 110 become more "human-like."
  • the robot 110 comprises a software program that operates the applications 120 according to a combination of two or more of the four types of logic described.
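  • As a purely illustrative sketch of the robot logic described above (random UI probing plus recording human manipulation into a replayable script), the following hypothetical Python fragment shows one way such a robot could be structured; all names and the event format are assumptions:

```python
# Hypothetical sketch of a "random probing" robot (element 110) combined
# with script recording: the robot taps random coordinates inside the
# application's window, and observed human events can be recorded into a
# script that a robot replays later. All names here are illustrative.
import random

class Robot:
    def __init__(self, app, width, height):
        self.app, self.width, self.height = app, width, height
        self.script = []                      # recorded (event, x, y) tuples

    def probe_randomly(self, steps=10):
        for _ in range(steps):
            x, y = random.randrange(self.width), random.randrange(self.height)
            self.app.send_event("TOUCH", x, y)

    def record(self, event, x, y):
        """Called when a human user's interaction is observed."""
        self.script.append((event, x, y))

    def replay(self):
        """Operate the application later using the recorded script."""
        for event, x, y in self.script:
            self.app.send_event(event, x, y)

class DemoApp:
    def send_event(self, event, x, y):
        print(f"{event}[{x}, {y}]")

robot = Robot(DemoApp(), width=1080, height=1920)
robot.probe_randomly(steps=3)       # random operation of the application
robot.record("TOUCH", 33, 66)       # learn from human manipulation
robot.replay()
```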
  • In step 1010, the supervisor 100 or the robot 110 preferably determines whether each of the applications 120 is capable of running natively on the operating system of the server 10 or would need to be executed on the virtual machine 160 in step 1020. In a preferred embodiment, if it is already known that the applications 120 can be run natively on a specific OS other than the original OS run on the server 10, one can skip step 1010 and go directly to step 1020 to allow the applications 120 to be executed on the corresponding OS on the virtual machine 160. In step 1030, if needed, the applications 120 connect to the one or more corresponding application servers 30 in order to run properly in step 1040. In step 1050, the applications 120 output data, including but not limited to text, visual and/or audio output. In step 1060, the supervisor 100 determines whether or not to store the data output from the applications 120 in the database 150. If storage is required, the data output by the applications 120 is preferably stored in the database 150 in step 1070.
  • data stored within the database 150 comprises output of the one or more applications 120 as they are being executed by the robot 110.
  • the output preferably comprises text, visual and/or audio output.
  • data stored within the database 150 comprises data transmitted between the application 120 and its corresponding application server 30.
  • data stored within the database 150 comprises both types of data described.
  • the supervisor 100 decides to store all output from the applications 120, as well as communication between the applications 120 and the corresponding application servers 30, in their entirety in the database 150. In another preferred embodiment of the present invention, the supervisor 100 may decide to store only partial data. This may preferably be done for reasons such as conserving server 10 resources, including processing power and/or storage space. For example, if one of the applications 120 in question comprises a full-length movie, rather than storing the entire movie, the system of the present invention stores only a series of snapshots of the movie, a short snippet of the movie, a series of short snippets of the movie, or a full-length version of the movie in a lower resolution.
  • steps 1000 to 1070 are repeated continuously. In another preferred embodiment, steps 1000 to 1070 are repeated only periodically or on an as-needed basis. For example, the present invention would run only if there is a user, if a user requests information that is not available in the database 150, or if there are adequate system resources, etc.
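  • A condensed, hypothetical Python sketch of the FIG. 5a flow (steps 1000-1070) follows; the classes are minimal stand-ins for server 10, application 120 and database 150, and the step mapping in the comments is an interpretation of the text above:

```python
# Condensed sketch of the FIG. 5a flow (steps 1000-1070). The classes are
# minimal stand-ins; the real elements (supervisor 100, robot 110,
# application 120, database 150, virtual machine 160) are far richer.

class App:
    def __init__(self, name, native=True, needs_server=False):
        self.name, self.native, self.needs_server = name, native, needs_server

class Database:
    def __init__(self):
        self.rows = {}

    def store(self, key, output):
        self.rows[key] = output

def run_application(app, db, store_output=True):
    # step 1000: the supervisor initiates a robot to run the application
    print(f"robot operating {app.name}")
    # steps 1010-1020: run in a virtual machine if the app is not native
    if not app.native:
        print(f"starting virtual machine for {app.name}")
    # steps 1030-1040: connect to the corresponding application server
    if app.needs_server:
        print(f"connecting {app.name} to its application server")
    # step 1050: the application generates text/visual/audio output
    output = f"output of {app.name}"
    # steps 1060-1070: the supervisor decides whether to store the output
    if store_output:
        db.store(app.name, output)
    return output

db = Database()
run_application(App("maps", native=False, needs_server=True), db)
```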
  • a user of the system of the present invention can generate a request via input module 210.
  • the input module 210 comprises a physical keyboard
  • requests may be generated by typing certain instruction(s)/keyword(s).
  • the input module comprises a voice input device such as a microphone
  • requests may be generated after receiving an audio form of the instruction(s)/keyword(s) and recognizing the audio form to generate the request.
  • the application 260 transmits the request to the supervisor 100 in step 2010 via network interfaces 220 and 140.
  • the supervisor 100 receives that request in step 2020.
  • the supervisor 100 identifies the applications 120 that are relevant to the request.
  • data stored within the database 150 may be used by the supervisor 100 to determine relevance of one of the applications 120 to a particular request.
  • data stored within the database 150 may comprise in-app data which preferably comprises text, visual and/or audio output of the one of the applications 120 as the one of the applications 120 is running as well as data communicated between the one of the applications 120 and its corresponding application server 30. Determination of relevance may be performed using a variety of algorithms, the simplest of which comprises matching words of the request to the underlying search data.
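  • The "simplest" relevance algorithm mentioned above, matching words of the request to the stored in-app data, might look like the following illustrative Python sketch (the sample database contents are invented for the example):

```python
# Minimal sketch of the simplest relevance algorithm the text mentions:
# match words of the request against stored in-app data (text captured
# while each application 120 was running). Purely illustrative.

def relevance(request: str, in_app_text: str) -> int:
    query_words = set(request.lower().split())
    app_words = set(in_app_text.lower().split())
    return len(query_words & app_words)   # number of shared words

database = {
    "restaurant-x-app": "restaurant X menu location hours reservations",
    "racing-game": "sports car driving tilt steering lap times",
}
request = "location of restaurant X"
ranked = sorted(database, key=lambda app: relevance(request, database[app]),
                reverse=True)
print(ranked)   # ['restaurant-x-app', 'racing-game']
```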
  • In step 2040, the supervisor 100 decides to transmit output of the applications 120.
  • the supervisor 100 transmits output from the one of the applications 120 in its entirety. In another preferred embodiment of the present invention, the supervisor 100 is preferably configured to transmit only partial output. For example, in a preferred embodiment, the supervisor 100 is capable of limiting transmissions to the client 20 in order to conserve system resources such as processing power and/or storage space. Specifically, if the one of the applications 120 in question comprises a full-length movie, rather than transmitting the whole movie, the supervisor 100 preferably limits transmission of output to the client application 260 to only a series of snapshots of the movie, a short snippet of the movie or a series of short snippets of the movie.
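  • As an illustration of limiting output to conserve resources, the following hypothetical sketch keeps only a series of evenly spaced snapshots of a long recording rather than the full-length output; the frame representation is an assumption:

```python
# Sketch of the supervisor's decision to transmit (or store) only partial
# output to conserve resources: e.g., keep evenly spaced frames of a long
# recording as a series of snapshots. The frame representation is assumed.

def reduce_output(frames, max_frames=10):
    """Return at most max_frames evenly spaced snapshots of the output."""
    if len(frames) <= max_frames:
        return frames
    step = len(frames) // max_frames
    return frames[::step][:max_frames]

full_movie = [f"frame-{i}" for i in range(100_000)]
snapshots = reduce_output(full_movie)
print(len(snapshots))   # 10 snapshots instead of the full-length movie
```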
  • Upon receiving output of the applications 120 relevant to the request in question in step 2050, the client application 260 displays the output on the media output 240 in step 2060.
  • the media output 240 is configured to display one or more outputs from the one or more relevant applications 120.
  • the output preferably comprises text, audio, one or more snapshots and/or part or full time-lapsed visual and/or audio output of the applications 120 as the applications 120 are being executed, live or in near real time.
  • the data streamed from server 10 is not live or near live but, rather, sourced from data stored previously in database 150.
  • FIG. 5b illustrates a preferred method of the present invention where a user is able to interact with one or more applications 120 from the dynamic user interface.
  • the supervisor 100 preferably couples to client application output displayed on the media output 240 and/or the input module 210 so that the supervisor 100 is able to detect if user initiates interaction with a particular one of the applications 120.
  • This preferably comprises mapping output of the client application 260 displayed on the media output 240 using coordinates.
  • Cartesian coordinates [33, 88] preferably indicate a location at the 33rd pixel in the row and the 88th pixel in the column from the top-left pixel of the window displaying an activity of the application 120.
  • output of the client application 260 displayed on the media output 240 may be coupled to the supervisor 100.
  • coordinate systems other than Cartesian coordinate systems may be used as required.
  • coupling output of the client application 260 displayed on the media output 240 preferably further comprises use of one or more event tags to indicate an event associated with the coordinates.
  • "TOUCH[33, 66]" represents a user clicking on or touching the screen location addressed [33, 66] in pixels on the media output 240.
  • the coordinates and/or event tags may require proper translation or conversion for the supervisor 100, for such reasons as the supervisor 100 being configured for a different resolution than that of the media output 240, or only part of the media output 240 being used for displaying the dynamic user interface.
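  • The translation/conversion of coordinate and event tag pairs described above might, purely as an illustration, be implemented along the following lines; the pair format "TOUCH[x, y]" is taken from the example above, while the function name and scaling scheme are assumptions:

```python
# Sketch of translating a coordinate-and-event-tag pair such as
# "TOUCH[33, 66]" from the client's resolution to the resolution the
# supervisor/application is configured for, including an offset when only
# part of the media output shows the dynamic user interface. Assumed names.
import re

def translate(pair, client_res, server_res, origin=(0, 0)):
    event, x, y = re.match(r"(\w+)\[(\d+),\s*(\d+)\]", pair).groups()
    x, y = int(x) - origin[0], int(y) - origin[1]       # remove UI offset
    sx = x * server_res[0] // client_res[0]             # scale horizontally
    sy = y * server_res[1] // client_res[1]             # scale vertically
    return f"{event}[{sx}, {sy}]"

# A touch at [330, 660] on a 1080x1920 client maps to a 540x960 server view.
print(translate("TOUCH[330, 660]", (1080, 1920), (540, 960)))  # TOUCH[165, 330]
```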
  • the supervisor 100 is preferably able to detect when user action indicates that the user wishes to initiate interaction with a particular one of the applications 120 displayed on the dynamic user interface of the present invention.
  • A user preferably initiates interaction with the applications 120 in step 3010, which can be done in a variety of ways.
  • the media output 240 comprises a touch sensitive screen
  • a user preferably makes physical contact with output of a particular one of the applications 120 displayed on the media output 240, such as with the tip of a finger.
  • a user can interact with the one of the applications 120 with an onscreen cursor by aiming and clicking a mouse on the output of the particular one of the applications 120 displayed on the media output 240.
  • a set of coordinates and touch event tags are sent to the supervisor 100 to indicate that a user wishes to interact with the one of the applications 120 corresponding to location of the coordinates.
  • the input module comprises a keyboard
  • users can simply hit specific keys, such as the number 9, in order to initiate interaction with the one of the applications 120 corresponding to number 9.
  • supervisor 100 creates a new instance of application 120 specifically to interact solely with that user. Since this new instance of the application is preferably manually operated by the user, there is no need to connect it to a robot 110. If there is more than one user wanting to interact with one application 120, multiple instances of that application 120 can be created on the server 10 such that each user can interact with the instance of the application 120 independently. It should be noted that, if application 120 requires a virtual machine 160 to run, in one preferred embodiment, the new instance of application 120 can be created within the same virtual machine 160 as other instances of application 120. In another preferred embodiment, the new instance of application 120 can be created running within its own new virtual machine 160 that is not shared with any other instances of application 120.
  • the new instance of the application 120 is preferably initialized from the beginning as if the user just started the application. For example, if the application in question is a game, the new instance of the application 120 can start running at the very beginning of the game so that the user can experience the game from the beginning. In another example, if the application 120 is a mobile app that offers the users the ability to listen to streamed audio, the new instance of mobile app can start running at the first default page where users can browse through different categories of music.
  • the new instance of the application 120 can run from exactly or approximately where the user requested interaction with the application 120.
  • the application 120 in question is a mobile app that displays an animation or video
  • the new instance of mobile app can start running at exactly or approximately where the animation played to when user requested interaction with that application 120.
  • the new instance of application 120 can bring the user directly to the particular page of the multi-page mobile app that shows the location information of a reseller of the new car
  • the new instance of the application 120 can start running at the part of the application 120 that is most relevant to the associated request.
  • application 120 in question is a multi-webpage website for restaurant X and the request is regarding location of restaurant X
  • the robot can bring the new instance of the webpage directly to the particular webpage of the multi-webpage website that shows the location information of restaurant X.
  • application 120 in question is a multiscreen mobile app for restaurant X and the request is regarding location of restaurant X
  • the robot can bring the new instance of the mobile app to the particular screen of the multi-screen mobile app that shows the location information of restaurant X.
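  • The three possible starting points for a new instance (from the beginning, from where execution stood when interaction was initiated, or from the most relevant part) can be summarized in the following hypothetical sketch; the screen identifiers are invented for the example:

```python
# Sketch of the three starting points described for a new instance of
# application 120: from the beginning, from where execution stood when the
# user initiated interaction, or from the part most relevant to the request.
from enum import Enum

class StartMode(Enum):
    BEGINNING = 1        # e.g., a game starts at its first screen
    RESUME = 2           # e.g., a video continues where the robot left it
    MOST_RELEVANT = 3    # e.g., jump to restaurant X's location page

def start_instance(app_state, mode, relevant_screen=None):
    if mode is StartMode.BEGINNING:
        return "screen-0"
    if mode is StartMode.RESUME:
        return app_state                # position captured at interaction time
    return relevant_screen              # chosen from the user's request

print(start_instance("screen-42", StartMode.MOST_RELEVANT,
                     relevant_screen="location-page"))
```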
  • client 20 is preferably coupled with application 120 running on server 10. This preferably comprises coupling media module 240 as well as hardware devices 250 to application 120.
  • In step 3030, as with coupling to supervisor 100, coupling output of client application 260 displayed on media output 240 to application 120 preferably comprises mapping using coordinates and/or event tags.
  • the coordinates may require proper translation or conversion for application 120, for reasons such as that application 120 may be configured for a different resolution than that of media output 240 and/or output of application 120 occupies only a portion of output of client application 260 on media output 240.
  • the dynamic user interface of the present invention can be configured to allow users to interact with an application that requires physical motion to control the application.
  • a user can preferably interact with or operate and control an application displayed on the dynamic user interface by dragging visual output of application 120 in one or more directions in order to simulate physical motion.
  • the dragging motion could preferably cause a series of "DRAG[X, Y]" coordinate and event tag pairs to be generated, where changes in X and Y values could be interpreted by the corresponding application 120 as a particular direction.
  • client application 260 may convert the series of "DRAG[X, Y]" coordinate and event tag pairs to data that application 120 can interpret as a direction for properly interacting with application 120.
  • the application is a sports car driving game that allows a user to control direction of the car in the game by physically tilting a game device left or right
  • dragging the output of the application to the right or left on the dynamic user interface allows the user to simulate this tilting motion in application 120.
  • the game allows a user to utilize a shaking motion to interact with the game
  • a user can interact with the search result by dragging it left and right in quick succession to simulate a shaking motion in order to interact with application 120.
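  • A hypothetical sketch of interpreting a series of DRAG[X, Y] pairs as simulated physical motion follows; the thresholds and the rule that rapid left-right alternation reads as a shake are assumptions, not from the patent:

```python
# Sketch of interpreting a series of DRAG[X, Y] pairs as simulated physical
# motion: a sustained drag in one direction reads as a tilt, while rapid
# left-right alternation reads as a shake. Thresholds are assumptions.

def interpret_drags(xs):
    """xs is the sequence of X coordinates from successive DRAG events."""
    deltas = [b - a for a, b in zip(xs, xs[1:])]
    sign_changes = sum(1 for a, b in zip(deltas, deltas[1:]) if a * b < 0)
    if sign_changes >= 3:
        return "SHAKE"                       # quick left-right succession
    if sum(deltas) > 0:
        return "TILT_RIGHT"                  # net drag to the right
    return "TILT_LEFT"

print(interpret_drags([100, 140, 180, 220]))          # TILT_RIGHT
print(interpret_drags([100, 160, 90, 170, 80, 150]))  # SHAKE
```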
  • hardware devices 250 of client 20 can also be coupled with application 120. Coupling hardware devices 250 to application 120 preferably comprises step 3040, where client 20 obtains the hardware settings required by application 120.
  • A hardware setting is preferably a set of values used to configure the hardware devices 250 required by application 120.
  • the hardware setting can preferably take the form of eight digits (each having a value of "0" or "1") to represent the hardware value requirements of application 120.
  • a hardware setting of [0, 1, 1, 0, 0, 0, 0, 0] may be used to indicate that the 2nd and 3rd driver/hardware devices are required and should be redirected from client 20 to application 120.
  • the hardware setting of application 120 can be obtained by analyzing application 120.
  • the AndroidManifest.xml file indicates how many activities, intent filters, etc. are needed for executing the app, and therefore also provides the hardware requirements of the app.
  • each app executed on the virtual machine can have at least one hardware setting.
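  • The eight-digit hardware setting can be read as a bitmask selecting which drivers to redirect from client 20 to application 120, as in the following illustrative sketch; the device ordering is an assumption:

```python
# Sketch of the eight-digit hardware setting: each bit says whether the
# corresponding driver/hardware device should be redirected from client 20
# to application 120. The device ordering below is an assumption.

DEVICES = ["camera", "microphone", "gps", "accelerometer",
           "gyroscope", "light_sensor", "magnetometer", "barometer"]

def required_devices(setting):
    """setting is a list of eight 0/1 digits, e.g. [0,1,1,0,0,0,0,0]."""
    return [dev for bit, dev in zip(setting, DEVICES) if bit]

# [0, 1, 1, 0, 0, 0, 0, 0]: the 2nd and 3rd devices are required, so their
# drivers are initiated and their values redirected to the application.
print(required_devices([0, 1, 1, 0, 0, 0, 0, 0]))  # ['microphone', 'gps']
```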
  • Upon receiving the hardware settings, supervisor 100 preferably initiates and couples the required hardware devices 250 to application 120 in step 3050. This step preferably involves use of pseudo driver 460 and driver 520.
  • application 120 can be coupled with a plurality of drivers regardless of application 120's hardware setting, with the hardware values transmitted to the second environment from the driver(s) selected by the hardware setting.
  • the driver(s) do not need to run all the time. They can be configured to initiate after receiving the hardware setting, and can be stopped when they are no longer needed by application 120, or client 20 can turn them off when the user switches to another application 120.
  • memory 530 receives one or more hardware values from the driver 520 in step 3060.
  • Client 20 then transmits the hardware values to application 120 by way of HAL 420 of server 10 in step 3070.
  • Pseudo driver 460 receives the hardware values, converts the hardware values into a format appropriate for application 120 and transmits the converted hardware values to application 120 for processing.
  • Hardware values can be transmitted to HAL 420 and passed to application 120 directly.
  • the navigation app is actually running without directly coupling to a real GPS/AGPS module.
  • the GPS signal generated on client 20 cannot be transmitted to the navigation app 120 because the dynamic user interface itself is an application and may not be configured to receive hardware values (e.g., for an Android app, the programmer is required to write a line in the program code that loads a class called "GpsSatellite" in the android.location package to get the coordinates before packing the application package file (e.g., an .apk file in Android)).
  • the dynamic user interface may by default load every class for servicing.
  • the dynamic user interface can dynamically configure itself to load specific kinds of class or to couple with particular hardware after receiving the relevant hardware values (i.e., a program implemented in the dynamic user interface in response to the hardware values to load corresponding classes).
  • drivers corresponding to hardware devices 250 can be coupled with application 120 continuously so that corresponding hardware value(s)/hardware-generated file(s) can be sent to application 120 whenever required via step 3050.
  • hardware values can be buffered in memory 530 when there is no network service and only transferred to application 120 when the network service resumes.
  • hardware values comprise images, sounds, acceleration, ambient temperature, rate of rotation, ambient light level, geomagnetic field, ambient air pressure, proximity of an object relative to the view screen of the device, relative ambient humidity, coordinates (GPS/AGPS module), etc., from hardware such as cameras, microphones, accelerometers, thermometers, gyroscopes, magnetometers, barometers, proximity sensors, hygrometers, etc. of client 20.
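  • Buffering hardware values during a network outage and flushing them when service resumes might, as a hypothetical sketch, look like the following; the send callback and value format are assumptions:

```python
# Sketch of buffering hardware values in memory while the network is down
# and flushing them to application 120 when service resumes. Illustrative.
from collections import deque

class HardwareValueBuffer:
    def __init__(self, send):
        self.queue = deque()
        self.send = send            # function that transmits to the server

    def push(self, value, network_up):
        if network_up:
            self.flush()
            self.send(value)
        else:
            self.queue.append(value)   # hold the value until service resumes

    def flush(self):
        while self.queue:
            self.send(self.queue.popleft())

buf = HardwareValueBuffer(send=lambda v: print("sent", v))
buf.push({"gps": (25.03, 121.56)}, network_up=False)  # buffered
buf.push({"gps": (25.04, 121.57)}, network_up=True)   # flush, then send
```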
  • the supervisor 100 performs many tasks. It should be noted that the supervisor 100 can be structured as one single software module, or the supervisor 100 can comprise several modules divided up by the various functions the supervisor 100 performs. For example, there can be a module of the supervisor 100 specifically for transmitting output of the applications 120, a different module to control database storage, and yet another module for creating instances of applications, etc. In addition, in alternative embodiments where certain tasks performed by supervisor 100 are not required, those modules of supervisor 100 are not included in the method and system of the present invention. For example, if outputs of the application 120 are always transmitted to client 20 in their entirety, then there is no need for a module of the supervisor 100 for deciding what to send to client 20.
  • In the extreme, if most tasks to be performed by supervisor 100 are not required, the remaining functions of the supervisor 100 may be folded into other components of the method and system of the present invention, such as robot 110. For example, if a request from a particular user always refers to one and only one application, the output is always the entirety of the application and there is no need for user interaction, then the control and decision-making functions of supervisor 100 are not needed. Instead, robot 110 can handle receipt of the request in step 2020 and transmission of the requested application 120 in steps 2030 and 2040.
  • FIGs. 4, 5b and 6 may be used to describe a preferred embodiment where a user uses a camera as a hardware device 250 to take a photo for application 120.
  • the dashed lines represent function calls or instructions (calling/instructing the hardware or the corresponding API(s)), and the solid lines represent actual data transfer, e.g., the picture or any other kind of hardware values.
  • supervisor 100 preferably couples to output of client application 260 on media output 240 and listens for user action.
  • a user preferably selects application 120 displayed on the dynamic user interface of the present invention which sends coordinates and user events to supervisor 100.
  • Upon receiving the coordinates and touch event, supervisor 100 initiates application 120.
  • supervisor 100 creates a new instance of application 120 specifically for the user.
  • supervisor 100 couples output of client application 260 displayed on media output 240 to application 120 using coordinates and event tag.
  • supervisor 100 receives hardware settings of application 120 in which one of the hardware 250 required would be a camera.
  • supervisor 100 couples the camera's driver to application 120.
  • In step 3060, hardware 250 is triggered. This may involve a user hitting a button corresponding to a camera in the visual image of the application displayed on the dynamic user interface, such as button 610 of FIG. 6. After receiving the touch event, supervisor 100 recognizes the touch event and applies it to application 120. After any proper conversion for resolution differences, application 120 recognizes that the user wishes to take a picture, since the touch event coordinates indicate that the touch event occurred within the camera trigger button. In an alternative embodiment of the present invention, application 120 may send a set of coordinates to configure an area 620 of media output 240 that corresponds with a button for triggering the camera, as shown in FIG. 6.
  • the step of transmitting a set of coordinates back to application 120 is not required because client application 260 may be configured to recognize the location where the user touched the screen to generate the initial touch event.
  • a view corresponding to a button can be generated locally by the dynamic output interface, and thus its resolution can be fixed locally.
  • the rest of the screen of the dynamic user interface is for displaying output from application 120, and the resolution can be configured to adjust to the bandwidth of the network (e.g., becoming 1080P when the bandwidth is high and 360P when the bandwidth is low).
  • The hardware value is sent to memory 530 in step 3070.
  • The hardware value preferably comprises one or more images taken.
  • this data is then sent to application 120 via HALs 420 and 540.
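  • Tying the camera example together, the following hypothetical end-to-end sketch shows a touch inside the trigger-button area 620 causing a photo to be taken and delivered to the application; all coordinates, names and the delivery path are illustrative assumptions:

```python
# End-to-end sketch of the FIG. 6 camera example: a touch inside the camera
# button area triggers the client camera, and the captured image travels
# through memory and the pseudo driver to application 120. All assumed names.

BUTTON_AREA = (10, 10, 110, 60)      # x1, y1, x2, y2 of trigger button 610

def inside(area, x, y):
    x1, y1, x2, y2 = area
    return x1 <= x <= x2 and y1 <= y <= y2

def handle_touch(x, y, take_photo, deliver_to_app):
    if inside(BUTTON_AREA, x, y):        # the touch lands on the button
        image = take_photo()             # hardware device 250 is triggered
        deliver_to_app(image)            # via memory 530 / pseudo driver 460

handle_touch(50, 30,
             take_photo=lambda: b"\x89PNG...",
             deliver_to_app=lambda img: print(f"app got {len(img)} bytes"))
```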
  • In step 4000, a user enters a request via input module 210.
  • Client 20 then transmits the request to server 10 in step 4010.
  • supervisor 100 determines the application 120 relevant to the request in step 4030.
  • supervisor 100 creates a new instance of the application 120. In creating this new instance of the application 120, supervisor 100 determines whether the application 120 requires a virtual machine as well as application server 30 to run properly in steps 4050-4080. Once the new instance of the application 120 has been created, the application 120 initiates and begins to generate output in step 4090. It should be emphasized that the output of the application 120 at this point is related to normal initiation of the application 120 such as showing the starting screen of the application 120 and not caused by execution of robots 110.
  • supervisor 100 transmits the output to client 20 in step 4100 which client 20 receives in step 4110.
  • client 20 displays the output of the application 120 transmitted from server 10 via client application 260 displayed on media output 240.
  • the supervisor 100 couples output of client application 260 displayed on media output 240 to application 120 using XY coordinates and/or event tags to allow interaction between the user and the application 120 via output of the application 120 shown by client application 260 displayed on media output 240.
  • client 20 obtains hardware settings from application 120.
  • supervisor 100 couples required hardware device 250 to the application 120 using driver 520, HAL 530, pseudo driver 460 and HAL 420. Once the application 120 and hardware devices 250 are coupled, it is then possible to trigger hardware device 250 in step 4160.
  • the hardware device 250 generates hardware values, which can be passed back to the application 120 via pseudo driver 460, which converts the hardware values to a form that can be processed by the application 120. With the application 120 fully coupled to client 20, the user can now use the application 120 via client 20 as if the application 120 were running on client 20, without actually having to download, install and/or execute the application 120 on client 20.
The specification may have presented the method and/or process of the present invention as a particular sequence of steps. However, to the extent that the method or process does not rely on the particular order of steps set forth herein, the method or process should not be limited to the particular sequence of steps described. As one of ordinary skill in the art would appreciate, other sequences of steps may be possible. Therefore, the particular order of the steps set forth in the specification should not be construed as limitations on the claims. In addition, the claims directed to the method and/or process of the present invention should not be limited to the performance of their steps in the order written, and one skilled in the art can readily appreciate that the sequences may be varied and still remain within the spirit and scope of the present invention.

Landscapes

  • Engineering & Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

The present invention relates to methods and systems for generating dynamic user interfaces on electronic devices where the dynamic user interface displays output of one or more applications that are running in a remote server. In one preferred embodiment, the one or more applications are operated by robots. In another preferred embodiment, the one or more applications are coupled to client devices, including hardware devices of the client devices, so as to allow robust user interaction with the one or more applications remotely from the electronic device.

Description

Methods and Systems for Generating Dynamic User Interface
Cross-Reference To Related Applications
The application claims priority to U.S. Provisional Application No. 61862967, filed on August 7, 2013, entitled "METHODS AND SYSTEMS FOR IN-APP
SEARCH AND APP BROWSERS," U.S. Provisional Application No. 61922860, filed on January 1, 2014, entitled "METHODS AND SYSTEMS FOR GENERATING USER INTERFACES RELATING TO APP SEARCH," U.S. Provisional Application No. 61951548, filed on March 12, 2014, entitled "METHODS AND SYSTEMS FOR GENERATING USER INTERFACE RELATING TO APP SEARCH" and U.S.
Provisional Application No. 61971029, filed on March 27, 2014, entitled
"METHODS AND SYSTEMS FOR COMMUNICATIONS BETWEEN APPS AND VIRTUAL MACHINES," which are incorporated by reference herein in their entirety.
Technical Field
The present invention relates to methods and systems for generating user interfaces on electronic devices. In particular, the present invention relates to methods and systems for generating user interfaces on electronic devices that provide dynamic outputs of items presented on the interfaces to users.
Background of the Invention
Currently, user interfaces generated for presenting items such as search results of online search engines such as Google or Yahoo or apps of mobile app markets such as Android's Play Store or Apple's App Store are mostly static in nature. For example, online search results and mobile app markets typically show static text and/or images for each item presented in the interface, whether the item is a webpage, a software application, an online PC game, a mobile app, etc.... The text and/or images are static because they are usually predetermined by developers who created the items. This static information offers useful but limited information to users because the static information presented may be stale as the items may have recently changed. In addition, static text and images provide little information regarding look and feel for each item. To obtain more information as well as experience look and feel, users would need to download, execute and/or install those items. For example, if the item is a web page, users would need to enter that website, or, if the item is a mobile app, users would need to download, install and execute the app.
Therefore, there is a need for an improved user interface that can provide more than just static pre-determined text and/or images so that users can receive robust, up-to-date information for the items presented on the interface as well as to gain a good idea regarding look and feel for the item or even interact with the items directly on the interface without having to download, install or execute the items on their own devices.
Summary of the Invention
The present invention provides methods and systems for generating user interfaces on electronic devices that provide dynamic outputs of items presented on the interfaces to users.
Therefore, in a first principal embodiment, the invention provides a system for generating a dynamic user interface, comprising one or more clients; one or more servers networked with the one or more clients; one or more applications residing in the one or more servers; and one or more robots residing in the one or more servers, wherein the one or more robots are configured to execute the one or more applications and the one or more servers are configured to provide output of the one or more applications to the one or more clients as the one or more applications are being operated by the one or more robots. According to a further embodiment, the present invention provides a system for generating a dynamic user interface comprising: one or more clients; one or more servers networked with the one or more clients; one or more applications residing in the one or more servers; and one or more client applications residing in the one or more clients; wherein the one or more client applications are coupled with the one or more applications residing in the server in order to enable user interaction with the one or more applications via the one or more clients.
In some examples of the invention, the system further comprises a supervisor residing in the one or more servers. And the output of the system comprises text, snapshots or a series of snapshots, or partial or full time-lapsed visual and/or audio data taken from output of the one or more applications. The one or more servers may receive one or more requests from the one or more clients. In other examples, the request may be related to an online search, a request for one or more applications for application rental purposes or a request for one or more applications for application purchase purposes. The output of the one or more applications is transmitted to the client from applications relevant to the one or more requests. And the relevance may be determined using in-app data.
According to certain examples of the invention, the system may comprise a database for storing in-app data, which includes the output from the one or more applications. And the source of the output to the one or more clients comprises output stored in the database. The system may further comprise a client application residing in the client configured to display output of the one or more applications transmitted from the server via the media output. In yet other examples of the invention, the system further comprises a media output residing in the client. And the system also comprises a client application residing in the client configured to display output of the one or more applications transmitted from the server via the media output. In one example, the output of the applications shown by the client application displayed on the media output may be configured to allow user interaction with the one or more applications via the client application. The output of the one or more applications shown by the client application displayed on the media output may also be coupled with the one or more corresponding applications, wherein the coupling comprises communication of a coordinate and event tag pair. In other examples, the system may further comprise a means for simulating physical motion that is required for interacting with the one or more applications by simulation based on user interaction with the output of the one or more applications displayed by the client applications on the media output.
In accordance with some examples of the invention, the client further comprises one or more hardware devices. And the one or more applications may be coupled with the one or more hardware devices, wherein the coupling comprises communication of hardware values. The one or more applications may be configured to receive the hardware values from at least one of a driver on the client corresponding to the one or more hardware devices, a pseudo driver configured to receive the hardware values, an HAL layer and a library coupled with the one or more applications.
In other examples, the system further comprises one or more virtual machines to assist the one or more applications that cannot run natively on the one or more servers. And an instance of the one or more applications is created to facilitate user interaction. For example, the instance of the application created for the user interaction may begin at the beginning of the application, begin at the place at which the application was executing when the user interaction with the application was initiated or begin at the place in the application that is most relevant to the user request.
In another embodiment, the invention provides a method for generating a dynamic user interface comprising the steps of: executing one or more applications on one or more servers using one or more robots; and transmitting output of the one or more applications to one or more clients. The transmitted output of the one or more applications includes text, snapshots or a series of snapshots, partial or full time-lapsed visual and/or audio output of the one or more applications.
For some examples of the invention, the method further comprises the step of receiving one or more requests from the one or more clients. And the one or more requests comprise one or more online search queries, wherein the one or more requests comprise one or more requests for one or more applications for application rental purposes or for application purchase purposes. In accordance with certain examples, the step of transmitting output comprises transmitting output of the one or more applications relevant to the one or more requests, wherein relevance of the one or more applications is determined using in-app data. In yet other examples, the step of transmitting output of the one or more applications may be done live or near live as the output is being generated by the one or more applications. The method further comprises the step of storing in-app data, including output from the one or more applications, in the one or more databases. And the step of transmitting output of the one or more applications may be done by transmitting output stored in the one or more databases. The method may also comprise the step of supporting the execution of the one or more applications using one or more application servers. In one example, the method further comprises the step of storing in-app data, which includes data exchanged between the one or more applications and the one or more corresponding application servers, in one or more databases. In another example, the method further comprises the step of displaying output of the one or more applications on one or more media outputs.
The method may further comprise the step of displaying output of the one or more applications on one or more media outputs using one or more client applications according to certain examples of the invention. And in other examples, the method further comprises the step of allowing user interaction with the one or more applications via output of the one or more applications displayed by the client applications on the media output, and the step of coupling output of the one or more applications shown by the one or more client applications displayed on the one or more media outputs to the one or more corresponding applications. According to one example, the step of coupling output of the one or more applications shown by the one or more client applications displayed on the one or more media outputs to the one or more corresponding applications comprises communicating one or more coordinate and event tag pairs. And the method may further comprise the step of simulating physical motion that is required for interacting with the one or more applications by simulation based on user interaction with output of the one or more applications displayed by the one or more client applications on the one or more media outputs. In yet other examples of the invention, the method comprises the step of creating an instance of the application to enable user interaction with the instance of the application.
In yet another embodiment, the present invention provides a method for generating a dynamic user interface comprising the steps of: creating and initiating instances of an application on one or more servers; and coupling the instances of applications on one or more servers with one or more clients located remotely with respect to the one or more servers to enable user interaction with the one or more applications using the one or more clients.
Additional advantages of the invention will be set forth in part in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The advantages of the invention will be realized and attained by means of the elements and combinations particularly pointed out in the appended claims. It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the invention, as claimed.
Brief Description of the Drawings
FIG. 1 illustrates an output of an embodiment of the dynamic user interface of the present invention.
FIG. 2 illustrates an output of second embodiment of the dynamic user interface of the present invention in which user interaction with the interface is possible.
FIG. 3 illustrates a preferred embodiment of the system of dynamic user interface system of the present invention.
FIG. 4 illustrates a software and related hardware architecture of a preferred embodiment of the system of the dynamic user interface of the present invention.
FIGs. 5a and 5b illustrate process flows of a preferred embodiment of the method of the dynamic user interface of the present invention.
FIGs. 5c and d illustrate process flows of a second preferred embodiment of the method of the dynamic user interface of the present invention where no robots are required to execute applications.
FIG. 6 illustrates a preferred embodiment of the output of dynamic user interface showing output of an application related to taking photos using cameras.
Detailed Description of the Invention
An exemplary dynamic user interface of the present invention preferably comprises a method and system for generating an interface that displays information related to one or more applications, wherein, for an application, the dynamic user interface preferably displays output of the application including text, one or more images, one or more snapshots, part or full time-lapsed visual and/or audio of the applications as the applications are being executed without requiring users to download, install or execute the application. FIG. 1 illustrates an output of a preferred embodiment of the dynamic user interface of the present invention where at least a part of the output is preferably streamed from a remote server (e.g., such as a server 10 described and illustrated with reference to FIG. 3) on which the applications are being executed by robots. By displaying output of the applications as they are being executed, the dynamic user interface of the present invention provides information regarding the applications, including look and feel of the applications. In addition, in a preferred embodiment of the present invention, the dynamic user interface is configured to allow users to interact with the applications. In yet another preferred embodiment, the applications running on remote servers may be coupled with hardware devices (e.g., a sensor or an input device) located within user devices such that hardware values may be passed to the applications.
The dynamic user interface of the present invention is preferably capable of handling a variety of applications. For example, if an application comprises a mobile app, in addition to providing static text descriptions and/or static images relating to the mobile app, the dynamic user interface of the present invention preferably displays text, one or more snapshots or part or full time-lapsed visual and/or audio output of the mobile app as the app is being executed. In another example, if an application comprises a mobile game, in addition to providing static text descriptions and/or static images relating to the mobile game, the dynamic user interface of the present invention preferably displays text, one or more snapshots or part or full time-lapsed visual and/or audio output generated by the application when one or more robots operate the game as if the game was being played by a user on a smart device, and a user looking at the dynamic user interface can, therefore, be watching the mobile game as it is being played by the robot. In another example, if an application comprises a website with multiple web pages, in addition to providing static text descriptions and/or static images about the website, the dynamic user interface of the present invention preferably displays text, one or more snapshots or part or full time-lapsed visual and/or audio output of the website as if a user is clicking through various web pages of the website. As a person skilled in the art can appreciate, the present invention can be applied to any application whose output comprises text, moving images and/or audio such as videos, animations or multiple static images such as pictures or webpages.
In another example, the robots are not necessary for implementing the dynamic user interface of the present invention since some of the applications may already come with a subprogram or an activity that runs at least a part of the applications to illustrate to users how the applications are executed/operated (i.e., these kinds of applications can generate dynamic outputs automatically after the subprogram/activity is initiated). In this example, the system of the present invention only needs to initiate/activate this type of application and stream the output to the corresponding part of the dynamic user interface without requiring one or more robots to operate the application.
Furthermore, in a preferred embodiment, the dynamic user interface of the present invention not only provides output of the applications but also allows users to interact with the applications via the dynamic user interface without requiring the users to download, install or execute the applications.
FIG. 2 illustrates a preferred embodiment of the present invention that is configured for user interaction. As shown in FIG. 2, for application 900 displayed on the dynamic user interface of the present invention, the interface preferably allows users to input a text message in box 912 and submit that text message to the application without the need to download the application 900 locally. In another example, users can control map application 800 via the dynamic user interface of the present invention by clicking home button 918 or map button 920 directly on the interface. In this way, the dynamic interface of the present invention not only displays output of applications, but also provides users with the opportunity to interact with the applications via the dynamic user interface.
In another embodiment of the present invention, the dynamic user interface is configured to allow users to interact with applications displayed on the interface that require physical motion, such as rotating, tilting or shaking user devices, to control the applications. In this embodiment of the invention, these physical motions can preferably be simulated through user interaction with output of the applications displayed on the interface. Specifically, a user can interact with an application displayed on the dynamic user interface by dragging visual output of the application in one or more directions in order to simulate physical motion. As an illustration, if the application is a sports car driving game that allows a user to control the direction of the car in the game by physically tilting a game device left or right, dragging the output of the application to the right or left on the dynamic user interface allows the user to simulate this tilting motion. Similarly, if the game allows a user to utilize a shaking motion to interact with the game, within the dynamic user interface, a user can interact with the application by dragging the search result left and right in quick succession to simulate a shaking motion and thereby interact with the search result.
In another preferred embodiment, the dynamic user interface of the present invention is configured to couple with hardware devices of the device on which the interface of the present invention is displayed and use hardware values generated by the hardware devices as input for the application in question. For example, the application in question is preferably capable of receiving actual or simulated coordinates (or geographic location) from the user device on which the interface of the present invention is displayed via a GPS module of the device, or even simulated location information via such means as an IP2Location function or other functions capable of finding geo-locations by IP addresses. In another example, the application is capable of gathering and receiving changes in orientation of the interface of the present invention, or of the device on which the interface of the present invention resides, and rotating accordingly to help users better control/operate the application. If there are multiple applications that require physical control, the user interface can be configured to focus on a specific application by, for example, highlighting the application's UI or output while concealing those of other applications shown on the dynamic user interface. The dynamic user interface of the present invention is equipped with the ability to communicate intelligently with the device in an attempt to provide users with the same experience as if the search results were installed locally on the device.
The dynamic user interface of the present invention is particularly useful in the context of providing search results. Currently, search results provided by popular search engines such as Google and Yahoo are limited to static data and images. In addition to static data and images as provided by conventional search services, the dynamic user interface of the present invention is capable of providing text, one or more snapshots or part or full time-lapsed visual and/or audio output of the search results as the search results are being executed by robots. Moreover, because the present invention has access to data generated by the applications while the applications are running (such dynamic data is otherwise termed "in-app data"), the search may be based on the in-app data in addition to the static description of a search result. By performing the search based on in-app data, there is a richer pool of data from which to perform the search, resulting in more accurate search results compared to conventional search methods and systems. More details regarding the in-app search method and system are described in U.S. Patent App. No. 13960779. Furthermore, in one embodiment of the present invention, the dynamic user interface of the present invention allows users to interact with application outputs displayed on the search result interface, as illustrated in FIGs. 1 and 2. By applying the present invention to the search context, the present invention is able to provide more accurate searches, more information and a much better idea of the look and feel of the search results than conventional search results, without requiring users to download, install or execute the applications.
In addition, the dynamic user interface of the present invention is also potentially valuable for application purchase purposes or application rental purposes. Specifically, the present invention can be applied to mobile app marketplaces such as the Android App stores where the present invention allows users to experience the applications before purchasing and/or downloading them. Furthermore, the present invention can also be useful for businesses that wish to rent out applications rather than sell applications where, rather than downloading applications, users can use the rented applications via the dynamic user interface of the present invention. As one in the art can appreciate, the present invention can be used in a plethora of business contexts.
FIG. 3 depicts a preferred embodiment of a system for generating dynamic user interface of the present invention. As shown in FIG. 3, the system preferably comprises one or more servers 10, one or more clients 20 and/or one or more application servers 30.
Each server 10 preferably further comprises a supervisor 100, one or more robots 110, one or more applications 120, one or more processors 130 and/or one or more network interfaces 140. The supervisor 100 preferably comprises a software program that acts as a controller for the present invention. Each robot 110 preferably comprises a software program configured to run the one or more applications 120. Each application 120 preferably comprises software applications, online PC games, mobile apps, web browsers, etc.... The processor 130 is preferably configured to process data for components of the server 10 such as one of the supervisor 100, the robot 110, the applications 120, the network interface 140, a database 150 or one or more virtual machines 160. The database 150 preferably stores data when required by the system.
The system of the present invention preferably further comprises the one or more virtual machines 160 that are capable of assisting the applications 120 in running on the server 10 if any of the applications 120 are unable to run natively on an operating system of the server 10. It should be noted that the server 10 preferably comprises multiple virtual machines 160 so that the system 10 is capable of emulating a diversity of operating systems such as Android, Windows, iOS, UNIX, etc....
The application servers 30 preferably comprise servers that are configured to communicate with and/or support execution of corresponding application 120. For example, in one preferred embodiment of the present invention, an application 120 may comprise an online PC game; in this case, corresponding application server 30 preferably comprises a server that hosts the online PC game 120 and performs tasks such as but not limited to receiving data, providing data and/or processing data for the online PC game 120. In another preferred embodiment of the present invention, application 120 may comprise a mobile app; in this case, the corresponding application server 30 preferably comprises a server that hosts the mobile app 120 and performs tasks such as but not limited to receiving data, providing data and/or processing data for the corresponding mobile app 120. In another preferred embodiment of the present invention, application 120 may comprise a web browser capable of displaying websites; in this case, the corresponding application server 30 preferably comprises a web server that hosts websites and performs tasks such as but not limited to receiving data, providing data and/or processing data for the web browser application 120. In another preferred embodiment of the present invention, application 120 may be a stand-alone software application that does not require any application server 30 to operate so that no corresponding application servers 30 are required in the system of this example.
The one or more clients 20 comprise one or more input modules 210, one or more network interfaces 220, one or more processors 230, one or more media outputs 240, one or more hardware devices 250 and/or one or more client applications 260. The one or more input modules 210 are configured to receive inputs such as user requests. The input module 210 may comprise an onscreen keyboard, a physical keyboard, a handwriting input module, a voice input such as a microphone or a combination thereof. The one or more network interfaces 220 allow communications between the server 10 and the client 20. The media output 240 is preferably capable of outputting text, visual and/or audio information such as output received from the server 10 related to the one or more applications 120.
In addition, in a preferred embodiment of the present invention wherein user interaction with the application is allowed, the media output 240 is also capable of receiving input from a user. For example, the media output 240 is capable of detecting position of a pointing device such as a cursor or, in case of a touch sensitive screen, position where a user makes physical contact with the media output 240 such as the tip of a stylus or a finger.
Moreover, in this preferred embodiment, the client 20 preferably further comprises one or more hardware devices 250 comprising (but not limited to) a camera, a microphone, a GPS, an accelerometer, a gyroscope, a light sensor, a thermometer, a magnetometer, a barometer, a proximity sensor, a hygrometer, an NFC, a loudspeaker or an image sensor. In one example, the hardware devices 250 are preferably configured to sense environmental values such as images, sounds, acceleration, ambient temperature, rate of rotation, ambient light level, geomagnetic field, ambient air pressure, proximity of an object relative to the view screen of the device, the relative ambient humidity, coordinates (GPS/AGPS module), etc.... The system of the present invention is preferably configured to allow communication between the hardware devices 250 and the applications 120 including these environmental values. A process flow for such communication is explained below in connection with FIG. 5b. In one example, the client application 260 can preferably comprise or be configured to couple with a software application, such as a web browser or a customized software application (app) including or coupled with the media output 240, capable of displaying output of the dynamic user interface of the present invention.
FIG. 4 illustrates a preferred software and related hardware architecture of the server 10 and the client 20 in a preferred embodiment of the present invention in which users are able to interact with applications displayed in the dynamic user interface. Referring to FIG. 4, the architecture of the server 10 preferably comprises the virtual machine 160 (if required by the application 120), a kernel 410, a hardware abstraction layer 420, one or more libraries 430, an application framework 440, applications 450 and/or a pseudo driver 460. An architecture of the client 20 preferably comprises the hardware devices 250, a device driver 520, a memory 530, a hardware abstraction layer 540, one or more libraries 550, an application framework 560 and/or a client application 260.
The hardware abstraction layers (HAL) 420 and 540 preferably comprise pieces of software that provide applications 450 with access to hardware resources. It should be noted that, although HAL is a standard part of the Android operating system software architecture, HAL may not exist in exactly the same form, or exist at all, in other operating systems such as iOS and Windows. Therefore, alternative embodiments of the present invention in which client 20 runs on non-Android operating systems preferably comprise similar software architecture for handling hardware control and communication, such as the Windows Driver Model. In another preferred embodiment, HAL is not needed. The applications 450 preferably comprise the supervisor 100, the robots 110 and/or the applications 120. In one preferred embodiment, the client application 260 comprises a web browser. The libraries 430 and 550 as well as the application frameworks 440 and 560 preferably provide the software platform on which the applications 450 and 260 may run. As mentioned before, the virtual machine 160 may not be required if the application 120 is able to run natively on the server 10.
The memory 530 preferably comprises random access memory on which hardware values generated by the hardware devices 250 may be stored. The pseudo driver 460 is preferably software that converts hardware values received from client 20 into data that the application 120, or an API of the application 120, can understand, and transmits the converted data to the application 120 or the API. In a preferred embodiment, a pseudo driver can be configured to work with one set of hardware values (e.g., it could be configured to handle only GPS coordinates and pass the values to the API related to locations). In another preferred embodiment, the pseudo driver 460 can preferably be configured to handle multiple hardware devices 250 for one application 120 (i.e., it can be configured to convert/pass different kinds of hardware values from various hardware devices), thereby facilitating coupling between the application 120 and more than one hardware device. Pseudo drivers are described in further detail below in connection with FIG. 5b.
As one practiced in the art can appreciate, various components of the system may be combined into fewer servers or even one single computer. For example, the server 10 and any of the servers 30 may both reside on one machine. On the other end of the spectrum, various components of the server 10, the client 20 and/or the application server 30 do not need to necessarily reside within one single server or client, but may be located in separate servers and/or clients. For example, the database 150 of the server 10 can be located in its own database server that is separate but networked with the server 10. As another example, the media output 240 of the client 20 may not need to be a built-in screen but may be a separate stand-alone monitor networked to the client 20.
FIG. 5a is a flowchart illustrating a preferred embodiment of the method for generating the dynamic user interface of the present invention. In step 1000, the supervisor 100 preferably initiates the one or more robots 110 to run the one or more applications 120. Robots 110 are preferably programmed to mimic user behavior to automatically execute the applications. In another preferred embodiment, robots 110 can be configured to randomly operate applications 120 by randomly probing UIs of the applications 120. This type of robot 110 is suitable for operating a wide variety of applications 120. Examples of various embodiments of robot(s) 110 are described in U.S. Patent App. No. 13960779. In one preferred embodiment of the invention, the robot 110 comprises a software program that uses preprogrammed logic to run the applications 120. In another preferred embodiment of the invention, the robot 110 comprises a software program that uses OCR logic to control the applications 120. In yet another preferred embodiment of the invention, the robot 110 comprises a software program that operates the applications 120 according to pre-recorded human manipulation of the applications 120, including using logic learnt from human manipulation of the applications 120.
For example, in the preferred embodiment of the present invention where users are able to interact with the applications displayed in the dynamic user interface, when a user operates a search result by running and controlling it via the dynamic user interface, the user behavior (e.g., a click on the interface that simulates a "touch/tap" event or a drag movement that simulates a "sliding/moving" event) is preferably detected by the client application 260 and/or the one or more hardware devices 250 and transmitted back to the supervisor 100 and/or the robot 110 to be recorded to form a script (or a part of a programming code) to help robots operate/control the same application later using the script. Through this teaching/machine-learning mechanism, the robots 110 preferably become more "intelligent" by learning to behave more like humans. Accordingly, output of the applications 120 shown on the dynamic user interface will be more meaningful to users since the robots 110 become more "human-like." In a fourth preferred embodiment of the invention, the robot 110 comprises a software program that operates the applications 120 according to a combination of two or more of the four types of logic described.
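A minimal sketch of this teaching mechanism is shown below: user events observed through the dynamic user interface are recorded as a script that a robot 110 could replay later. The event format, class names and callback are assumptions chosen for illustration.

```python
# Hypothetical sketch: record user events as a script, then replay them as a
# robot would. Inter-event timing is recorded but not reproduced here.
import json
import time

class EventRecorder:
    def __init__(self):
        self.script = []

    def record(self, tag: str, x: int, y: int):
        self.script.append({"t": time.time(), "event": tag, "x": x, "y": y})

    def dump(self) -> str:
        return json.dumps(self.script)

def replay(script_json: str, send_event):
    """Feed recorded events back to an application, as a robot 110 might."""
    for step in json.loads(script_json):
        # a real robot would also reproduce inter-event timing; omitted here
        send_event(step["event"], step["x"], step["y"])

recorder = EventRecorder()
recorder.record("TOUCH", 33, 66)   # e.g., a tap captured by client app 260
recorder.record("DRAG", 40, 66)
replay(recorder.dump(), lambda e, x, y: print(f"{e}[{x}, {y}]"))
```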
In step 1010, the supervisor 100 or the robot 110 preferably determines whether each of the applications 120 is capable of running natively on the operating system of the server 10 or needs to be executed on the virtual machine 160 in step 1020. In a preferred embodiment, if it is already known that applications 120 can be natively run on a specific OS other than the original OS run on the server 10, one can skip step 1010 and go directly to step 1020 to allow the applications 120 to be executed on the corresponding OS on the virtual machine 160. In steps 1030 and 1040, if needed, the applications 120 connect to the one or more corresponding application servers 30 in order to run properly. In step 1050, the applications 120 output data, including but not limited to text, visual and/or audio output. In step 1060, the supervisor 100 determines whether or not to store the data output from the applications 120 in the database 150. If storage is required, the data output by the applications 120 is preferably stored in the database 150 in step 1070.
In one preferred embodiment, data stored within the database 150 comprises output of the one or more applications 120 as they are being executed by the robot 110. The output preferably comprises text, visual and/or audio output. In another preferred embodiment of the present invention, data stored within the database 150 comprises data transmitted between the application 120 and its corresponding application server 30. In yet another preferred embodiment of the present invention, data stored within the database 150 comprises both types of data described.
In one preferred embodiment of the present invention, the supervisor 100 decides to store all output from the applications 120, as well as communication between the applications 120 and the corresponding application servers 30, in their entirety in the database 150. In another preferred embodiment of the present invention, the supervisor 100 may decide to store only partial data. This may preferably be done for reasons such as to conserve server 10 resources, including processing power and/or storage space. For example, if one of the applications 120 in question comprises a full length movie, rather than storing the entire movie, the system of the present invention stores only a series of snapshots of the movie, a short snippet of the movie, a series of short snippets of the movie or a full length version of the movie in lower resolution quality.
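One way to realize such a partial-storage policy is simple frame sampling; the sketch below keeps one snapshot every N frames. The sampling interval is an assumed tunable, not a value given in the specification.

```python
# Hypothetical partial-storage policy: store every N-th frame of application
# output instead of the full stream, trading fidelity for storage space.
def sample_snapshots(frames, every_n=30):
    """At 30 fps, every_n=30 keeps roughly one snapshot per second."""
    return [frame for i, frame in enumerate(frames) if i % every_n == 0]

frames = [f"frame-{i}" for i in range(120)]   # stand-in for movie output
print(sample_snapshots(frames))               # 4 snapshots instead of 120 frames
```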
In a preferred embodiment, steps 1000 to 1070 are repeated continuously. In another preferred embodiment, steps 1000 to 1070 are repeated only periodically or on an as-needed basis. For example, the present invention would run only if there is a user, if a user requests information that is not available in the database 150 or if there are adequate system resources, etc.... In step 2000, a user of the system of the present invention can generate a request via input module 210. In one preferred embodiment of the present invention, if the input module 210 comprises a physical keyboard, requests may be generated by typing certain instruction(s)/keyword(s). In another preferred embodiment of the present invention, if the input module comprises a voice input device such as a microphone, requests may be generated after receiving an audio form of the instruction(s)/keyword(s) and recognizing the audio form to generate the request.
The client application 260 transmits the request to the supervisor 100 in step 2010 via network interfaces 220 and 140. The supervisor 100 receives that request in step 2020. In step 2030, the supervisor 100 identifies the applications 120 that are relevant to the request. In a preferred embodiment of the present invention, data stored within the database 150 may be used by the supervisor 100 to determine relevance of one of the applications 120 to a particular request. As described above, data stored within the database 150 may comprise in-app data, which preferably comprises text, visual and/or audio output of the one of the applications 120 as the one of the applications 120 is running, as well as data communicated between the one of the applications 120 and its corresponding application server 30. Determination of relevance may be performed using a variety of algorithms, the simplest of which comprises matching words of the request to the underlying search data.
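The simplest matching algorithm mentioned above can be sketched in a few lines; the scoring scheme (counting request words found in stored in-app data) is an assumption chosen for illustration, since the specification only requires word matching.

```python
# Hypothetical relevance scoring: count occurrences of request words in the
# in-app data stored in database 150 for each application.
def relevance(request: str, in_app_text: str) -> int:
    words = set(request.lower().split())
    return sum(1 for token in in_app_text.lower().split() if token in words)

in_app_db = {
    "restaurant-x-app": "restaurant x menu location hours reservations",
    "car-game": "sports car driving tilt steering lap times",
}
request = "location of restaurant x"
ranked = sorted(in_app_db, key=lambda k: relevance(request, in_app_db[k]),
                reverse=True)
print(ranked)   # the restaurant app ranks first for this request
```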
In step 2040, the supervisor 100 decides to transmit output of the applications 120 that are relevant to the request in question to the client 20 via the network interfaces 140 and 220. In one preferred embodiment of the present invention, the supervisor 100 transmits output from the one of the applications 120 in its entirety. In another preferred embodiment of the present invention, the supervisor 100 is preferably configured to transmit only partial output. For example, in a preferred embodiment, the supervisor 100 is capable of limiting transmissions to the client 20 in order to conserve system resources such as processing power and/or storage space. Specifically, if the one of the applications 120 in question comprises a full length movie, rather than transmitting the whole movie, the supervisor 100 preferably limits transmission of output to the client application 260 to only a series of snapshots of the movie, a short snippet of the movie or a series of short snippets of the movie.
Upon receiving output of applications 120 relevant to the request in question in step 2050, the client application 260 displays the output on the media output 240 in step 2060. In one preferred embodiment of the invention, the media output 240 is configured to display one or more outputs from the one or more relevant applications 120. As mentioned above, the output preferably comprises text, audio, one or more snapshots and/or part or full time-lapsed visual and/or audio output of applications 120 as the applications 120 are being executed, live or in near real time. In another preferred embodiment of the present invention, the data streamed from server 10 is not live or near live but, rather, sourced from data stored previously in database 150.
FIG. 5b illustrates a preferred method of the present invention where a user is able to interact with one or more applications 120 from the dynamic user interface. To facilitate this interaction, in step 3000, the supervisor 100 preferably couples to client application output displayed on the media output 240 and/or the input module 210 so that the supervisor 100 is able to detect if a user initiates interaction with a particular one of the applications 120. This preferably comprises mapping output of the client application 260 displayed on the media output 240 using coordinates. For example, if the media output 240 comprises a screen with 480x800 resolution, Cartesian coordinates [33, 88] preferably indicate a location at the 33rd pixel across and the 88th pixel down from the top-left pixel of the window displaying an activity of the application 120. By providing information such as coordinates corresponding to the center of client application 260 and/or XY-coordinate pairs, e.g., [33, 88] and [109, 208], to specify positions of the top-left corner and the bottom-right corner of the view to be generated, output of the client application 260 displayed on the media output 240 may be coupled to the supervisor 100. It should be noted that coordinate systems other than Cartesian coordinate systems may be used as required.
In another preferred embodiment, coupling output of the client application 260 displayed on the media output 240 preferably further comprises use of one or more event tags to indicate an event associated with the coordinates. For example, "TOUCH[33, 66]" represents a user clicking on or touching the screen location addressed [33, 66] in pixels on the media output 240. Those skilled in the art understand that the coordinates and/or event tags may require proper translation or conversion for the supervisor 100, for such reasons as the supervisor 100 being configured for a different resolution than that of the media output 240, or only part of the media output 240 being used for displaying the dynamic user interface. By providing coordinates and event tags, the supervisor 100 is preferably able to detect when user action indicates that the user wishes to initiate interaction with a particular one of the applications 120 displayed on the dynamic user interface of the present invention.
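For illustration, a coordinate-and-event-tag pair such as "TOUCH[33, 66]" could be parsed and rescaled as in the sketch below; the tag grammar and the target resolution are assumptions consistent with the 480x800 example above.

```python
# Hypothetical parsing of an event tag plus resolution conversion between the
# media output 240 (e.g., 480x800) and the resolution supervisor 100 expects.
import re

def parse_event(tag: str):
    match = re.fullmatch(r"([A-Z]+)\[(\d+),\s*(\d+)\]", tag)
    if match is None:
        raise ValueError(f"unrecognized event tag: {tag}")
    return match.group(1), int(match.group(2)), int(match.group(3))

def rescale(x, y, src=(480, 800), dst=(1080, 1920)):
    """Map client pixel coordinates into the target resolution."""
    return round(x * dst[0] / src[0]), round(y * dst[1] / src[1])

event, x, y = parse_event("TOUCH[33, 66]")
print(event, rescale(x, y))   # TOUCH (74, 158)
```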
The user preferably initiates interaction with the applications 120 in step 3010, which can be done in a variety of ways. For example, if the media output 240 comprises a touch sensitive screen, a user preferably makes physical contact with output of a particular one of the applications 120 displayed on media output 240, such as with the tip of a finger. In another preferred embodiment of the present invention, a user can interact with the one of the applications 120 with an onscreen cursor by aiming and clicking a mouse on the output of the particular one of the applications 120 displayed on the media output 240. In both cases, a set of coordinates and touch event tags are sent to the supervisor 100 to indicate that a user wishes to interact with the one of the applications 120 corresponding to the location of the coordinates. In yet another preferred embodiment, if the input module comprises a keyboard, users can simply hit specific keys, such as number 9, in order to initiate interaction with the one of the applications 120 corresponding to number 9.
Next, in step 3020, supervisor 100 creates a new instance of application 120 specifically to interact solely with that user. Since this new instance of the application is preferably manually operated by the user, there is no need to connect it to a robot 110. If there is more than one user wanting to interact with one application 120, multiple instances of that application 120 can be created on the server 10 such that each user can interact with the instance of the application 120 independently. It should be noted that, if application 120 requires a virtual machine 160 to run, in one preferred embodiment, the new instance of application 120 can be created within the same virtual machine 160 as other instances of application 120. In another preferred embodiment, the new instance of application 120 can be created running within its own new virtual machine 160 that is not shared with any other instances of application 120.
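A per-user instance manager along these lines might look like the sketch below; whether instances share a virtual machine 160 is reduced to a single policy flag, and all names are invented for illustration.

```python
# Hypothetical per-user instance creation (step 3020). shared_vm=True places
# all instances of an application in one VM; False gives each its own VM.
import itertools

vm_ids = itertools.count(1)

def create_instance(app: str, user: str, vm_pool: dict, shared_vm: bool = True):
    vm_key = app if shared_vm else f"{app}/{user}"
    if vm_key not in vm_pool:
        vm_pool[vm_key] = f"vm-{next(vm_ids)}"
    return {"app": app, "user": user, "vm": vm_pool[vm_key]}

pool = {}
alice = create_instance("map-app", "alice", pool)
bob = create_instance("map-app", "bob", pool)
print(alice["vm"] == bob["vm"])   # True: one shared VM for both instances
```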
When a new instance of application 120 has been created, the new instance of the application 120 is preferably initialized from the beginning as if the user just started the application. For example, if the application in question is a game, the new instance of the application 120 can start running at the very beginning of the game so that the user can experience the game from the beginning. In another example, if the application 120 is a mobile app that offers the users the ability to listen to streamed audio, the new instance of mobile app can start running at the first default page where users can browse through different categories of music.
Alternatively, when a new instance of the application 120 has been created, the new instance of the application 120 can run from exactly or approximately where the user requested interaction with the application 120. For example, if the application 120 in question is a mobile app that displays an animation or video, the new instance of the mobile app can start running at exactly or approximately the point the animation had played to when the user requested interaction with that application 120. In another example, if the application 120 in question is a mobile app including a plurality of activities or pages, one of which relates to the introduction of a new car, and the request is regarding information such as a location to buy the new car, the new instance of application 120 can bring the user directly to the particular page of the multi-page mobile app that shows the location information of a reseller of the new car. In yet another preferred embodiment of the present invention, when a new instance of the application 120 has been created, the new instance of the application 120 can start running at the part of the application 120 that is most relevant to the associated request. For example, if the application 120 in question is a multi-webpage website for restaurant X and the request is regarding the location of restaurant X, the robot can bring the new instance directly to the particular webpage of the multi-webpage website that shows the location information of restaurant X. Similarly, if the application 120 in question is a multi-screen mobile app for restaurant X and the request is regarding the location of restaurant X, the robot can bring the new instance of the app to the particular screen of the multi-screen mobile app that shows the location information of restaurant X.
Next, in order to facilitate user interaction with application 120, client 20 is preferably coupled with application 120 running on server 10. This preferably comprises coupling media output 240 as well as hardware devices 250 to application 120.
In one preferred embodiment, in step 3030, as with coupling to supervisor 100, coupling output of client application 260 displayed on media output 240 to application 120 preferably comprises mapping using coordinates and/or event tags. As with coupling to supervisor 100, those skilled in the art understand that the coordinates may require proper translation or conversion for application 120, for reasons such as that application 120 may be configured for a different resolution than that of media output 240 and/or output of application 120 occupies only a portion of the output of client application 260 on media output 240. Coupling output of client application 260 to application 120 allows application 120 to receive user actions on media output 240.
When output of client application 260 displayed on media output 240 is coupled to application 120, the dynamic user interface of the present invention can be configured to allow users to interact with an application that requires physical motion to control the application. In an example mentioned earlier, a user can preferably interact with or operate and control an application displayed on the dynamic user interface by dragging visual output of application 120 in one or more directions in order to simulate physical motion. In one preferred embodiment, the dragging motion preferably causes a series of "DRAG[X, Y]" coordinate and event tag pairs to be generated, where changes in X and Y values can be interpreted by the corresponding application 120 as a particular direction. In an alternative embodiment, client application 260 may convert the series of "DRAG[X, Y]" coordinate and event tag pairs to data that application 120 can interpret as a direction for properly interacting with application 120.
As an example of an application using this dragging mechanism, if the application is a sports car driving game that allows a user to control the direction of the car in the game by physically tilting a game device left or right, dragging the output of the application to the right or left on the dynamic user interface allows the user to simulate this tilting motion in application 120. Similarly, if the game allows a user to utilize a shaking motion to interact with the game, within the dynamic user interface, a user can interact with the search result by dragging it left and right in quick succession to simulate a shaking motion and thereby interact with application 120.
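The drag-to-motion translation described above could be implemented roughly as follows; the direction-flip threshold for detecting a shake is an assumption, not a value from the specification.

```python
# Hypothetical interpretation of a series of DRAG[X, Y] pairs as a simulated
# tilt (net horizontal drag) or shake (rapid left/right direction changes).
def interpret_drags(points):
    deltas = [b[0] - a[0] for a, b in zip(points, points[1:])]
    flips = sum(1 for d1, d2 in zip(deltas, deltas[1:]) if d1 * d2 < 0)
    if flips >= 2:
        return "SHAKE"          # threshold of two direction changes is assumed
    return "TILT_RIGHT" if sum(deltas) > 0 else "TILT_LEFT"

print(interpret_drags([(100, 0), (140, 0), (180, 0)]))           # TILT_RIGHT
print(interpret_drags([(100, 0), (60, 0), (120, 0), (50, 0)]))   # SHAKE
```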
In another preferred embodiment of the present invention, hardware devices 250 of client 20 can also be coupled with application 120. Coupling hardware devices 250 to application 120 preferably comprises step 3040, where client 20 obtains hardware settings required by application 120.
A hardware device setting is preferably a set of values used to configure the hardware devices 250 required by application 120. In one preferred embodiment of the present invention, if client 20 has a total of eight hardware devices, then the hardware setting can preferably take the form of eight digits (each having a value of "0" or "1") to represent the hardware value requirements of application 120. As an illustration, a hardware setting of [0, 1, 1, 0, 0, 0, 0, 0] may be used to indicate that the 2nd and 3rd driver/hardware devices are required and should be redirected from client 20 to application 120. The hardware setting of application 120 can be obtained by analyzing application 120. In the example of an Android app, the AndroidManifest.xml file indicates how many activities, intent filters, etc. are needed for executing the app and therefore also provides the hardware requirements of the app. In this example, each app executed on the virtual machine can have at least one hardware setting.
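The eight-digit setting can be encoded and decoded as a plain bit list, as in the sketch below; the device ordering is an assumption, since the specification fixes only the one-flag-per-device idea.

```python
# Hypothetical encoding of the hardware setting described above: one 0/1 flag
# per hardware device 250, in an assumed fixed device order.
DEVICES = ["camera", "microphone", "gps", "accelerometer",
           "gyroscope", "light_sensor", "thermometer", "magnetometer"]

def encode(required: set) -> list:
    return [1 if device in required else 0 for device in DEVICES]

def decode(setting: list) -> list:
    return [device for device, bit in zip(DEVICES, setting) if bit]

setting = encode({"microphone", "gps"})
print(setting)           # [0, 1, 1, 0, 0, 0, 0, 0]: 2nd and 3rd devices needed
print(decode(setting))   # ['microphone', 'gps']
```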
Upon receiving the hardware settings, supervisor 100 preferably initiates and couples the required hardware devices 250 to application 120 in step 3050. This step preferably involves use of pseudo driver 460 and driver 520. Alternatively, application 120 can be coupled with a plurality of drivers regardless of application 120's hardware setting, with the hardware value transmitted to application 120's environment from the driver selected by the hardware setting. Those skilled in the art understand that the driver(s) do not need to run all the time. They can be configured to initiate after receiving the hardware setting, and can be stopped when they are no longer needed by application 120, or client 20 can turn them off when the user switches to another application 120.
Once hardware devices 250 are coupled with application 120, when a hardware device 250 is triggered in step 3060, memory 530 receives one or more hardware values from the driver 520. Client 20 then transmits the hardware values to application 120 by way of HAL 420 of server 10 in step 3070. Pseudo driver 460 receives the hardware values, converts the hardware values into a format appropriate for application 120 and transmits the converted hardware values to application 120 for processing.
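A pseudo driver's conversion step can be sketched for the GPS case: raw degrees-and-decimal-minutes text from the client driver is converted into the decimal-degree form an application typically consumes. The raw input format and the output dictionary are illustrative assumptions.

```python
# Hypothetical pseudo driver 460 for GPS: convert a raw "lat,lon" value in
# degrees and decimal minutes (e.g., 2503.71 = 25 deg 03.71 min) into
# decimal degrees suitable for application 120.
def pseudo_driver_gps(raw: str) -> dict:
    def to_decimal_degrees(value: float) -> float:
        degrees, minutes = divmod(value, 100)
        return degrees + minutes / 60

    lat_txt, lon_txt = raw.split(",")
    return {"latitude": to_decimal_degrees(float(lat_txt)),
            "longitude": to_decimal_degrees(float(lon_txt))}

print(pseudo_driver_gps("2503.71,12130.44"))
# {'latitude': 25.0618..., 'longitude': 121.5073...}
```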
In another preferred embodiment of the present invention, there is no need to implement the pseudo driver; hardware values can be transmitted to HAL 420 and passed to application 120 directly. For example, when a user uses the dynamic user interface to operate a navigation app, the navigation app is actually running without directly coupling to a real GPS/AGPS module. The GPS signal generated on client 20 cannot be transmitted to the navigation app 120 because the dynamic user interface itself is an application and may not be configured to receive hardware values. (For an Android app, for example, the programmer is required to write a line in the program code to load a class called "GpsSatellite" in the android.location package to get the coordinates before packing the application package file, e.g., an .apk file in Android. Since it is impossible to predict or limit the applications that can be run on the dynamic user interface, it is difficult to know all the functions provided by all possible hardware.) In one example of the present invention, the dynamic user interface may by default load every class for servicing. In another example of the present invention, the dynamic user interface can dynamically configure itself to load specific kinds of classes or to couple with particular hardware after receiving the relevant hardware values (i.e., a program implemented in the dynamic user interface responds to the hardware values by loading the corresponding classes).
In one preferred embodiment, drivers corresponding to hardware devices 250 can be coupled with application 120 continuously so that corresponding hardware value(s)/hardware-generated file(s) can be sent to application 120 whenever required via step 3050. In another preferred embodiment, hardware values can be buffered in memory 530 when there is no network service and only transferred to application 120 when the network service resumes.
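Such buffering behaves like a simple queue that drains whenever the network is available, as in the sketch below; the class name and the send callback are invented for illustration.

```python
# Hypothetical buffer for hardware values: hold values in client memory while
# offline and flush them to application 120 when network service resumes.
from collections import deque

class HardwareValueBuffer:
    def __init__(self, send):
        self.queue, self.send, self.online = deque(), send, False

    def push(self, value):
        self.queue.append(value)
        self._flush()

    def set_online(self, online: bool):
        self.online = online
        self._flush()

    def _flush(self):
        while self.online and self.queue:
            self.send(self.queue.popleft())

buffer = HardwareValueBuffer(send=lambda v: print("sent", v))
buffer.push({"accelerometer": (0.0, 9.8, 0.1)})   # held: network is down
buffer.set_online(True)                           # resumes: value is sent
```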
In a preferred embodiment of the present invention, hardware values comprise images, sounds, acceleration, ambient temperature, rate of rotation, ambient light level, geomagnetic field, ambient air pressure, proximity of an object relative to the view screen of the device, relative ambient humidity, coordinates (from a GPS/AGPS module), etc. from hardware such as cameras, microphones, accelerometers, thermometers, gyroscopes, magnetometers, barometers, proximity sensors, hygrometers, etc. of client 20.
As illustrated by the description of the method of the present invention above, the supervisor 100 performs many tasks. It should be noted that the supervisor 100 can be structured as one single software module, or the supervisor 100 can comprise several modules divided up by the various functions the supervisor 100 performs. For example, there can be a module of the supervisor 100 specifically for transmitting output of the applications 120, a different module to control database storage, and yet another module for creating instances of applications, etc. In addition, in alternative embodiments where certain tasks performed by supervisor 100 are not required, those modules of supervisor 100 are not included in the method and system of the present invention. For example, if outputs of the application 120 are always transmitted to client 20 in their entirety, then there is no need for a module of the supervisor 100 for deciding what to send to client 20. In the extreme, if most tasks to be performed by supervisor 100 are not required, then the remaining functions of the supervisor 100 may be folded into other components of the method and system of the present invention, such as robot 110. For example, if a request from a particular user always refers to one and only one application, the output is always the entirety of the application and there is no need for user interaction, then the control and decision-making functions of supervisor 100 are not needed. Instead, robot 110 can handle receipt of the request in step 220 and transmission of the requested application 120 in steps 230 and 240.
As an illustration, Figures 4, 5b and 6 may be used to describe a preferred embodiment where a user uses a camera as a hardware device 250 to take a photo for application 120. Note that in Figure 5, the dashed lines represent function calls or instructions (calling/instructing the hardware or the corresponding API(s)), and the solid lines represent real data transfers, e.g., the picture or any other kind of hardware value.
In step 3000, supervisor 100 preferably couples to output of client application 260 on media output 240 and listens for user action. In step 3010, to initiate interaction with application 120, a user preferably selects application 120 displayed on the dynamic user interface of the present invention, which sends coordinates and user events to supervisor 100. Upon receiving the coordinates and touch event, supervisor 100 initiates application 120. In step 3020, supervisor 100 creates a new instance of application 120 specifically for the user. Next, in step 3030, supervisor 100 couples output of client application 260 displayed on media output 240 to application 120 using coordinates and event tags. In step 3040, supervisor 100 receives the hardware settings of application 120, in which one of the hardware devices 250 required would be a camera. In step 3050, supervisor 100 couples the camera's driver to application 120. This may require initiating driver 520 and pseudo driver 460. In step 3060, hardware 250 is triggered. This may involve a user hitting a button corresponding to a camera in the visual image of the application displayed on the dynamic user interface, such as button 610 of Figure 6. After receiving the touch event, supervisor 100 recognizes the touch event and applies it to application 120. After any proper conversion for resolution differences, application 120 recognizes that the user wishes to take a picture, since the touch event coordinates indicate that the touch occurred within the camera trigger button. In an alternative embodiment of the present invention, application 120 may send a set of coordinates to configure an area 620 of media output 240 that corresponds with a button for triggering the camera, as shown in FIG. 6. In that case, the step of transmitting a set of coordinates back to application 120 is not required because client application 260 may be configured to recognize the location where the user touched the screen to generate the initial touch event. In one example, a view corresponding to a button can be generated locally by the dynamic output interface, and thus its resolution can be fixed locally. However, the rest of the screen of the dynamic user interface is for displaying output from application 120, and its resolution can be configured to adjust to the bandwidth of the network (e.g., becoming 1080p when the bandwidth is high and 360p when the bandwidth is low).
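The resolution conversion and button hit-testing mentioned above could be sketched as follows. This is purely illustrative; the class name, the linear scaling, and the rectangular trigger area are assumptions rather than the disclosed method:

```java
// Hypothetical sketch: map a touch at (x, y) on the client display into the
// coordinate space of application 120 before testing it against a trigger
// button such as button 610/620.
public final class CoordinateMapper {
    private final int clientWidth, clientHeight;
    private final int appWidth, appHeight;

    public CoordinateMapper(int clientWidth, int clientHeight,
                            int appWidth, int appHeight) {
        this.clientWidth = clientWidth;
        this.clientHeight = clientHeight;
        this.appWidth = appWidth;
        this.appHeight = appHeight;
    }

    // Simple proportional scaling between the two resolutions.
    public int[] toAppCoordinates(int x, int y) {
        int appX = x * appWidth / clientWidth;
        int appY = y * appHeight / clientHeight;
        return new int[] { appX, appY };
    }

    // True if the converted touch falls inside a rectangular trigger area
    // defined by its left-top and right-bottom corners in app coordinates.
    public boolean hits(int x, int y, int left, int top, int right, int bottom) {
        int[] p = toAppCoordinates(x, y);
        return p[0] >= left && p[0] <= right && p[1] >= top && p[1] <= bottom;
    }
}
```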
Once the camera has been triggered, the hardware value is sent to memory 530 in step 3070. The hardware value preferably comprises one or more images taken. In step 3080, this data is then sent to application 120 via HALs 420 and 540.
In another preferred embodiment of the present invention, it is not required for robots to run applications 120. Instead, a user can simply request the application 120 he or she wishes to use and, in response, server 10 creates a new instance of the application 120 for the user without the need to first show the user output of the application 120 as it is being run by robot 110. Accordingly, the system corresponding to this particular embodiment as shown in Figure 3 would not require robot 110. Figures 5c and 5d illustrate the process for this preferred embodiment of the present invention.
In step 4000, a user enters a request via input module 210. Client 20 then transmits the request to server 10 in step 4010. Next, upon receiving the request in step 4020, supervisor 100 determines the application 120 relevant to the request in step 4030. In step 4040, supervisor 100 creates a new instance of the application 120. In creating this new instance of the application 120, supervisor 100 determines whether the application 120 requires a virtual machine as well as application server 30 to run properly in steps 4050-4080. Once the new instance of the application 120 has been created, the application 120 initiates and begins to generate output in step 4090. It should be emphasized that the output of the application 120 at this point is related to normal initiation of the application 120, such as showing the starting screen of the application 120, and is not caused by execution of robots 110. In step 4100, supervisor 100 transmits the output to client 20, which client 20 receives in step 4110. In step 4120, client 20 displays the output of the application 120 transmitted from server 10 via client application 260 displayed on media output 240.
Next, in step 4130, supervisor 100 couples output of client application 260 displayed on media output 240 to application 120 using XY coordinates and/or event tags to allow interaction between the user and the application 120 via the output of the application 120 shown by client application 260 displayed on media output 240. If hardware is required for interaction with the application 120, client 20 obtains the hardware settings from application 120. In step 4150, supervisor 100 couples the required hardware device 250 to the application 120 using driver 520, HAL 530, pseudo driver 460 and HAL 420. Once the application 120 and hardware devices 250 are coupled, it is then possible to trigger hardware device 250 in step 4160. In step 4170, hardware device 250 generates hardware values which can be passed back to the application 120 via pseudo driver 460, which converts the hardware values to a form that can be processed by the application 120. With the application 120 fully coupled to client 20, the user can now use the application 120 via client 20 as if the application 120 were running on client 20, without having to actually download, install and/or execute the application 120 on client 20.
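The conversion role of the pseudo driver could be sketched as below. The interfaces are hypothetical and the pass-through conversion is a placeholder; real conversions would be device-specific (image decoding, unit scaling, coordinate framing):

```java
// Hypothetical sketch of the pseudo driver's role: take a raw hardware value
// received from client 20 over the network and hand the application instance
// a value in the form it expects.
public final class PseudoDriver {
    public interface ApplicationSink {
        void onHardwareValue(String deviceType, byte[] converted);
    }

    private final ApplicationSink application;

    public PseudoDriver(ApplicationSink application) {
        this.application = application;
    }

    // Entry point for values arriving from the client-side driver via the HALs.
    public void receive(String deviceType, byte[] rawValue) {
        byte[] converted = convert(deviceType, rawValue);
        application.onHardwareValue(deviceType, converted);
    }

    private byte[] convert(String deviceType, byte[] rawValue) {
        // Placeholder: a real implementation would branch on deviceType
        // (camera, GPS, accelerometer, ...) and reformat accordingly.
        return rawValue;
    }
}
```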
It will be appreciated by those skilled in the art that changes could be made to the examples described above without departing from the broad inventive concept thereof. It is understood, therefore, that this invention is not limited to the particular examples disclosed, but it is intended to cover modifications within the spirit and scope of the present invention as defined by the appended claims.
Further, in describing representative examples of the present invention, the specification may have presented the method and/or process of the present invention as a particular sequence of steps. However, to the extent that the method or process does not rely on the particular order of steps set forth herein, the method or process should not be limited to the particular sequence of steps described. As one of ordinary skill in the art would appreciate, other sequences of steps may be possible. Therefore, the particular order of the steps set forth in the specification should not be construed as limitations on the claims. In addition, the claims directed to the method and/or process of the present invention should not be limited to the performance of their steps in the order written, and one skilled in the art can readily appreciate that the sequences may be varied and still remain within the spirit and scope of the present invention.

Claims

What is claimed is:
1. A system for generating a dynamic user interface comprising:
One or more clients;
One or more servers networked with the one or more clients;
One or more applications residing in the one or more servers; and
One or more robots residing in the one or more servers, wherein the one or more robots are configured to execute the one or more applications and the one or more servers are configured to provide output of the one or more applications to the one or more clients as the one or more applications are being operated by the one or more robots.
2. The system of claim 1 further comprising a supervisor residing in the one or more servers.
3. The system of claim 1, wherein the output comprises text, snapshots or a series of snapshots, or partial or full time-lapsed visual and/or audio data taken from output of the one or more applications.
4. The system of claim 1 wherein the one or more servers receive one or more requests from the one or more clients.
5. The system of claim 4 wherein the request relates to an online search.
6. The system of claim 4 wherein the request relates to a request for one or more applications for application rental purposes.
7. The system of claim 4 wherein the request relates to a request for one or more applications for application purchase purposes.
8. The system of claim 4 wherein the output of the one or more applications transmitted to the client is from applications relevant to the one or more requests.
9. The system of claim 8, wherein relevance is determined using in-app data.
10. The system of claim 1 further comprising a database for storing in-app data, including the output from the one or more applications.
11. The system of claim 10 wherein the source of the output to the one or more clients comprises output stored in the database.
12. The system of claim 1 further comprising one or more application servers configured to support execution of the one or more applications.
13. The system of claim 1 further comprising a database for storing in-app data, including data exchanged between the one or more applications and corresponding application servers.
14. The system of claim 4 further comprising a media output residing in the client.
15. The system of claim 14 further comprising a client application residing in the client configured to display output of the one or more applications transmitted from the server via the media output.
16. The system of claim 15, wherein output of the applications shown by the client application displayed on the media output is configured to allow user interaction with the one or more applications via client application.
17. The system of claim 16, wherein output of the one or more applications shown by the client application displayed on the media output is coupled with the one or more corresponding applications.
18. The system of claim 17, wherein the coupling comprises communication of a coordinate and event tag pair.
19. The system of claim 16 further comprising means for simulating physical motion that is required for interacting with the one or more applications, based on user interaction with the output of the one or more applications displayed by the client applications on the media output.
20. The system of claim 1, wherein the client further comprises one or more hardware devices.
21. The system of claim 20, wherein the one or more applications are coupled with the one or more hardware devices.
22. The system of claim 21, wherein the coupling comprises communication of hardware values.
23. The system of claim 22, wherein the one or more applications are configured to receive the hardware values from at least one of a driver on the client corresponding to the one or more hardware devices, a pseudo driver configured to receive the hardware values, an HAL layer and a library coupled with the one or more applications.
24. The system of claim 1, further comprising one or more virtual machines to assist the one or more applications that cannot run natively on the one or more servers.
25. The system of claim 15, wherein an instance of the one or more applications is created to facilitate user interaction.
26. The system of claim 25, wherein the instance of the application created for the user interaction begins at the beginning of the application.
27. The system of claim 25, wherein the instance of the application created for user interaction begins at the place at which the application was executing when the user interaction with the application was initiated.
28. The system of claim 25, wherein the instance of the application created for user interaction begins at the place in the application that is most relevant to the user request.
29. A method for generating a dynamic user interface comprising the steps of:
Executing one or more applications on one or more servers using one or more robots; and
Transmitting output of the one or more applications to one or more clients.
30. The method of claim 29 wherein the transmitted output of the one or more applications comprises text, snapshots or a series of snapshots, or partial or full time-lapsed visual and/or audio output of the one or more applications.
31. The method of claim 29, further comprising the step of receiving one or more requests from the one or more clients.
32. The method of claim 31, wherein the one or more requests comprise one or more online search queries.
33. The method of claim 31, wherein the one or more requests comprise one or more requests for one or more applications for application rental purposes.
34. The method of claim 31, wherein the one or more requests comprise one or more requests for the one or more applications for application purchase purposes.
35. The method of claim 29, wherein the step of transmitting output comprises output of the one or more applications relevant to the one or more requests.
36. The method of claim 35, wherein relevance of the one or more applications is determined using in-app data.
37. The method of claim 29 wherein the step of transmitting output of the one or more applications is done live or near live as the output is being generated by the one or more applications.
38. The method of claim 29 further comprising the step of storing in-app data, including output from the one or more applications, in one or more databases.
39. The method of claim 38 wherein the step of transmitting output of the one or more applications is done by transmitting output stored in the one or more databases.
40. The method of claim 29 further comprising the step of supporting the execution of the one or more applications using one or more application servers.
41. The method of claim 40 further comprising the step of storing in-app data, including data exchanged between the one or more applications and the one or more corresponding application servers, in one or more databases.
42. The method of claim 29 further comprising the step of displaying output of the one or more applications on one or more media outputs.
43. The method of claim 42 further comprising the step of displaying output of the one or more applications on one or more media outputs using one or more client applications.
44. The method of claim 43 further comprising the step of allowing user interaction with the one or more applications via output of the one or more applications displayed by the client applications on the media output.
45. The method of claim 44 further comprising the step of coupling output of the one or more applications shown by the one or more client applications displayed on the one or more media outputs to the one or more corresponding applications.
46. The method of claim 45 wherein the step of coupling output of the one or more applications shown by the one or more client applications displayed on the one or more media outputs to the one or more corresponding applications comprises communicating one or more coordinate and event tag pairs.
47. The method of claim 45 further comprising the step of simulating physical motion that is required for interacting with the one or more applications, based on user interaction with output of the one or more applications displayed by the one or more client applications on the one or more media outputs.
48. The method of claim 44, further comprising the step of creating an instance of the application to enable user interaction with the instance of the application.
49. The method of claim 48, wherein the instance of the application created to enable user interaction begins at the beginning of the application.
50. The method of claim 48, wherein the instance of the application created to enable user interaction begins at the place at which the application was executing when the user interaction with the application was initiated.
51. The method of claim 48, wherein the instance of the application created for the user interaction begins at the place in the application that is most relevant to the user request.
52. The method of claim 29 further comprising the step of coupling the one or more applications to one or more hardware devices residing on the one or more clients.
53. The method of claim 52 wherein the step of coupling the one or more applications to one or more hardware devices residing on the one or more clients further comprises communication of one or more hardware values.
54. The method of claim 53 further comprising the step of the one or more applications processing the hardware values.
55. The method of claim 29 further comprising the step of executing the one or more applications using one or more virtual machines.
56. A system for generating a dynamic user interface comprising:
One or more clients;
One or more servers networked with the one or more clients;
One or more applications residing in the one or more servers; and
One or more client applications residing in the one or more clients;
wherein the one or more client applications are coupled with the one or more applications residing in the server in order to enable user interaction with the one or more applications via the one or more clients.
57. The system of claim 56, further comprising a media output configured to display output of the one or more applications shown by the one or more client applications.
58. The system of claim 56 wherein the coupling between the one or more client applications and the one or more applications comprises communication of a coordinate and event tag pair.
59. The system of claim 56 wherein the client further comprises one or more hardware devices.
60. The system of claim 59 wherein the one or more applications are coupled with the one or more hardware devices.
61. The system of claim 60 wherein the coupling comprises communication of one or more hardware values.
62. The system of claim 61, wherein the one or more applications are configured to process hardware values from the hardware devices.
63. A method for generating a dynamic user interface comprising the steps of:
Creating and initiating instances of an application on one or more servers; and
Coupling the instances of applications on one or more servers with one or more clients located remotely with respect to the one or more servers to enable user interaction with the one or more applications using the one or more clients.
64. The method of claim 63 further comprising the step of displaying output of the one or more applications on one or more media outputs using one or more client applications.
65. The method of claim 64 wherein the step of coupling comprises the step of coupling output of the applications shown by the client application displayed on the media output to the one or more corresponding instances of applications.
66. The method of claim 65 wherein the step of coupling output of the applications shown by the client application displayed on the media output to the one or more corresponding applications comprises communicating one or more coordinate and event tag pairs.
67. The method of claim 63 further comprising the step of coupling the one or more applications to one or more hardware devices residing on the one or more clients.
68. The method of claim 67 wherein the step of coupling the one or more applications to one or more hardware devices residing on the one or more clients further comprises communication of one or more hardware values.
69. The method of claim 68 further comprising the step of the one or more applications processing the one or more hardware values.
70. A method for configuring a touch-sensitive area, a first layout, or generating a first view of a first app executed on a first environment according to a second layout or a second view of a second app executed on a second environment, the method comprising the steps of:
receiving a pair of XY-coordinates; and
generating a first view at a location on the first layout of the first app according to the received XY-coordinates.
71. The method of claim 70 further comprises the steps of:
receiving a tag representing/corresponding to a kind of events the first view handles,
wherein the kind of events handled by the first view is the same as or similar to that handled by the second view of the second app; and configuring the first view to have a function corresponding to the tag.
72. A method for transmitting hardware values that come from hardware drivers from a first environment to a second environment, comprising the steps of:
coupling with the second environment;
receiving a hardware setting from the second environment;
configuring to receive a hardware value from a driver based on the hardware setting; and
transmitting the received hardware value to the second environment.
73. The method of claim 72 further comprises the step of:
coupling with the driver for receiving the hardware value from the driver accordingly after receiving the hardware setting.
74. The method of claim 72 further comprises the steps of:
coupling with a plurality of drivers before receiving the hardware setting; and transmitting the hardware value to the second environment from the driver selected by the hardware setting.
75. The method of claim 72 further comprises the steps of:
buffering the hardware value when there is no network service; and
transferring the hardware value to the second environment when there is a network service.
76. A method for transmitting hardware values that come from hardware drivers from a first environment to a second app executed on a second environment, comprising the steps of:
coupling with a virtual machine;
receiving a hardware setting from the virtual machine;
coupling with a driver according to the hardware setting;
receiving a hardware value from the driver; and
transmitting the hardware value to the second app.
77. The method of claim 76 further comprises the steps of:
buffering the hardware value when there is no network service; and
transferring the hardware value to the second app when there is a network service.
78. A method for initiating an activity of a first app according to a second app executed on a virtual machine, comprising the steps of:
receiving a touch event;
transmitting a pair of XY-coordinates representing a location on a screen which is touched to generate the touch event to the virtual machine;
receiving a configuration to initiate a first activity in the first app; and transmitting a first result generated by the first activity when receiving the touch event to the virtual machine.
79. The method of claim 78 further comprises the step of:
receiving two pairs of XY-coordinates representing a position of a left-top corner and a position of a right-bottom corner of an area which can be touched to generate the touch event.
80. The method of claim 79 further comprises the step of:
initiating the first activity if the area is touched.
81. The method of claim 80 further comprises the step of:
transmitting a second result generated by the first activity when receiving the touch event to the virtual machine.
82. The method of claim 78 further comprises the step of:
receiving a pair of XY-coordinates and a value of radius representing a center and a radius of a circular area which can be touched to generate the touch event.
83. The method of claim 82 further comprises the step of:
initiating the first activity if the area is touched.
84. The method of claim 82 further comprises the step of:
transmitting a second result generated by the first activity when receiving the touch event to the virtual machine.
85. The method of claim X wherein the hardware comprises at least one of Bluetooth, Wi-Fi, NFC, Camera, GPS, gyroscope, compass and accelerometer.
86. The method of claim X wherein the virtual machine comprises an operating system capable of hosting applications.
87. A method of receiving an advertisement on a first app on a first computing device, comprising the steps of:
transmitting first XY-coordinates where a UI of the first app is touched to a virtual machine;
receiving second XY-coordinates from the virtual machine or another server; and receiving the advertisement when or before the first app is configured to form a touching area based on the second XY-coordinates,
wherein the touching area is capable of being touched to initiate an activity.
88. A method of receiving an advertisement on a first app on a first computing device, comprising the steps of:
receiving XY-coordinates for configuring the first app to form a touching area on its UI or generating an activity when the UI of the first app is displaying an opening screen of the first app or the advertisement,
wherein the touching area is capable of being touched to initiate an activity.
EP14834567.1A 2013-08-07 2014-08-07 Methods and systems for generating dynamic user interface Withdrawn EP3014387A4 (en)

Applications Claiming Priority (5)

Application Number Priority Date Filing Date Title
US201361862967P 2013-08-07 2013-08-07
US201461922860P 2014-01-01 2014-01-01
US201461951548P 2014-03-12 2014-03-12
US201461971029P 2014-03-27 2014-03-27
PCT/US2014/050248 WO2015021341A1 (en) 2013-08-07 2014-08-07 Methods and systems for generating dynamic user interface

Publications (2)

Publication Number Publication Date
EP3014387A1 true EP3014387A1 (en) 2016-05-04
EP3014387A4 EP3014387A4 (en) 2017-01-04

Family

ID=52461949

Family Applications (1)

Application Number Title Priority Date Filing Date
EP14834567.1A Withdrawn EP3014387A4 (en) 2013-08-07 2014-08-07 Methods and systems for generating dynamic user interface

Country Status (4)

Country Link
EP (1) EP3014387A4 (en)
JP (1) JP6145577B2 (en)
CN (1) CN106062663A (en)
WO (1) WO2015021341A1 (en)

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5910903A (en) * 1997-07-31 1999-06-08 Prc Inc. Method and apparatus for verifying, analyzing and optimizing a distributed simulation
US9003461B2 (en) * 2002-12-10 2015-04-07 Ol2, Inc. Streaming interactive video integrated with recorded video segments
JP2006079292A (en) * 2004-09-08 2006-03-23 Aruze Corp Content trial system
US8100766B2 (en) * 2007-03-23 2012-01-24 Electrified Games, Inc. Method and system for personalized digital game creation
KR20100028974A (en) * 2008-09-05 2010-03-15 엔에이치엔(주) Method and system for managing cooperative online game
US8866701B2 (en) * 2011-03-03 2014-10-21 Citrix Systems, Inc. Transparent user interface integration between local and remote computing environments
JP2012236327A (en) * 2011-05-11 2012-12-06 Canon Inc Printing apparatus, method of controlling the same, and program
US9672355B2 (en) * 2011-09-16 2017-06-06 Veracode, Inc. Automated behavioral and static analysis using an instrumented sandbox and machine learning classification for mobile security
EP2611207A1 (en) * 2011-12-29 2013-07-03 Gface GmbH Cloud-rendered high-quality advertisement frame

Also Published As

Publication number Publication date
JP6145577B2 (en) 2017-06-14
EP3014387A4 (en) 2017-01-04
CN106062663A (en) 2016-10-26
WO2015021341A1 (en) 2015-02-12
JP2016533576A (en) 2016-10-27

Similar Documents

Publication Publication Date Title
US11385760B2 (en) Augmentable and spatially manipulable 3D modeling
US8352903B1 (en) Interaction with partially constructed mobile device applications
CN111198730B (en) Method, device, terminal and computer storage medium for starting sub-application program
CN102362251B (en) For the user interface providing the enhancing of application programs to control
US8239840B1 (en) Sensor simulation for mobile device applications
EP2184668B1 (en) Method, system and graphical user interface for enabling a user to access enterprise data on a portable electronic device
AU2011358860B2 (en) Operating method of terminal based on multiple inputs and portable terminal supporting the same
US20130041938A1 (en) Dynamic Mobile Interaction Using Customized Interfaces
US10768881B2 (en) Multi-screen interaction method and system in augmented reality scene
US20190391715A1 (en) Digital supplement association and retrieval for visual search
KR20160141838A (en) Expandable application representation
JP7104242B2 (en) Methods for sharing personal information, devices, terminal equipment and storage media
EP2553561A2 (en) Interacting with remote applications displayed within a virtual desktop of a tablet computing device
KR20140147095A (en) Instantiable gesture objects
US20240320269A1 (en) Digital supplement association and retrieval for visual search
KR20160140932A (en) Expandable application representation and sending content
WO2019157870A1 (en) Method and device for accessing webpage application, storage medium, and electronic apparatus
JP2024112912A (en) Digital supplemental association and retrieval for visual search
KR101710667B1 (en) Device and method for providing service application using robot
Helal et al. Mobile platforms and development environments
WO2022083554A1 (en) User interface layout and interaction method, and three-dimensional display device
JP2002169640A (en) Information processing equipment, method and recording medium
US10290151B2 (en) AR/VR device virtualisation
JP6145577B2 (en) Method and system for generating a dynamic user interface
US10845953B1 (en) Identifying actionable content for navigation

Legal Events

Code | Title | Description
PUAI | Public reference made under article 153(3) EPC to a published international application that has entered the european phase | Free format text: ORIGINAL CODE: 0009012
17P | Request for examination filed | Effective date: 20160128
AK | Designated contracting states | Kind code of ref document: A1; Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR
AX | Request for extension of the european patent | Extension state: BA ME
DAX | Request for extension of the european patent (deleted) |
A4 | Supplementary search report drawn up and despatched | Effective date: 20161206
RIC1 | Information provided on ipc code assigned before grant | Ipc: G06F 17/30 20060101AFI20161130BHEP
STAA | Information on the status of an ep patent application or granted ep patent | Free format text: STATUS: THE APPLICATION HAS BEEN WITHDRAWN
18W | Application withdrawn | Effective date: 20170519