WO2010067365A2 - System and methods for adapting applications to incompatible output devices - Google Patents

System and methods for adapting applications to incompatible output devices

Info

Publication number
WO2010067365A2
WO2010067365A2 (PCT/IL2009/001176)
Authority
WO
WIPO (PCT)
Prior art keywords
display
text
program
input
display device
Prior art date
Application number
PCT/IL2009/001176
Other languages
French (fr)
Other versions
WO2010067365A3 (en)
Inventor
Yaakov Romano
Alon Jacob Barnea
Yuval Drori
Yochai Shefi Simchon
Peter Ostrin
Ariel Ben Moshe
Original Assignee
Graphtech Computer Systems Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Graphtech Computer Systems Ltd. filed Critical Graphtech Computer Systems Ltd.
Publication of WO2010067365A2 publication Critical patent/WO2010067365A2/en
Publication of WO2010067365A3 publication Critical patent/WO2010067365A3/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/14Digital output to display device ; Cooperation and interconnection of the display device with other functional units
    • G06F3/1423Digital output to display device ; Cooperation and interconnection of the display device with other functional units controlling a plurality of local displays, e.g. CRT and flat panel display
    • G06F3/1431Digital output to display device ; Cooperation and interconnection of the display device with other functional units controlling a plurality of local displays, e.g. CRT and flat panel display using a single graphics controller
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G5/00Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
    • G09G5/003Details of a display terminal, the details relating to the control arrangement of the display terminal and to the interfaces thereto
    • G09G5/005Adapting incoming signals to the display format of the display terminal

Definitions

  • This invention generally relates to input devices and output devices for computer software and more specifically to mobile communication devices having small screens and limited power.
  • Wireless Internet, which utilizes small screens of mobile communication devices instead of full-size personal computer screens, is known.
  • Text-to-speech technology is known.
  • Terminals, such as mobile communication devices, which sense their own tilt relative to a fixed frame of reference such as the earth, are known.
  • There are cases where the terminal that is used to interact with the application has different characteristics and capabilities than those that the application was designed for. Such differences include differences in display size, display resolution, and the type and availability of a keyboard, mouse device, touch screen, tablet, etc. Likewise, there are cases where the terminal that is used to interact with the application is used in a very different work environment (outside noise, direct sunlight, etc.). Certain embodiments of the present invention provide techniques to address these differences and modify the application behavior during run-time.
  • API Interception refers to a method whereby an application's API functions are replaced with modified versions of those functions. As a result, the application's behavior is modified and new functionality is added. API Interception is a technique known in the art and may be implemented using any suitable known methodology.
  • an API Interception method is used to modify application behavior and adapt it to remote terminal capabilities.
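As a loose illustrative sketch of this interception idea (not the patent's actual implementation; the graphics API and all names here are invented stand-ins), a drawing function can be replaced at run-time with a wrapper that rewrites the call's arguments for the remote terminal before invoking the original:

```python
# Hypothetical sketch of API interception. `GraphicsAPI` and `draw_text`
# are invented stand-ins for the real API the application calls.

class GraphicsAPI:
    """Stand-in for the API the unmodified application calls."""
    def draw_text(self, text, x, y, font_size):
        return f"drew '{text}' at ({x},{y}) size {font_size}"

def intercept(api, scale):
    """Replace api.draw_text with a version adapted to the remote terminal."""
    original = api.draw_text            # keep a reference to the real function
    def adapted_draw_text(text, x, y, font_size):
        # Scale coordinates down for the smaller remote screen, but keep the
        # font above a minimum size so the text stays legible.
        return original(text, int(x * scale), int(y * scale),
                        max(font_size, 12))
    api.draw_text = adapted_draw_text   # the application now calls the wrapper

api = GraphicsAPI()
intercept(api, scale=0.5)
print(api.draw_text("Score: 100", 200, 80, 8))
# → drew 'Score: 100' at (100,40) size 12
```

The application itself is untouched; only the binding from its API call to the executed code changes, which is the essence of the technique.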
  • Additional complexity may be introduced through use of advanced graphics and media conversion (text to voice, voice commands) if it is desired to adapt the application to offer good user experience on a small device.
  • Implementing such adaptations is not feasible on remote devices as they require extensive computational and graphic power.
  • An improved functionality for running the application from a remote terminal can be achieved by redesigning and coding the application, yet it would be of great value and significant advantage to provide a method which does not require modification of the application source code, nor of its binary image.
  • API Interception techniques that enable tracking of the API calls made by an application and based on predefined rules or algorithms, may modify them if and as appropriate, on the fly, while the original application runs on the PC.
  • a particular feature of certain embodiments of the present invention is the ability to implement the desired adaptation of the application to the new device and use case by adding a client and server SW layer, together intercepting the relevant API calls made by the application and applying new rules which affect their execution and how they would look on the remote device.
  • Certain embodiments of the present invention provide a system, methods and apparatus for running visual applications from a remote terminal that is connected to a host computer over a network.
  • certain embodiments of the present invention comprise methods of manipulating the application's visual, text objects and imagery in general, such that it is useful on a remote terminal that has different characteristics than the display/input (in general, terminal) that the application was originally designed for.
  • Certain embodiments of the present invention pertain to an application, such as but not limited to a game, written for a particular device and context.
  • output instructions which may for example affect visual and/or audio output, are intercepted and adapted for the capabilities of a different device and/or context.
  • the displayed font may be dynamically increased relative to the scaling factor between the PC and the smaller display (e.g. the increase is determined after a specific client with a specific screen size has been connected, and/or is determined responsive to a scaling factor change during a lasting connection, such as a client's screen changing from portrait to landscape).
  • soft keys may be dynamically created and displayed on the screen.
  • a scaling factor may change during a lasting connection e.g. if a client's screen's orientation changes from portrait to landscape or vice versa.
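A minimal sketch of such a scaling-factor computation (function names, the choice of the smaller axis ratio, the boost factor, and the minimum size are all invented assumptions, not the patent's formula):

```python
# Hypothetical sketch: recompute the font scaling factor when a client
# connects or when its screen rotates between portrait and landscape.

def scale_factor(host_res, client_res):
    """Ratio between client and host resolutions, taking the tighter axis."""
    return min(client_res[0] / host_res[0], client_res[1] / host_res[1])

def adapted_font_size(base_size, host_res, client_res, minimum=9):
    """Scale the font with the display, but never below a readable minimum."""
    s = scale_factor(host_res, client_res)
    # The boost factor 2 is an arbitrary illustrative choice.
    return max(int(base_size * s * 2), minimum)

host = (1024, 768)
portrait, landscape = (240, 320), (320, 240)
# Rotating the client screen changes the tighter axis, hence the factor:
size_portrait = adapted_font_size(16, host, portrait)
size_landscape = adapted_font_size(16, host, landscape)
```

Because the factor depends on the client resolution, a portrait-to-landscape rotation during a lasting connection yields a different adapted size, as the text above describes.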
  • a method for generating a first text display for a first display device, the first text display representing a second text display generated by a program for a second display device, the method comprising identifying a subset of text objects, each associated with a text string and being unsuitable for display on the first display device, from among all text objects in the second text display; and generating a first text display which differs from the second text display in that at least one text object of the subset of text objects is omitted and at least a portion of the text string associated therewith is presented orally.
  • a method for generating a first text display for a first display device, the first text display representing a second text display generated by a program for a second display device, the method comprising identifying a subset of text objects, each associated with a text string and being unsuitable for display on the first display device, from among all text objects in the second text display; and generating a first text display which differs from the second text display in that at least a portion of the text string of at least one text object of the subset of text objects is displayed piecemeal.
  • At least a portion of the text string of at least one text object of the subset of text objects is displayed in ticker format.
  • the program has a source code and the identifying is performed without recourse to the source code.
  • the identifying proceeds at least partly on a basis of identifying text objects including characters which, when displayed on the first text display, are smaller than a predetermined threshold value.
  • a method for generating a first display for a first display device, the first display representing a second display generated by a program for a second display device and including a cursor, the method comprising determining whether the cursor is unsuitable for display on the first display device; and if the cursor is unsuitable, generating a first display which differs from the second display in that the cursor is omitted and replaced by a cursor suitable for display on the first display device.
  • the first display device is housed on a remote terminal.
  • the method also comprises accepting a human input defining the subset to include only text objects deemed by a human to be important to the application.
  • the human input defines the text objects deemed important in terms of at least one of the following text object characteristics: String content, location of the text object within the second text display, and color.
  • a method for generating a first text display for a first display device fixedly associated with a first input device, the first text display representing a second text display generated by a program for a second display device, the method comprising determining if the orientations of the first and second display devices are one of the following: both landscape; and both portrait; and if not, mapping directional input functions into the first input device so as to enable the first display device fixedly associated therewith to be held and used rotated 90 degrees to the orientation of the second display device.
  • the first input device when rotated 90 degrees to the orientation of the second display device, includes at least two input modules having at least two of the following relative orientations: left, right, top and bottom; and the mapping comprises mapping at least two of the following input options: go left, go right, go up and go down, into the at least two input modules respectively.
  • the first display device comprises a keyboard and each of the input modules comprises a key in the keyboard.
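The directional remapping described above can be sketched as a lookup table selected by orientation agreement (key and event names here are invented for illustration, not the patent's identifiers):

```python
# Hypothetical sketch: when the client device is held rotated 90 degrees to
# the orientation the program was designed for, each physical key is mapped
# to the directional input the program expects.

ROTATED_90 = {          # physical key -> program input when device is rotated
    "key_up": "go_right",
    "key_right": "go_down",
    "key_down": "go_left",
    "key_left": "go_up",
}
UPRIGHT = {             # identity mapping when orientations already agree
    "key_up": "go_up",
    "key_right": "go_right",
    "key_down": "go_down",
    "key_left": "go_left",
}

def map_key(key, same_orientation):
    """Choose the mapping based on whether both displays share an orientation."""
    table = UPRIGHT if same_orientation else ROTATED_90
    return table[key]
```

With such a table, the keyboard keys (the "input modules" above, at left/right/top/bottom relative orientations) each receive one of the go-left/right/up/down options.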
  • the method also comprises providing a display device database storing at least one display characteristic of each of a plurality of display devices.
  • the program comprises a game.
  • a method for running a program written for a first input device having a first plurality of states and associated with a first display device on a terminal having a second input device having a second plurality of states and associated with a second display device comprising generating a display, for the second display device, which associates at least one input option with at least one of the second plurality of states.
  • the text objects being unsuitable for display comprise objects which, when re-sized proportionally to relative dimensions of the first and second text displays, are unsuitable for viewing on the first text display.
  • the cursor unsuitable for display comprises a cursor which, when re-sized proportionally to relative dimensions of the first and second display devices, is unsuitable for viewing on the first display device.
  • a system for adapting objects generated by programs and having output characteristics to run on each of a plurality of terminals each including a different output device comprising a terminal data repository operative to accept information regarding at least one characteristic of the output device of each of the plurality of terminals; and a graphics object modifier operative to modify at least one output characteristic of a graphics object outbound to an individual output device according to the at least one characteristic of the individual output device.
  • the graphics object modifier is operative to perform a global modification on at least most objects generated by an individual program outbound for an individual terminal; and to perform local modifications on at least one object generated by the individual program which, having undergone the global modification, becomes unsuitable for display on the output device of the individual terminal.
  • At least one of the terminals also includes an input device.
  • at least one of the output devices comprises a visual display device.
  • the modifier is operative to perform at least one of the following operations on at least a portion of at least one object: translation, rotation, scaling, occluding.
  • the modifier is operative to modify at least one of the color, texture, brightness and contrast of at least a portion of at least one object.
  • the characteristic of the output device includes an indication of whether the output device is intended for use outside or inside and the graphics object modifier is operative to modify at least one of at least one graphic object's brightness and contrast accordingly.
  • a method for modifying a program for display on a first display device wherein the program generates a plurality of display screens suitable for display on a second display device which differs from the first display device, the method comprising, for at least one display screen, identifying first and second portions of the display screen which can be rendered semi-transparently and superimposed onto one another; rendering the first and second portions of the display screen semi-transparently; and superimposing the first and second portions of the display screen onto one another.
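A minimal sketch of the semi-transparent superimposition step, treating the two screen portions as lists of grayscale pixel values (an invented simplification; real rendering would blend full images):

```python
# Hypothetical sketch: alpha-blend two equally sized screen portions so both
# remain visible in a single region of the smaller display.

def superimpose(region_a, region_b, alpha=0.5):
    """Blend two pixel sequences; alpha weights region_a, (1 - alpha) region_b."""
    return [int(a * alpha + b * (1 - alpha)) for a, b in zip(region_a, region_b)]

blended = superimpose([200, 200, 200], [0, 100, 50])
# Each output pixel is the weighted average of the two input pixels.
```

At alpha = 0.5 both portions contribute equally, which is the semi-transparent rendering the method calls for.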
  • a system for adapting a multi-mode program to run on a terminal including an output device and an input device capable of generating a first set of input events, the program being operative to branch responsive to occurrences of input events from among a second set of pre-defined input events, the system comprising an input event mapper operative to receive an event from the first set of input events and to generate, responsively, at least a simulation of an event from the second set of input events, thereby to cause the program to branch, wherein the event from the second set of input events generated at least in simulation by the input event mapper responsive to receiving an event from the first set of input events depends at least partly on the mode in which the program is operating.
  • the mapping of outgoing terminal events to incoming game events may be performed differently within each of the modes of the game or application. For example, if the game has 3 modes I, II and III which accept 2, 3 and 4 different input events respectively, and the terminal is capable of generating only four input events A, B, C and D, then A and B may be mapped to the 2 input events of Mode I respectively if the game is in Mode I. Input events C and D may be regarded as non-events if the game is in Mode I.
  • mapping refers to generating a particular input event that the game or application is capable of understanding, responsive to production by the terminal of a certain one of the input events that the terminal is capable of generating.
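The mode-dependent mapping walked through above (modes I, II and III accepting 2, 3 and 4 events; terminal events A-D) can be sketched as a per-mode lookup table, with unmapped events treated as non-events. The event names are illustrative:

```python
# Sketch of the mode-dependent event mapping described in the example above.

MODE_MAPS = {
    "I":   {"A": "game_evt_1", "B": "game_evt_2"},            # C, D ignored
    "II":  {"A": "game_evt_1", "B": "game_evt_2", "C": "game_evt_3"},
    "III": {"A": "game_evt_1", "B": "game_evt_2",
            "C": "game_evt_3", "D": "game_evt_4"},
}

def map_event(mode, terminal_event):
    """Return the game input event to simulate, or None for a non-event."""
    return MODE_MAPS[mode].get(terminal_event)
```

In Mode I, events C and D map to None and are simply dropped, matching the "non-events" behavior above.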
  • the program comprises at least one game
  • the first set of input events comprises a set of voice commands
  • the second set of input events comprises a set of application commands.
  • the program comprises at least one game and the set of application commands comprises a set of game controls.
  • a system for adapting a program to run on a terminal including an output device and being capable of sensing its own tilt relative to a fixed frame of reference, the program being operative to branch responsive to occurrences of input events from among a set of pre-defined input events, the system comprising an input event mapper operative to receive a tilt value sensed by the terminal and to generate, responsively, at least a simulation of an event from the set of input events, thereby to cause the program to branch.
  • a system for generating a first text display for a first display device, the first text display representing a second text display generated by a program for a second display device
  • the system comprising a text object analyzer operative to identify a subset of text objects, each associated with a text string and being unsuitable for display on the first display device, from among all text objects in the second text display; and a text display modifier operative to generate a first text display which differs from the second text display in that at least one text object of the subset of text objects is omitted and at least a portion of the text string associated therewith is presented orally.
  • a system for generating a first text display for a first display device, the first text display representing a second text display generated by a program for a second display device
  • the system comprising a text object analyzer operative to identify a subset of text objects, each associated with a text string and being unsuitable for display on the first display device, from among all text objects in the second text display; and a text display modifier operative to generate a first text display which differs from the second text display in that at least a portion of the text string of at least one text object of the subset of text objects is displayed piecemeal.
  • a system for generating a first display for a first display device, the first display representing a second display generated by a program for a second display device and including a cursor
  • the system comprising a cursor analyzer operative to determine whether the cursor is unsuitable for display on the first display device; and a display modifier operative, if the cursor is unsuitable, to generate a first display which differs from the second display in that the cursor is replaced by a cursor suitable for display on the first display device.
  • a system for generating a first text display for a first display device fixedly associated with a first input device, the first text display representing a second text display generated by a program for a second display device, the system comprising a display device orientation analyzer operative to determine if the orientations of the first and second display devices are one of the following: both landscape; and both portrait; and a directional input function mapper operative, if not, to map directional input functions into the first input device so as to enable the first display device fixedly associated therewith to be held and used rotated 90 degrees to the orientation of the second display device.
  • a system for running a program written for a first input device having a first plurality of states and associated with a first display device on a terminal having a second input device having a second plurality of states and associated with a second display device comprising an input option associator operative to generate a display, for the second display device, which associates at least one input option with at least one of the second plurality of states.
  • a method for adapting objects generated by programs and having output characteristics to run on each of a plurality of terminals each including a different output device comprising accepting information regarding at least one characteristic of the output device of each of the plurality of terminals; and modifying at least one output characteristic of a graphics object outbound to an individual output device according to the at least one characteristic of the individual output device.
  • a system for modifying a program for display on a first display device wherein the program generates a plurality of display screens suitable for display on a second display device which differs from the first display device
  • the system comprising a display screen area analyzer operative, for at least one display screen, to identify first and second portions of the display screen which can be rendered semi-transparently and superimposed onto one another; a rendering functionality operative to render the first and second portions of the display screen semi-transparently; and a superimposing functionality operative to superimpose the first and second portions of the display screen onto one another.
  • a method for adapting a multi-mode program to run on a terminal including an output device and an input device capable of generating a first set of input events, the program being operative to branch, responsive to occurrences of input events from among a second set of pre-defined input events, the method comprising receiving an event from the first set of input events and generating, responsively, at least a simulation of an event from the second set of input events, thereby to cause the program to branch, wherein the event from the second set of input events generated at least in simulation responsive to receiving an event from the first set of input events depends at least partly on the mode in which the program is operating.
  • a method for adapting a program to run on a terminal including an output device and being capable of sensing its own tilt relative to a fixed frame of reference, the program being operative to branch responsive to occurrences of input events from among a set of pre-defined input events, the method comprising receiving a tilt value sensed by the terminal and generating, responsively, at least a simulation of an event from the set of input events, thereby to cause the program to branch.
  • systems, methods and apparatus that dynamically adapt an application running on a host computer and which add functionality, such that the user is able to run an application from a remote terminal connected to the host computer e.g. via a communication network or via analog modems or by any other suitable technology or scheme.
  • the remote terminal may comprise a computing device that has means for display and optionally has user input receiving functionality, such as but not limited to a cellular phone, PDA, TV set top box (STB), TV set, or a desktop computer.
  • the system may dynamically modify the display that is rendered by the application to match the remote terminal capabilities.
  • the system dynamically and/or statically modifies the user's inputs to match application requirements.
  • a static modification is a change of key map effected by a user.
  • the functionality of running the application from a remote terminal preferably does not require modification to either the application's source code or its binary image. Instead, the system may use API Interception techniques that enable it to track API calls made by the application and modify these as appropriate e.g. as described below.
  • Adaptations suitable for specific applications may be described and stored, e.g. in XML format or in a configuration file which may be stored and read on the server and/or the client.
  • the configuration file may be built by editing a text file or by using automated and specific tools.
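As a loose illustration of such a configuration file (the schema, element names and attribute values below are invented for the example, not the patent's actual format), per-application adaptations could be expressed in XML and parsed with Python's standard library:

```python
# Hypothetical per-application adaptation config, parsed with the stdlib.
import xml.etree.ElementTree as ET

CONFIG = """
<app name="racer">
  <filter type="text" match="Score" action="ticker"/>
  <filter type="geometry" match="popup" action="enlarge" factor="2"/>
</app>
"""

root = ET.fromstring(CONFIG)
# Each <filter> element becomes a small rule the adaptation layer can apply.
filters = [f.attrib for f in root.findall("filter")]
```

The server and/or client would read such a file at start-up and build the corresponding object filters from it.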
  • Those adaptations may be expressed as a set of filters, also termed herein "object filters", typically including at least one of the following three types of filters:
  • Geometry filter - Applied to geometry rendered by the application. For example, a geometry filter which is used to intercept a certain "pop up message box", or to intercept a certain graphic element which appears on the screen and to enlarge it, so it is seen better on the client's screen.
  • Text filter - Applied to text displayed by the applications.
  • For example, text filters may be used to intercept a certain string and to present it as speech, e.g. via a suitable text-to-speech mechanism, or to display it as a ticker on the client's screen.
  • Pixel filter - Applied to an image rendered by the application. For example, a filter which is used to highlight/mark a certain region of the client's screen which was modified, or a filter which is used to enhance the image in terms of level of detail and/or sharpness.
  • An application can have any number, including 0, of each of the above 3 types of filters and these may be applied sequentially to the application's API calls.
  • the specific object filters to be used by a particular application may be specified in the App specific section of the server configuration file.
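The sequential application of filters to an intercepted API call can be sketched as follows. The filter classes, the call format, and the matching string are invented for illustration; each filter pairs an identification test with an action, and filters are applied in order:

```python
# Hypothetical sketch of a filter chain applied to an intercepted API call.

class Filter:
    def matches(self, call): raise NotImplementedError   # identification
    def apply(self, call):   raise NotImplementedError   # action

class TextEnlargeFilter(Filter):
    """Geometry-style filter: enlarge one specific string so it reads well
    on the client's smaller screen."""
    def matches(self, call):
        return call.get("op") == "draw_text" and call.get("text") == "GAME OVER"
    def apply(self, call):
        call = dict(call)                      # leave the original call intact
        call["font_size"] = call["font_size"] * 2
        return call

def run_filters(call, filters):
    """Apply each matching filter, in order, to the intercepted call."""
    for f in filters:
        if f.matches(call):
            call = f.apply(call)
    return call

call = {"op": "draw_text", "text": "GAME OVER", "font_size": 10}
out = run_filters(call, [TextEnlargeFilter()])
```

An application may register zero or more filters of each type; calls that match no filter pass through unchanged.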
  • manipulating may for example comprise any of the following in isolation or in any combination: translation, rotation, scaling, occluding, or changing the color, texture or other appearance attributes of an object.
  • Also provided, in accordance with certain embodiments of the present invention, is a method for identifying text objects and converting them into an audio message that is played on a remote terminal.
  • a method for identifying text objects and displaying them in a dedicated ticker, or moving text box, on a remote terminal is also provided. Additionally provided, in accordance with certain embodiments of the present invention, is a method for presenting multiple graphic objects at a single screen location using transparencies.
  • Also provided, in accordance with certain embodiments of the present invention, is a method for translating an attitude or tilt of a remote terminal capable of sensing tilt into game commands.
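A minimal sketch of such tilt-to-command translation (the angle convention, dead zone, and command names are invented assumptions, not the patent's values):

```python
# Hypothetical sketch: translate a sensed roll angle into a simulated
# directional game command, ignoring small tilts inside a dead zone.

def tilt_to_command(roll_degrees, dead_zone=10):
    """Map left/right roll beyond the dead zone into steering commands."""
    if roll_degrees > dead_zone:
        return "go_right"
    if roll_degrees < -dead_zone:
        return "go_left"
    return None   # within the dead zone: no input event is simulated
```

The returned command would then be injected as a simulated input event, causing the program to branch exactly as if a key had been pressed.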
  • an external modification (external to an application's own source code or binary files) is made to an application which displays text on a screen.
  • the modification may generally change, e.g. decrease, the size of an output screen generated by the application to fit a differently sized, e.g. smaller, screen, and process the image for adaptation to the smaller screen in terms of, for example, level of detail, sharpness, or color range.
  • for text rendered in a font which, if decreased, becomes hard to read, other solutions are found, such as but not limited to: oral presentation of the text, using conventional text-to-speech techniques; enlarging the font of only a portion of the text and omitting other portions, in which case the text object may stay the same size; presenting the text and another portion of the output screen superimposed on one another, wherein at least one of the superimposed portions is transparent; and presenting the text piecewise within a text object of the same size, e.g. using a ticker-type format in which text is displayed one letter or word at a time, at reading pace.
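The ticker-type format mentioned above can be sketched as a sliding window over the string (a simplified illustration; frame timing and actual rendering are omitted):

```python
# Hypothetical sketch of piecemeal (ticker) display: a string too long for
# the small screen is shown a fixed-width window at a time.

def ticker_frames(text, width):
    """Yield successive fixed-width windows over the text, one per frame."""
    padded = " " * width + text + " " * width   # lead-in/out so text scrolls fully
    for i in range(len(padded) - width + 1):
        yield padded[i:i + width]

frames = list(ticker_frames("HELLO", 3))
# The first frame is blank; the text then scrolls through the 3-char window.
```

Displaying one frame at a time, at reading pace, presents the whole string within a text object of constant size, which is exactly the piecemeal display defined here.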
  • an application written for a first display device may be operative to render one or more objects, and an external (out of source code) modification of the application is effected which generally diminishes the size of the objects; however, objects with text or other detail that is deemed, either by human input or by a computerized criterion, to be unsuitable for display on another given display device, receive special treatment.
  • an object might be diminished less in size, and optionally translated to another location on the screen, and/or rotated to another orientation, such that its relatively large size is less critical and does not obscure critical elements.
  • API calls generated by an application written for a source terminal including an output device and optionally an input device go through filters which adapt these calls to a target terminal which differs from the source terminal.
  • Each such filter includes an "identification" functionality which determines whether a particular API call deserves special treatment and an "action" functionality which stipulates what that treatment is to be.
  • the application may be a multi-mode application in which case filters may treat objects rendered by the application differently as a function of which mode the application is in when these objects occur.
  • a terminal which has a small number of input keys or no keys is used to provide input to an application written for a terminal which has a larger number of input keys.
  • the terminal used to provide input can generate voice commands, these may be translated, typically externally of the source code of the application, into input events recognized by the application. For example, mouse input events may be translated into touch screen input events, or vice versa.
  • the application has more than one mode, and the inputs generated by the terminal used to provide input are translated differently, depending on the mode the application is in. For example, the "4" key on a cellular telephone may be interpreted as a leftward arrow if the application is in a first mode and may be interpreted as an upward arrow or as a "yes" if the application is in a second mode.
  • Any suitable processor, display and input means may be used to process, display, store and accept information, including computer programs, in accordance with some or all of the teachings of the present invention, such as but not limited to a conventional personal computer processor, workstation or other programmable device or computer or electronic computing device, either general-purpose or specifically constructed, for processing; a display screen and/or printer and/or speaker for displaying; machine-readable memory such as optical disks, CD-ROMs, magneto-optical discs or other discs, RAMs, ROMs, EPROMs, EEPROMs, magnetic or optical or other cards, for storing; and a keyboard or mouse for accepting.
  • processor as used above is intended to include any type of computation or manipulation or transformation of data represented as physical, e.g. electronic, phenomena.
  • the above devices may communicate via any conventional wired or wireless digital communication means, e.g. via a wired or cellular telephone network or a computer network such as the Internet.
  • the apparatus of the present invention may include, according to certain embodiments of the invention, machine readable memory containing or otherwise storing a program of instructions which, when executed by the machine, implements some or all of the apparatus, methods, features and functionalities of the invention shown and described herein.
  • the apparatus of the present invention may include, according to certain embodiments of the invention, a program as above which may be written in any conventional programming language, and optionally a machine for executing the program such as but not limited to a general purpose computer which may optionally be configured or activated in accordance with the teachings of the present invention.
  • object is used herein in the broadest sense of the word known in the art of programming to include, inter alia, a set of information sufficient to render a visual display of an item.
  • program is used to include any set of commands which a processor can perform.
  • Configuration file is used to include output of an "edit stage" provided in accordance with certain embodiments of the present invention which determines which global modifications to perform on a program, and/or which local modifications to perform on which objects within the program, to enable the program to run on a different terminal.
  • Soft button is intended to include a display area on a touch screen which when touched, constitutes a particular input event.
  • piecemeal display is intended to include any and all display modes in which information is displayed portion by portion instead of all at once.
  • Fig. 1a is a simplified block diagram illustration of a software application modification system constructed and operative in accordance with certain embodiments of the present invention.
  • Fig. 1b is a simplified block diagram illustration of the connection creation process using the assistance of the rendezvous server.
  • the process of Fig. 1b comprises one possible implementation of the connection creation process indicated by arrow 115 of Fig. 1a.
  • Fig. 2 is a simplified flow diagram of a method for initializing the system of Fig. 1a.
  • Fig. 3 is a simplified flow diagram of a method for performing the "start application" step 206 of Fig. 2.
  • Fig. 4 is an example of API call Redirection which may be effected by rewriting step 304 of Fig. 3.
  • Fig. 5 is a simplified block diagram illustration of an example of a suitable data structure for the shared memory 117 of Fig. 1a.
  • Fig. 6 is a simplified block diagram illustration of client adaptation block 109 of Fig. 1a, constructed and operative in accordance with certain embodiments of the present invention.
  • Fig. 7 is a simplified flowchart illustration of a method of operation for the Geometry Filter of Fig. 6.
  • Fig. 8a is a simplified flowchart illustration of a method of operation for the Text Filter of Fig. 6.
  • Fig. 8b is a simplified flowchart illustration of a method of operation for the Pixel Filter of Fig. 6.
  • Fig. 9 is a simplified flowchart illustration of a "Say command" sequence performed by the client adaptation block 104 in Fig. 1a.
  • Fig. 10 is an example of a screenshot rendered without use of the geometry filter 601 of Fig. 6.
  • Fig. 11 is an example of a screenshot rendered using the geometry filter 601 of Fig. 6.
  • Fig. 12 is an example of a screenshot with text in the upper right corner.
  • Figs. 13A-13B are screenshots similar to the screenshot of Fig. 12 except that text filter 602 of Fig. 6 has been applied to draw the text in a ticker.
  • Fig. 14 is an example of a screenshot rendered without use of the pixel filter 603 of Fig. 6.
  • Fig. 15 is an example of a screenshot rendered using the pixel filter 603 of Fig. 6, where the pixel filter is constructed and operative to perform 'highlight'.
  • Fig. 16 is a simplified flowchart illustration of a method of operation for the user input handling module 113 of Fig. 1a.
  • Fig. 17 is a simplified flowchart illustration of a key map loading process which may be performed during phase 206 of Fig. 2.
  • Fig. 18 is a simplified flowchart illustration of a method for performing step 1602 of Fig. 16, including translation of a client key to a command.
  • the host system on which the software application runs may be a user device or may be capable of servicing more than one user at a time.
  • the interfacing system may be a user device or may be capable of servicing more than one user at a time.
  • the interfacing system provides user output (for example via a display or speaker) and optionally receives user input (for example via one or more of a keyboard/pad, touch screen, mouse, orientation sensor, camera, or microphone).
  • user devices which may be used as host systems and/or as interfacing systems include but are not limited to cellular telephones, desktop computers, laptop computers, game consoles (e.g. Sony Playstation 3, Nintendo Wii, Microsoft Xbox), and PDAs.
  • the software application may be any suitable application.
  • any suitable interfacing system, software application, and host system may be employed, for the purposes of example and clarification, the specification describes, in addition to the general case, a particular embodiment in which a user uses a cellular telephone to interface with a game which runs on a desktop or laptop computer.
  • the interfacing system and host system may be distinct from one another and may be coupled for example by a fixed (wired) or wireless connection.
  • One software program which allows a user to interact via a particular computer desktop with a software application running on another computer desktop is the X11 Window System; another is RealVNC, distributed at the following World Wide Web location: realvnc.com.
  • the interfacing system and host system have differing characteristics, and therefore unless the differing characteristics are taken into account it may not be optimal to interface via the interfacing system with a software application which runs on the host system.
  • Fig. 1a is a simplified functional block diagram illustration of a system 100 enabling interactive applications to run using a remote terminal, comprising various modules, e.g. as shown, according to an embodiment of the present invention.
  • Each module illustrated in Fig. 1a may be made up of any combination of software, hardware and/or firmware which performs the functions as defined and explained herein.
  • the system 100 comprises a host computer 101 and a remote terminal 102 connected via a data network 115.
  • Computer 101 may run two programs, a server program 103 and the application program 116.
  • the remote terminal computer 102 runs client program 111.
  • Fig. 1a generally illustrates a network or apparatus for adaptation of a software application, according to an embodiment of the present invention.
  • the network includes a host system 101 and an interfacing system 102 (also termed herein "remote terminal") coupled via any appropriate wired or wireless coupling 115 such as but not limited to optical fiber, Ethernet, Wireless LAN, HomePNA, power line communication, cell phone, PDA, Blackberry GPRS, Satellite, or other mobile delivery.
  • host system 101 and interfacing system 102 may communicate using one or more communication protocols of any appropriate type such as but not limited to IP, TCP, OSI, FTP, SMTP, and WiFi.
  • host system 101 and interfacing system 102 may be remotely situated from one another or may be located in proximity to one another.
  • Host system 101 may comprise any combination of hardware, software and/or firmware capable of performing the operations defined and explained herein. For simplicity of description, the description only explicitly describes the hardware, software and/or firmware in host system 101 which are directly related to implementing embodiments of the invention. For example, host system 101 is assumed to include various system software and application software. System software which runs on host system 101 and is directly related to implementing embodiments of the invention is termed "server" program 103, and the software application which runs on host system 101 and which is adapted in accordance with certain embodiments of the invention is termed herein "application" program 116. Shared memory 117 is shared by server process 103 and application process 116 and stores elements directly related to implementing some embodiments of the invention.
  • remote terminal 102 may comprise any combination of hardware, software and/or firmware capable of performing the operations defined and explained herein.
  • the description only explicitly describes the hardware, software and/or firmware in the remote terminal 102 which are directly related to implementing embodiments of the invention.
  • remote terminal 102 is assumed to include at least system software.
  • the system software which runs on remote terminal 102 and is directly related to implementing some embodiments of the invention is termed "client" program 111.
  • server program 103 includes one or more of the following modules: client adaptation module 104, audio/video encoding and streaming module 105 and input translation and injection module 106.
  • an injected DLL 118 is injected during run-time into the application process 116 which includes the original program code 107 and system provided libraries (API) 110.
  • the injected DLL 118 typically comprises an API interception module 108 and a client adaptation module 109.
  • client program 111 may include any of the following modules, inter-alia: audio and video decoding module 112, user input handling module 113, and input/output module 114. Certain embodiments of specific modules of server program 103, application program 116, and client program 111 are described below.
  • Server program 103, application program 116, and client program 111 are not necessarily bound by the modules illustrated in Fig. 1a and in some cases, any of server program 103, application program 116, and client program 111 may comprise fewer, more and/or different modules than those illustrated in Fig. 1a and/or a particular module may have more, fewer and/or different functionality than described herein.
  • modules illustrated as being separate in Fig. 1a may be part of the same module in other embodiments.
  • a particular module illustrated in Fig. 1a may be divided into a plurality of modules in other embodiments. The same is true of other block diagrams shown and described herein.
  • the system of Fig. 1a has a client/server architecture.
  • the server typically comprises a host computer that runs the visual application.
  • the server may comprise any computation device that is able to run the desired application such as but not limited to a Personal Computer (Desktop, laptop), a Game Console such as Sony Playstation 3, Nintendo Wii, Microsoft Xbox, a cell phone, or a PDA.
  • the server may launch the requested application.
  • as the application updates its display, the updated content may be retrieved and sent as a video stream to the remote terminal device.
  • the audio that is generated by the application may be captured and sent to the remote terminal as an audio stream.
  • the client software may run on a remote terminal that serves as a display device and as a user input device, such as but not limited to a Personal Computer (Desktop, laptop), a Game Console such as Sony Playstation 3, Nintendo Wii, Microsoft Xbox, a cell phone, or a PDA.
  • the remote terminal receives the video stream that is sent by the server, decodes it and presents it to the user on its screen. Similarly, the audio stream that is sent by the server may be played to the user using the local audio facilities.
  • the client software may handle user inputs such as key press, mouse move, touch screen touches, device rotation, tilts and shakes. These user input events may be translated into application commands and sent to the server which translates them into application domain events and injects them into the application.
  • the client software may connect to the server software that runs on the host computer using Internet Protocol (IP).
  • the client may connect to the server directly, by virtue of having its network address, or may create such a connection using a third computer, also termed herein a "rendezvous server", which provides the server address and assists with creating the initial connection, as described in Fig. 1b.
  • once a server becomes available, it typically notifies the rendezvous server as indicated by the "Availability Notification" arrow in Fig. 1b.
  • the client, when trying to connect to the server, first connects to the rendezvous server and queries for the server address.
  • the rendezvous server then responds to the client request and notifies the client of the server's address as indicated by the "Phase 1: Address Query" arrow in Fig. 1b. Only then, typically, does the client create a direct connection to the server as indicated by the "Phase 2: Direct Connection" arrow in Fig. 1b.
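The two-phase rendezvous flow described above can be sketched as follows. This is a minimal illustration only; the class and function names (`RendezvousServer`, `register`, `query`, `connect`) are invented for the example and are not part of the described system, which would operate over real IP sockets rather than in-memory calls.

```python
# Sketch of the rendezvous-assisted connection flow (hypothetical names):
# a server registers its address ("Availability Notification"); a client
# queries for it (Phase 1) and then connects directly (Phase 2).

class RendezvousServer:
    def __init__(self):
        self._servers = {}          # server name -> network address

    def register(self, name, address):
        # "Availability Notification": server announces itself
        self._servers[name] = address

    def query(self, name):
        # Phase 1: client asks for the server's address
        return self._servers.get(name)

def connect(client_log, rendezvous, server_name):
    address = rendezvous.query(server_name)          # Phase 1: address query
    if address is None:
        return None                                  # server unknown
    client_log.append(("direct_connect", address))   # Phase 2: direct connection
    return address

rv = RendezvousServer()
rv.register("host-101", "192.0.2.7:5900")
log = []
assert connect(log, rv, "host-101") == "192.0.2.7:5900"
assert log == [("direct_connect", "192.0.2.7:5900")]
```

In a deployment the direct connection of Phase 2 may still require NAT traversal, which is where a method such as STUN applies.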
  • connection creation process may be effected by any suitable method.
  • One optional method for the connection creation process is termed herein “Simple Traversal of UDP through NAT”.
  • the specification for connection using this method is termed "STUN" and is available on the Internet at the following World Wide Web http address: ietf.org/rfc/rfc3489.txt.
  • initial information may for example include authentication data such as keys and passwords, client capabilities and available applications on the host computer.
  • initial message exchange between the client and the server is illustrated in Fig. 2.
  • once an application is selected, it may be launched on the host computer. As the application launches, a dynamic library (DLL, Shared Object) 118 may be 'injected' into the application such that it is loaded as part of the application process 116.
  • DLL injection is a conventional technique used to run code within the address space of another process by forcing that process to load a dynamic-link library.
  • the technique is generally applicable to any operating system that supports shared libraries, although the term most commonly assumes usage on Microsoft Windows.
  • An advantage of the DLL injection technique is that it does not require access to the application source code. As such, DLL Injection is often used by third-party developers to influence the behavior of a program externally. A description of conventional library injection methods appears in Wikipedia under "DLL injection”.
  • the injected library provides replacement versions for API function calls that may be used by the application.
  • the specific API calls that are to be overridden depend on the type of object that is to be manipulated, such as but not limited to Graphics, Text, or audio types.
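The replacement-version mechanism above can be illustrated in miniature. In the real system the import table of a Windows process is rewritten so selected API entries point into the injected DLL; here a Python dictionary stands in for the import table, and the wrapper adapts the call before forwarding to the original. All names are illustrative, not the patented implementation.

```python
# Minimal stand-in for API interception: a dispatch table maps API names to
# implementations, and "injection" swaps selected entries for wrappers that
# can adapt or suppress the call before invoking the original.

api_table = {"DrawText": lambda text: ("drawn", text)}

def intercept(table, name, wrapper_factory):
    original = table[name]               # keep the OS-provided implementation
    table[name] = wrapper_factory(original)

def adapt_wrapper(original):
    def wrapped(text):
        # the adaptation layer may rewrite the call before forwarding it
        return original(text.upper())
    return wrapped

intercept(api_table, "DrawText", adapt_wrapper)
assert api_table["DrawText"]("hello") == ("drawn", "HELLO")
```

The application keeps calling "DrawText" through the table exactly as before; only the table entry changed, which is the property that makes the technique work without access to application source code.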
  • Audio and video encoding and decoding may for example be effected in accordance with known specification documents. Suitable specifications include but are not limited to the following:
  • MP3 (MPEG-1 Layer 3) for audio, a specification document for which can be found at the following http www link: iso.ch/cate/d22412.html.
  • the dotted line marked "Capture and Override Data" connecting elements 103 and 116 in Fig. 1a functions as a means of communication for data transferred between the server process and the application process.
  • data may include, but is not limited to, one or more of the following: the captured image of the application process which is provided to block 105, the override data exchange between the server and the application which is provided to block 118, state machine data tracking the application's current status, or even the captured audio from the application process.
  • Element 115 typically functions as a means of communication for all data transferred between the client and the server: such data may include, but is not limited to, one or more of the following: image and/or audio data sent from the server to the client which is processed by element 114, input injection commands sent from the client to the application via the server which are processed by element 106, server commands sent from the client (for example setROI commands which calibrate the captured image parameters), or client commands sent from the server (for example, the MoveCursor command which changes the cursor location on the client's screen).
  • Fig. 2, which describes a startup sequence for the system of Fig. 1a, is now described in detail.
  • the server program 103 is assumed to start before the initial client connection. In a typical embodiment, this may occur upon the host computer boot.
  • the user starts a session by starting the client program 111 and connecting it to the server (step 201).
  • the client and the server programs perform an authentication step 203.
  • the client 111 publishes its capabilities to the server 103.
  • examples of such capabilities are screen resolution, video/audio decoding capabilities, keyboard type, mouse type, and touch screen. This data may be used later to adapt the application to a particular client type.
  • the client may simply send its model/class and the server may hold a database that maps the client type to a set of capabilities.
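The model/class shortcut described above can be sketched as a server-side lookup. The database contents and the model names below are invented for illustration; the source only states that some mapping from client type to capability set is held on the server.

```python
# Hypothetical model-to-capabilities database: instead of publishing a full
# capability list, the client sends only its model string and the server
# looks up a stored capability set for that model.

CAPABILITY_DB = {
    "phone-A": {"width": 320, "height": 240, "touch": True,  "decoder": "mp3"},
    "phone-B": {"width": 176, "height": 144, "touch": False, "decoder": "mp3"},
}

def capabilities_for(model, db=CAPABILITY_DB):
    caps = db.get(model)
    if caps is None:
        raise KeyError("unknown client model: " + model)
    return caps

assert capabilities_for("phone-A")["width"] == 320
assert capabilities_for("phone-B")["touch"] is False
```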
  • the server presents to the client 111 the list of available applications in step 205. Once the user selects the desired application, the application program 116 starts in step 206.
  • the server may present only a single application. In such case, this application may be automatically selected without further input request from the user.
  • step 206 the application program 116 is started.
  • Fig. 3 describes an example start sequence in detail. More generally, the start sequence typically comprises:
  • API Interception and communication between the application program 116 and the server program 103.
  • the server 103 typically creates a shared memory block 117, e.g. as described in Fig. 5, that is used to communicate between the client adaptation layer 109 and the client adaptation layer on the server side 104.
  • Shared memory block 117 holds the adaptation configuration data that is employed by the adaptation layer.
  • the configuration data includes the description of the filters that may be applied to the application program.
  • Fig. 3 is a simplified flow diagram of a method for performing the "start application" step 206 of Fig. 2.
  • step 301 shared memory block 117 is created.
  • step 302 API interception and DLL injection occurs.
  • step 303 the application program 116 is started.
  • the import tables of the application program may be modified (step 304) such that API calls are redirected to the code that is provided by the interception DLL 118 rather than the Operating System/Host computer provided code.
  • the injected DLL 118 is connected to the shared memory block 117 as well for further communication between the modules.
  • step 307 the application program 116 notifies the server program 103 that launch has been completed.
  • Fig. 4 describes call redirection in accordance with an embodiment of the present invention.
  • Fig. 5 describes a possible data structure for Shared memory block 117.
  • This block typically performs one or both of the following functions inter alia: (i) serving as a communication means between the two parts of the Client Adaptation blocks; and/or (ii) storing all the 'context' that is used for client adaptation.
  • the context may comprise various sub-elements such as some or all of the following, inter alia: Client Command Queue 501 - used by the application side adaptation layer 109 to send commands to the client. Examples of such commands include but are not limited to the following: display a string, 'say' a string, show/hide cursor, set cursor position, change cursor icon. In the example embodiment, these commands may be sent to the client program 111 for execution.
  • Input injection queue 502 - used by module 104 to send an input command that may be later injected to the application 116.
  • the commands may be stored in the queue and read whenever the application calls API functions to read the input queue. Examples of such commands include but are not limited to the following: IDirectInputDevice7::GetDeviceData and IDirectInputDevice7::GetDeviceState from the DirectInput API, and PeekMessage(..) and GetMessage(..) from the user32 API.
  • Client Capabilities 503 - is used to store the client capabilities. This data structure may be initialized upon client connection and may be referenced by the adaptation filters.
  • the capabilities that may be stored in the example embodiment may include, but are not limited to, some or all of the following capabilities: client display width/height, client sound capabilities, and client image decoder capabilities.
  • Frame queue 504 is used by the adaptation layer 109 to send newly acquired frames to the server 103. These frames may later be read by element 105 that may encode (compress) them and send them to the client program 111.
  • Blocks 505, 506 and 507 include the current filters descriptors (also termed herein "Current Filter Set”) e.g. as described in detail below. These filters may be initialized during step 206 by reading a per-application configuration file. While it is appreciated that any suitable file format may be used to store the configuration data, an example embodiment may use text based XML file format to store this data.
  • the current filter set may be changed upon execution of an 'application command' e.g. as described below. As described below, an 'application command' may result from an execution of filter action or a user input which sets the application into a new mode.
  • Geometry Filter Descriptors 505 - The geometry filter describes the geometry related commands that are to be modified (identification), and the actions that are to be taken. Geometry filters according to certain embodiments of the present invention are described herein below with reference to Fig. 7.
  • Text Filters Descriptors 506 - The text filters describe the text related commands that are to be modified and the action that is to be taken.
  • Text filters are initialized from a configuration file that may be read upon session initialization. Text filters according to certain embodiments of the present invention are described herein below with reference to Fig. 8A.
  • Pixel Filters Descriptors 507 - the pixel filters comprise operations applied to the result image before it is further processed and eventually sent to client 111.
  • Element 507 stores the filters that are to be applied.
  • Pixel filters are initialized from a configuration file. Pixel filters according to certain embodiments of the present invention are described hereinbelow with reference to Fig. 8b.
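The shared 'context' of Fig. 5 can be sketched as a plain data structure holding the elements 501-507 described above. In the real system this lives in an OS shared-memory block accessible to both processes; the Python class and field names here are illustrative only.

```python
# Sketch of the shared-memory context (Fig. 5): two command queues, a
# capabilities record, a frame queue, and the three filter-descriptor sets.

from collections import deque

class SharedContext:
    def __init__(self, capabilities):
        self.client_commands = deque()   # 501: e.g. ("say", "hello")
        self.input_queue = deque()       # 502: input events awaiting injection
        self.capabilities = capabilities # 503: client capabilities record
        self.frames = deque()            # 504: captured frames to encode
        self.geometry_filters = []       # 505: geometry filter descriptors
        self.text_filters = []           # 506: text filter descriptors
        self.pixel_filters = []          # 507: pixel filter descriptors

ctx = SharedContext({"width": 320, "height": 240})
ctx.client_commands.append(("say", "hello"))       # adaptation layer enqueues
ctx.input_queue.append(("key_down", "ENTER"))      # client input to inject
assert ctx.client_commands.popleft() == ("say", "hello")
assert ctx.input_queue.popleft() == ("key_down", "ENTER")
assert ctx.capabilities["width"] == 320
```

The queue discipline matters: the adaptation layer in the application process and the server process each see only their own end, which is why a shared block rather than direct calls is used.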
  • Module 109 typically comprises filters, e.g. one, some or all of the three types of filters termed herein Geometry Filters (601), Text Filters (602) and Pixel Filters (603).
  • Each filter typically comprises a software module that receives input information about API calls that may be made by the application.
  • Each filter's output typically comprises a new set of API calls that may be adapted to the remote terminal capabilities. The specific API calls that are made depend on the filter configuration e.g. as described above and stored in blocks 505-507.
  • Geometry filters (601) may be applied to the API's geometry rendering calls. Once the application calls a geometry function, the Adaptation layer compares, in step 702 (Fig. 7), the call parameters and the current graphics pipeline state against the filter's set of identification criteria which may be stored in the filter descriptor in block 505 (Fig. 5). As described in step 709, if a match is found, the filter's action may be executed. Once again, the filter action may be stored in block 505.
  • Examples of geometry filter criteria, one or more of which can be used by a filter to identify a command, include:
  • Primitive type - e.g. Triangle, Triangle strip, Triangle fan, lines list, connected line and points.
  • Primitive count - the number of primitives that are rendered.
  • Vertex stride - 2 for 2D vertices, 3 for 3D vertices.
  • Texture color - the color at a specific position of the currently bound texture.
  • Highlight - highlight the object, e.g. by drawing a cross on its geometry extents.
  • the specific rendering calls depend on the rendering API that may be used by the application.
  • the applicable rendering commands to which the geometry filters are applied may be:
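The identification step can be sketched as follows: each descriptor lists criteria (such as primitive type, primitive count and vertex stride from the list above) plus an action, and a call whose attributes satisfy every stated criterion triggers the action while any other call passes through unchanged. The dictionary layout and action encoding are invented for illustration.

```python
# Sketch of geometry-filter identification and dispatch. A descriptor's
# criteria are compared against the attributes of an intercepted draw call;
# on a match the descriptor's action is returned, otherwise the original
# OS-provided call proceeds (the "call_original" path).

def matches(descriptor, call):
    return all(call.get(key) == value
               for key, value in descriptor["criteria"].items())

def apply_geometry_filters(filters, call):
    for descriptor in filters:
        if matches(descriptor, call):
            return descriptor["action"]      # e.g. ("scale", 0.5) or "hide"
    return "call_original"                   # pass through unmodified

filters = [{"criteria": {"primitive": "triangle_strip",
                         "count": 2,
                         "stride": 2},
            "action": ("scale", 0.5)}]

hud = {"primitive": "triangle_strip", "count": 2, "stride": 2}   # a 2D overlay
scene = {"primitive": "triangle", "count": 5000, "stride": 3}    # 3D geometry
assert apply_geometry_filters(filters, hud) == ("scale", 0.5)
assert apply_geometry_filters(filters, scene) == "call_original"
```

Note how the stride criterion alone already separates flat HUD elements (2D vertices) from world geometry (3D vertices), which is often the distinction an adaptation needs.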
  • Text filters may be applied to text display commands.
  • the text filter identifies a command to be applied according to the command's parameter and according to the current state of the system.
  • the current state is typically influenced by API calls made previously.
  • Identification criteria may for example include one, some or all of the following:
  • Font characteristics - weight, italic, size, and/or family.
  • API calls in the Microsoft Windows operating system, which may be processed by the text filters may include the following calls, from the GDI, Direct3D and OpenGL APIs respectively:
  • the GDI API calls in the Microsoft Windows operating system which may be processed by the text filters, may include the following:
  • the Direct3D API calls in the Microsoft Windows operating system which may be processed by the text filters, may include the following:
  • the OpenGL API calls in the Microsoft Windows operating system which may be processed by the text filters, may include the following:
  • the list of example functions to be processed includes geometry related API, typically including occurrences in which the text is presented as part of a pre-rendered bitmap or texture.
  • the texture may be processed using an OCR (optical character recognition) module that extracts the text from the image.
  • Pixel filters may be applied to the final image that is rendered by the application. Pixel filters may be triggered by API calls that may be used by the application to present the final image to the user. Examples of such API calls include:
  • a pixel filter may be applied to parts of the image that meet a certain set of criteria. This set may include one or more of:
  • the change check can be limited to pixels that may be within a particular color range
  • Operations that can be applied by a pixel filter may include one, some or all of the following:
  • a text, once resized, may be so small as to be unreadable; or it may be large enough to read but, due to its relatively small size, changes in the text are not particularly salient to the user. For either of the above reasons, it may be desirable to highlight such text, as shown in Fig. 15, using any means which makes such text more prominent to a user. Alternatively or in addition, it may be desired to provide an elective zoom view onto the text. In the zoom view, if such is selected by a user, the text may be shown enlarged so it is large enough to be readable. Alternatively, it may be desired to provide an automatic zoom view which enlarges the text without waiting for the user to select this option, e.g. because the text is deemed so important that it must be shown to the user without allowing user discretion.
  • Fig. 7 describes the flow of Geometry Filter related activities in the example embodiment.
  • Other embodiments might use a different set of steps e.g. some or all of steps 702-707 if a different set of criteria is used to identify geometry API calls.
  • Geometry filters may be applied to Geometry related API calls (step 701). In the example embodiment these calls may include any or all of:
  • the processing of geometry commands may comprise the following three top-level steps: computing the object attributes (e.g. as per steps 702-706); comparing against the current set of filters (e.g. as per steps 707-708); and execution of result commands (e.g. steps 709 or 710).
  • the selection of the execution action may depend on the comparison that may be made in step 708.
  • if the object does not meet the filter criteria, a call (710) may be made to the original, Operating System provided API call. Otherwise, i.e. if the object does meet the filter criteria, the action (709) that may be described in the filter may be carried out.
  • Actions may include, but are not limited to, some or all of the following actions:
  • a key mapping, as stored in the currently selected key map, typically comprises a translation table that defines the actions to be taken upon user input such as key-press, mouse move, etc.
  • a suitable key mapping process is described in detail below.
  • app command - send an application command e.g. as described within the context of the input handling mechanism below; in the example embodiment, step 709 may include a combination of the commands above.
  • the operations executed in step 709 accept parameters in order to carry out their actions.
  • These parameters may be defined in the configuration file as part of the action description. These parameters may comprise: (i) constants such as color description or a pre-defined application command; (ii) server related variables such as position on the screen, relative to the application window; and/or (iii) client related variables, e.g. a zoom that fits into the remote terminal display size.
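The client-related variable mentioned last, a zoom that fits the remote terminal display size, reduces to simple arithmetic; a sketch follows. The function name and the aspect-ratio-preserving policy are illustrative assumptions, since the source does not specify how the fit is computed.

```python
# Sketch of resolving a client-related action parameter: a zoom factor that
# fits an object into the remote terminal display while preserving aspect
# ratio (the smaller of the two per-axis ratios wins).

def fit_zoom(obj_w, obj_h, display_w, display_h):
    return min(display_w / obj_w, display_h / obj_h)

# A 640x80 text object shown on a 320x240 terminal is limited by width:
assert fit_zoom(640, 80, 320, 240) == 0.5
# A 100x240 object on the same terminal is limited by height:
assert fit_zoom(100, 240, 320, 240) == 1.0
```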
  • Text Filters 602 include filters that may be applied to text related API calls. Examples of such calls in the example embodiment include but are not limited to: (i) TextOut; (ii) DrawText; (iii) DrawTextEx; (iv) ExtTextOut; (v) SetText*; (vi) TabbedTextOut.
  • Text filters may also process Geometry related API calls as described for the Geometry Filters 601. In this case, the texture that is used by the geometry calls is examined using a suitable OCR (optical character recognition) algorithm such as but not limited to edge detection, neural network integration and image warping and projection, and is used to convert the texture image into a string.
  • OCR optical character recognition
  • Text filter actions in an example embodiment may include some or all of the following actions: (i) Hide - the text is not displayed; (ii) Say - e.g. as per the Say command described hereinbelow; (iii) Overlay Display - the string may be sent to the client 111 for display as a string on top of the video stream. As a result the displayed string is not subject to video scaling and compression and therefore remains readable on the remote terminal device 111. (iv) Display in ticker - as in (iii), the string may be sent to the client for display in a ticker that may be presented to the user. This method may be used when the displayed string may be expected to be longer than that which the client display can accommodate.
  • the process of sending the string may be similar to (iii) e.g. as described in detail below. (v) Scale/Translate -
  • the string may be displayed in a new position on the screen, potentially in different (scaled) size. (vi) Generate Audio Cue; (vii) Render in different font and/or color; (viii) change key map; (ix) application command.
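Dispatching a matched text filter's action can be sketched as below. The action names mirror the list above (hide / say / overlay / ticker); the implementations are trivial placeholders that record what the client would be asked to do, and the function signature is invented for the example.

```python
# Sketch of text-filter action dispatch: a matched filter's action decides
# whether the text is suppressed, spoken, overlaid, or scrolled in a ticker;
# an unmatched call falls through and the text is rendered as-is.

def run_text_action(action, text, client_log):
    if action == "hide":
        return None                              # text is not displayed
    if action == "say":
        client_log.append(("say", text))         # spoken on the client side
        return None
    if action == "overlay":
        client_log.append(("overlay", text))     # drawn on top of the video
        return None
    if action == "ticker":
        client_log.append(("ticker", text))      # displayed piecemeal
        return None
    return text                                  # no filter: render as-is

log = []
assert run_text_action("hide", "score: 10", log) is None
run_text_action("ticker", "In application message", log)
assert log == [("ticker", "In application message")]
assert run_text_action("none", "plain", log) == "plain"
```

The overlay and ticker branches explain why the text stays crisp on the terminal: the string travels as a string, bypassing the lossy video scaling path.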
  • Fig. 8a illustrates activities that may be involved in the text filter processing.
  • the texture of the geometry may be processed using an off-the-shelf OCR algorithm 810.
  • the OCR module extracts the string out of the texture pixmap (pixel map).
  • the attributes of the command may be obtained e.g.
  • command attributes may be compared against the current list of text filters (step 811). If no match is found, the API call may be executed 'as is' by the operating system provided API 110. If a match is found, the program executes the actions defined in the matched filter.
  • Pixel filters may be applied to the final image that may be rendered by the application.
  • Pixel filters may be triggered by API calls that may be used by the application to present the final image to the user. Examples of such API calls include: (i) IDirect3D9::Present; (ii) glFinish(..); (iii) glFlush(..); (iv) wglSwapLayerBuffers(..); (v) wglSwapBuffers(...);
  • a pixel filter may be applied to parts of the image that meet a certain set of criteria. This set may include: (i) Position on the screen; (ii) A change in pixels relative to other portions of the screen or relative to one or more previous images. The change check can be limited to pixels that are within a specified color range.
  • Operations that can be applied by a pixel filter may include but are not limited to some or all of the following: (i) Highlight an area; (ii) Shade an area; (iii) Scale and Zoom to a specific region of interest (iv) Radiometric transformations (Brightness, Contrast, Gamma Correction); (v) change key map; and (vi) app command.
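A pixel filter combining the criteria and operations above can be sketched as follows: compare the current frame with the previous one and highlight changed pixels inside a rectangular region of interest. Frames are lists of rows of grayscale values purely for illustration; a real implementation would operate on the rendered framebuffer.

```python
# Sketch of a pixel filter: mark pixels that changed between two frames,
# restricted to a region of interest (criteria (i) and (ii) above), by
# writing a highlight value into a copy of the current frame.

def highlight_changes(prev, cur, roi, mark=255):
    x0, y0, x1, y1 = roi
    out = [row[:] for row in cur]       # do not mutate the captured frame
    for y in range(y0, y1):
        for x in range(x0, x1):
            if prev[y][x] != cur[y][x]:
                out[y][x] = mark        # make the change salient to the user
    return out

prev = [[0, 0, 0], [0, 0, 0]]
cur  = [[0, 7, 0], [0, 0, 9]]
out = highlight_changes(prev, cur, (0, 0, 3, 1))   # ROI covers top row only
assert out == [[0, 255, 0], [0, 0, 9]]             # change outside ROI kept
```

The color-range restriction mentioned above would simply add one more predicate inside the inner loop.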
  • Fig. 9 illustrates a sequence of activities that may be used in the example embodiment for 'saying' a string on the remote terminal 111.
  • the audio facility of the host computer 101 may be used to play the string as indicated at reference numeral 904.
  • the translation of the string into an audio signal may be implemented on the remote terminal 111 using the remote terminal audio facility.
  • Fig. 10 is an example of a screenshot rendered without use of the geometry filter 601 of Fig. 6.
  • Fig. 11 is an example of a screenshot rendered using the geometry filter 601 of Fig. 6.
  • the geometry command that is involved in rendering object 1001 has been detected, scaled and translated e.g. as in actions (ii) and (iii) described above with reference to Fig. 7.
  • the text in object 1001 is not large enough to be readable after having been rescaled; therefore, the object may be translated upward to an area which may be less crucial to the user's interaction with the game, such that the object 1001 can be presented at a size large enough to maintain readability of the text.
  • Fig. 12 is an example of a screenshot with text in the lower right corner (object 1201) which is too small to see, due to resizing to adapt to a new and smaller display screen.
  • the text says: "In application message text is displayed on the screen".
  • Figs. 13A - 13B may be screenshots similar to the screenshot of Fig. 12 except that text filter 602 of Fig. 6 has been applied to draw the above text in a ticker. The Ticker display action is best appreciated by comparing these two figures.
  • the text in the upper right corner is unreadable as a result of the screen downscale
  • Figs. 13A - 13B illustrate the result image after applying a text filter that replaces the text rendering object with a "display in ticker" action.
  • the text in the ticker is horizontally scrolled to the left and is in a large enough font to be readable; this is possible because only a portion of the text is fitted into the display screen at any one time.
  • the ticker may or may not be of the same dimensions as the original text box; if it is not, human input may be used to verify that the area occupied by the ticker can be occluded without impairing the user's interaction with the application.
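The ticker behaviour described above, fitting only a portion of the text into the display at any one time and scrolling it horizontally to the left, can be sketched as follows; the window width and the one-character-per-frame scroll step are illustrative assumptions:

```python
# Minimal sketch of 'display in ticker': only a window of the full string
# fits the small screen at any moment, and the window advances leftward.

def ticker_frames(text, window):
    """Yield successive views of `text` as it scrolls one character left."""
    padded = text + " " * window          # let the tail scroll fully out
    for start in range(len(text) + 1):
        yield padded[start:start + window]

frames = list(ticker_frames("In application message", 8))
print(frames[0])   # → 'In appli'
print(frames[3])   # → 'applicat'
```

In a real embodiment each frame would be rendered in a font large enough to remain readable, which is the point of the ticker replacement.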
  • the system of Fig. 1a may also handle or process user inputs from the remote terminal.
  • Specific handling of various user inputs may be defined in a configuration file stored on the server and delivered to the client software. Although the exact format of the configuration file may change between embodiments, the example embodiment described herein may use XML format to store key mapping data.
  • Input handling may include "device level" input handling (e.g. translation of a user input into an application command) and "application level" input handling (translation of the application command into a host input). These two levels of input handling are illustrated in Fig. 16.
  • Block 1602 translates the key to a correct key map based on one or more suitable criteria, some of which may be device dependent, and hence exemplary of device level input handling.
  • Block 1605 sends the application command to the server, and hence is an example of application level input handling. It may be defined that no user intervention is provided during the process of input handling, or alternatively it may be defined that the user is prompted for input, such as, but not limited to, the operation of block 1603, "Switch to new key map", where the user may be prompted with a list of key maps to choose from.
  • the key-mapping data may be saved in part or in its entirety on the server side and/or the client side.
  • Server commands are actions or sequences of actions which are performed on the server side, such as but not limited to the following actions: move the mouse on the server, emulate a keyboard press and release, zoom in on a certain region of the screen.
  • Client local commands are commands that run locally on the client device and do not actively run on the server, such as but not limited to the following commands: show the client system menu, and exit the client.
  • Application commands may be defined for a sequence of server and client commands. Such commands may assist in creating a level of separation between a sequence of actions to be performed and the device-specific assignment of this sequence to a specific input event. For example, if in a certain computer program the user normally presses ctrl+alt+z to zoom in on the screen, then the creator of the XML customization file for that program may define an application command called "zoom-app-command" which emulates the above action sequence. Later on in the file, while describing the specific configuration to, for example, a mobile device of type XX, the creator may assign the "zoom-app-command" to a "key X pressed” input event. In the description of the configuration to mobile device of type YY the creator may assign the same application command to the "key Y pressed" input event without having to redefine the action sequence.
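The indirection described in this example, defining the action sequence once and binding it per device, can be sketched as follows; the command and device names follow the example above, but the data layout is an assumption rather than the XML customization format itself:

```python
# Sketch of the two-level separation: an application command
# ("zoom-app-command") is defined once as an action sequence, then each
# device profile binds its own input event to that command by name.

app_commands = {
    "zoom-app-command": ["press ctrl", "press alt", "press z",
                         "release z", "release alt", "release ctrl"],
}

device_key_maps = {
    "device-XX": {"key X pressed": "zoom-app-command"},
    "device-YY": {"key Y pressed": "zoom-app-command"},
}

def handle_input(device, event):
    """Resolve a device input event to its action sequence (may be empty)."""
    command = device_key_maps[device].get(event)
    return app_commands.get(command, [])

print(handle_input("device-YY", "key Y pressed")[0])   # → 'press ctrl'
```

The benefit shown is the one stated above: device YY reuses the same action sequence without redefining it.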
  • an input handling event may trigger the handling of server commands or application commands, performed on the server, or of local client commands.
  • Server commands are commands performed solely on the server side such as a "Zoom in” command which sets the server image capture area to a sub-region of the entire rendered image.
  • An example of a command which is not a server command may be "move mouse”.
  • "move mouse" may be performed on the client and on the server: the server moves the mouse in the intercepted application and the client moves the mouse cursor it is drawing on the image received from the server.
  • the "Zoom in" command sets the server to capture a sub-region of the rendered image, whereas the client does nothing dedicated to this task.
  • an event may be sent to block 113 in the client program 111 using the underlying operating system of the remote terminal.
  • the input event generated by the user might be any or all of the following: key press, mouse move, mouse press, touch screen press, device rotation, voice command.
  • the event may then be translated into a command in block 1602.
  • Block 1602 may use multiple translation tables termed herein 'key maps'. At any given time, there may be one, typically only one, key map which is active and used for the actual translation.
  • the command may then be dispatched to one of the processing blocks 1603 - 1606 e.g. based on the command type described below:
  • a Switch Map command may be used to select a new key mapping table.
  • a selection of a new key map might be requested upon user input that switches the application into a new state.
  • the SendKey command is the basic key input injection into the application. As illustrated in Fig. 16, block 1604 sends a message to the server program 103. The message is received in block 106 for further handling.
  • Module 1607 which may be provided within Block 106 uses an underlying operating system mechanism to inject a key event into the application. In the example embodiment a Microsoft Windows SendInput command is used. In an alternative embodiment the SendMessage command can be used to send a message directly to the application.
  • An "Application command” typically comprises a request for specific processing on the host computer 101 side. Examples of such a command include but are not limited to any of the following: (i) Select a specific screen area (Region of Interest); (ii) select new screen scaling factor; (iii) Pan the screen; and (iv) Move the cursor to specific screen location.
  • Block 1605 sends a message that may be received by module 1608 and executed. It is appreciated, as shown at block 1608, that generally, key presses may be mapped into App commands, e.g. with filter commands, and filters may be used to switch key maps. For example, a filter may detect a switch to a new mode in an application, e.g. game, which results in switching to a new key map.
  • a fourth type of command is "cursor move".
  • the "Cursor move” command typically moves the local cursor and sends a cursor move event to the host computer 101 which then injects it as an event to the application, similar to SendKey processing.
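The dispatch of translated commands to blocks 1603 - 1606 can be sketched as follows; only the routing by command type follows the description above, while the handler bodies and data layout are illustrative placeholders:

```python
# Hedged sketch of Fig. 16: an input event is translated through the active
# key map into a command, which is routed by type to one of four handlers
# (switch key map, send key, application command, cursor move).

class InputDispatcher:
    def __init__(self, key_maps, active):
        self.key_maps = key_maps        # multiple translation tables
        self.active = active            # one key map active at a time

    def dispatch(self, event):
        cmd = self.key_maps[self.active].get(event)
        if cmd is None:
            return "ignored"
        kind = cmd["type"]
        if kind == "switch_map":                       # block 1603
            self.active = cmd["target"]
            return f"key map -> {self.active}"
        if kind == "send_key":                         # block 1604
            return f"send key {cmd['key']} to server"
        if kind == "app_command":                      # block 1605
            return f"app command {cmd['name']} to server"
        if kind == "cursor_move":                      # block 1606
            return "move local cursor + notify server"
        return "unknown"

maps = {
    "default": {"key 1": {"type": "switch_map", "target": "battle"},
                "key 2": {"type": "send_key", "key": "W"}},
    "battle":  {"key 2": {"type": "app_command", "name": "zoom"}},
}
d = InputDispatcher(maps, "default")
print(d.dispatch("key 2"))   # → 'send key W to server'
print(d.dispatch("key 1"))   # → 'key map -> battle'
print(d.dispatch("key 2"))   # → 'app command zoom to server'
```

Note how the same physical key ("key 2") produces different commands once the active key map has been switched, as in the mode-switch filter example above.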
  • the latency from user event to feedback may be minimal, i.e. the feedback may appear immediate to the user.
  • Fig. 14 is an example of a screenshot rendered without use of the pixel filter 603 of Fig. 6.
  • Fig. 15 is an example of a screenshot rendered using the pixel filter 603 of Fig. 6, where the pixel filter is constructed and operative to perform 'highlight' (marked 1501).
  • some of the adaptations directed at handling input events include use of tilt-sensitive hardware which may be found on the client device.
  • tilt sensitive hardware examples include the Nokia N95 cellular phone whose full specification is in the public domain and is available e.g. at the following http www link: forum.nokia.com/devices/N95 and the Apple iPhone whose full specification is in the public domain and is available e.g. at the following http www link: apple.com/iphone/specs.html.
  • a game can run on a PC which acts as a server and the display and user inputs may occur through a mobile device (client).
  • the system of the present invention typically allows a meaningful experience on the mobile client, even though the game application was written for the PC.
  • the PC may intercept certain game instructions e.g. relating to visual or audio presentation to the user and may automatically adapt the instructions. For example, the size of a bubble may be increased, the dialog box may be zoomed, and/or text may be converted to voice so that it is spoken rather than displayed to the user.
  • Adaptation may be based on the keyboard type provided on the remote e.g. mobile device. For example, if not enough keys are provided on the mobile device, relative to the number of keys assumed by the application, but the mobile client has a touch screen, soft keys may be added on the screen. Adaptation may also be based generally on whether or not the client has a touch screen. For example, if a touch screen exists, a mouse may be added to the touch screen; if no touch screen exists, mouse input may not be allowed. Adaptation may be based on network connections. For example, if a network connection is good, more information can be sent, and/or part of the application may be allowed to run on the client side. Geometric operations such as translation, rotation, and scaling may be performed by a simple operator. Scaling typically involves scaling only a portion of the data on the display rather than the entirety of that data.
  • Adaptation may be based on context. For example, if a client is known to be in a noisy environment, text can be converted to voice and read rather than being displayed on a screen. Also, if there are not enough keys on the remote terminal, voice commands may be used for input. Typically, in game applications, the server knows, based on graphic instruction interception, where the user is in the game and therefore knows the limited vocabulary that the user can input, thereby facilitating interpretation of the voice commands.
  • the scope of the invention includes methods performed by a server including some or all of the following steps: a. receiving client capabilities b. deciding on adaptation(s) to be effected based on client capabilities c. activating component(s) for decided upon adaptation(s) and/or for a particular database with rules for decided upon adaptation(s) d. Intercepting an instruction to be adapted e. Adapting the intercepted instruction according to rules corresponding to decided upon adaptation(s) f. Optionally, intercepting user command which would affect the adaptation(s) which are to be effected and redoing step c; and g. optionally, iterating to step d.
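Steps (a) through (e) above can be sketched as a simple server-side flow; the capability thresholds and the rule mapping client capabilities to adaptations are illustrative assumptions, not values from the specification:

```python
# Sketch of the server method: receive client capabilities (a), decide on
# adaptations (b, c), then intercept and adapt instructions (d, e).

def choose_adaptations(capabilities):
    """Step (b): decide which adaptations to effect for this client."""
    adaptations = []
    if capabilities.get("screen_width", 0) < 480:   # assumed threshold
        adaptations.append("enlarge_text")
    if capabilities.get("noisy_environment"):
        adaptations.append("text_to_speech")
    return adaptations

def adapt(instruction, adaptations):
    """Step (e): adapt an intercepted instruction per the active rules."""
    if instruction["kind"] == "draw_text" and "enlarge_text" in adaptations:
        instruction = dict(instruction, size=instruction["size"] * 2)
    return instruction

caps = {"screen_width": 320, "noisy_environment": False}
active = choose_adaptations(caps)                        # steps (a)-(c)
out = adapt({"kind": "draw_text", "size": 9}, active)    # steps (d)-(e)
print(out["size"])   # → 18
```

Steps (f) and (g) would wrap this in a loop, re-running the decision step when a user command changes the adaptations in force.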
  • Operation of the filters shown and described herein may be determined by a set of rules that are input to the apparatus, which rules are also termed herein 'descriptors' and may be provided in blocks 505-507 shown herein.
  • the run-time application of these rules may be carried out by any or all of the Geometry filters, Text filters and Pixel filters shown and described herein.
  • client software may handle audio commands. These, like other user input events, may be translated into application commands and sent to the server which translates them into application domain events and injects them into the application.
  • server system 101 and interfacing system 102 may communicate using UDP as a communication protocol.
  • the apparatus of the present invention optionally identifies text, determines whether it might be insufficiently noticeable once "translated" from a first output device to a second typically smaller output device, and if so, highlights the text as "translated" for the second output device to make it more noticeable, e.g. as shown herein in Figs. 14 - 15.
  • Each of these steps may be performed entirely by the computerized apparatus or in a partially human-guided manner.
  • Certain embodiments of the object transformation methods and apparatus shown and described herein are particularly suitable for situations in which the source code of the software application for which an effect is to be achieved, as described herein, is not available, and/or it is impossible to modify the application input and/or it is impossible to modify the application's configuration parameters to achieve the desired effect.
  • text and geometry objects are identified out of 'display' API calls that are made.
  • system may be a suitably programmed computer.
  • some embodiments of the invention contemplate a computer program being readable by a computer for executing the method of the invention.
  • Some embodiments of the invention further contemplate a machine-readable memory tangibly embodying a program of instructions executable by the machine for executing methods of the invention.
  • software components of the present invention including programs and data may, if desired, be implemented in ROM (read only memory) form including CD-ROMs, EPROMs and EEPROMs, or may be stored in any other suitable computer-readable medium such as but not limited to disks of various kinds, cards of various kinds and RAMs.
  • Components described herein as software may, alternatively, be implemented wholly or partly in hardware, if desired, using conventional techniques.
  • a computer program product comprising a computer usable medium having a computer readable program code embodied therein, the computer readable program code being adapted to be executed to implement one, some or all of the methods shown and described herein. It is appreciated that any or all of the computational steps shown and described herein may be computer-implemented.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Graphics (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Image Generation (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

A system for adapting objects generated by programs and having output characteristics to run on each of a plurality of terminals each including a different output device and input capabilities, the system comprising a terminal data repository operative to accept information regarding at least one characteristic of the output device of each of the plurality of terminals, and a graphics object modifier operative to modify at least one output characteristic of a graphics object outbound to an individual output device according to at least one characteristic of the individual output device.

Description

SYSTEM AND METHODS FOR ADAPTING APPLICATIONS TO INCOMPATIBLE OUTPUT DEVICES
REFERENCE TO CO-PENDING APPLICATIONS
Priority is claimed from U.S. Provisional Patent Application No. 61/193,629, entitled "System and methods for adapting applications to incompatible output devices" and filed 11 December 2008.
FIELD OF THE INVENTION
This invention generally relates to input devices and output devices for computer software and more specifically to mobile communication devices having small screens and limited power.
BACKGROUND OF THE INVENTION
Wireless Internet, which utilizes small screens of mobile communication devices instead of full-size personal computer screens, is known.
Text-to-speech technology is known.
Terminals such as mobile communication devices which sense their own tilt relative to a fixed frame of reference such as the earth, are known.
The state of the art includes inter alia the systems and technologies described in the following publications and patent documents: US 2002/0178302 to Tracey; US 2004/148221 to Chu; US 2007/0129990 to Tzruya et al; US 2007/0126749 to Tzruya et al; WO 00/29964 (PCT/IL00/00612) to Lingocom Ltd.; WO 2007/063422A3 (PCT/IB2006/003968) to Exent Technologies Ltd.; WO 2007/066329A2 (PCT/IL2006/001398) to Exent Technologies Ltd.; WO0748233A3; WO0820313A3 (PCT/IB2007/003000) to Exent Technologies Ltd.; WO0820317A3 (PCT/IB2007/003066) to Exent Technologies Ltd.
The disclosures of all publications and patent documents mentioned in the specification, and of the publications and patent documents cited therein directly or indirectly, are hereby incorporated by reference.
SUMMARY OF THE INVENTION
There are many reasons that one may desire to run an interactive application in a setup where the application runs on one computer device (host computer) but the display and the interaction with the application are done from another, typically mobile, computer device (Terminal), typically connected using a data channel such as a fixed or wireless network. Among these reasons are the Terminal's limited resources (CPU power, memory, graphics capabilities, battery power etc.), incompatibility of the application with the Terminal device, or that the data required to run the application is not available on the Terminal or, for data security reasons, is not intended to be distributed. Indeed, there are many systems that enable the user to connect to a computer using a remote computer and run applications on the host computer. Examples of such systems are the X11 Window System, RealVNC as described at realvnc.com, Microsoft's Terminal Services, and many others.
However, there are cases where the Terminal that is used to interact with the application has different characteristics and capabilities than those that the application was designed for. Such differences include differences in display size, display resolution, and the type and availability of a keyboard, mouse device, touch screen, tablet, etc. Likewise, there are cases where the Terminal that is used to interact with the application is used in a very different work environment (outside noise, direct sunlight, etc.). Certain embodiments of the present invention provide techniques to address these differences and modify the application behavior during run-time.
There are several possible methods for adapting the application behavior to a new terminal type. One method includes modifying the application source code. With this approach, the application developers rewrite their application to conform to each new terminal type. Typically, though, this approach is tedious, costly, demands great effort and requires the availability of, and access to, the original code. In many cases, it is desired to modify the application without needing the source code. This type of modification can be done using a method called API Interception. In general, API Interception refers to a method where an application's API functions are replaced with modified versions of those functions. As a result, the application's behavior is modified and new functionality is added. API Interception is a technique known in the art and may be implemented using any suitable known methodology. One possible method for implementing the API Interception technique is described on the World Wide Web at the following http location: internals.com/articles/apispy/apispy.htm. According to certain embodiments of the present invention, an API Interception method is used to modify application behavior and adapt it to remote terminal capabilities.
PC applications in general and games in particular, if targeted to run on mobile devices, may need to be specially re-developed or ported to support the small screen size and limited processing, memory size and/or rendering capabilities of the mobile device.
Most applications initially developed for a PC are not able to run on a mobile device, considering the amount of processing power required, which translates into processor size and power consumption not available to mobile devices.
Furthermore, there are numerous mobile device types which differ in many of the above capabilities (and use different operating systems and EMEI characteristics), making the task of producing a mobile version, if at all possible, very tedious, costly and time consuming.
Additional complexity may be introduced through use of advanced graphics and media conversion (text to voice, voice commands) if it is desired to adapt the application to offer good user experience on a small device. Implementing such adaptations is not feasible on remote devices as they require extensive computational and graphic power.
As a result, it is clear that in order to facilitate a worthy gaming (or, more generally, application) experience for high-end PC/console games on mobile devices, one has to enable the application to continue to run on a remote, powerful PC (server) and to adapt the application and add specific functionality such that the user is able to run the application from the remote terminal.
An improved functionality for running the application from a remote terminal can be achieved by redesigning and coding the application, yet it would be of great value and significant advantage to provide a method which does not require modification of the application source code, nor of its binary image.
A particular feature of certain embodiments of the invention shown and described herein is a functionality of changing or adapting the behavior and/or look and feel of an application without recoding either source or binary code, using API Interception techniques. Certain embodiments of the present invention use API Interception techniques that enable tracking of the API calls made by an application and based on predefined rules or algorithms, may modify them if and as appropriate, on the fly, while the original application runs on the PC.
A particular feature of certain embodiments of the present invention is the ability to implement the desired adaptation of the application to the new device and use case by adding a client and server SW layer, together intercepting the relevant API calls made by the application and applying new rules which affect their execution and how they would look on the remote device.
Certain embodiments of the present invention provide a system, methods and apparatus for running visual applications from a remote terminal that is connected to a host computer over a network.
In particular, certain embodiments of the present invention comprise methods of manipulating the application's visual, text objects and imagery in general, such that it is useful on a remote terminal that has different characteristics than the display/input (in general, terminal) that the application was originally designed for.
Certain embodiments of the present invention pertain to an application, such as but not limited to a game, written for a particular device and context. Without changing the application code, when the application runs on the particular device, output instructions which may for example affect visual and/or audio output are intercepted and adapted for the capabilities of a different device and/or context. For example, if the application was written for a PC but the actual output is on a mobile phone with a smaller display, the displayed font may be dynamically increased relative to the scaling factor between the PC and the smaller display (e.g. the font size is determined after a specific client with a specific screen size has been connected and/or is determined responsive to a scaling factor change during a lasting connection, such as a client's screen which changes from portrait to landscape). As another example, if there are fewer keys on the output device than used by the original application and the output device includes a touch screen, soft keys may be dynamically created and displayed on the screen.
Typically, dynamic operations shown and described herein take place only after a specific client with specific screen size has been connected to the system of the present invention. It is appreciated that a scaling factor may change during a lasting connection e.g. if a client's screen's orientation changes from portrait to landscape or vice versa.
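The dynamic font scaling mentioned above, recomputed when a specific client connects or when its screen orientation (and hence the scaling factor) changes, might be sketched as follows; the readability threshold and parameter names are assumptions:

```python
# Sketch of font adaptation: scale the font by the ratio between the display
# the program was designed for and the actual device display, then clamp to
# a minimum readable size (threshold value assumed for illustration).

def adapted_font_size(original_pt, design_height, device_height, min_pt=9):
    """Return the font size to use on a device of `device_height` lines."""
    scaled = original_pt * device_height / design_height
    return max(scaled, min_pt)

# PC application designed for a 1080-line display, shown on a 320-line phone:
print(adapted_font_size(12, 1080, 320))   # → 9 (12*320/1080 ≈ 3.6, clamped)
```

On an orientation change from portrait to landscape, `device_height` would change and the size would simply be recomputed.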
There is thus provided, in accordance with at least one embodiment of the present invention, a method for generating a first text display for a first display device, the first text display representing a second text display generated by a program for a second display device, the method comprising identifying a subset of text objects, each associated with a text string and being unsuitable for display on the first display device, from among all text objects in the second text display; and generating a first text display which differs from the second text display in that at least one text object of the subset of text objects is omitted and at least a portion of the text string associated therewith is presented orally.
Also provided, in accordance with at least one embodiment of the present invention, is a method for generating a first text display for a first display device, the first text display representing a second text display generated by a program for a second display device, the method comprising identifying a subset of text objects, each associated with a text string and being unsuitable for display on the first display device, from among all text objects in the second text display; and generating a first text display which differs from the second text display in that at least a portion of the text string of at least one text object of the subset of text objects is displayed piecemeal.
Further in accordance with at least one embodiment of the present invention, at least a portion of the text string of at least one text object of the subset of text objects is displayed in ticker format.
Still further in accordance with at least one embodiment of the present invention, the program has a source code and the identifying is performed without recourse to the source code.
Additionally in accordance with at least one embodiment of the present invention, the identifying proceeds at least partly on a basis of identifying text objects including characters which, when displayed on the first text display, are smaller than a predetermined threshold value.
Also provided, in accordance with at least one embodiment of the present invention, is a method for generating a first display for a first display device, the first display representing a second display generated by a program for a second display device and including a cursor, the method comprising determining whether the cursor is unsuitable for display on the first display device; and if the cursor is unsuitable, generating a first display which differs from the second display in that the cursor is omitted and replaced by a cursor suitable for display on the first display device.
Further in accordance with at least one embodiment of the present invention, the first display device is housed on a remote terminal.
Still further in accordance with at least one embodiment of the present invention, the method also comprises accepting a human input defining the subset to include only text objects deemed by a human to be important to the application.
Additionally in accordance with at least one embodiment of the present invention, the human input defines the text objects deemed important in terms of at least one of the following text object characteristics: String content, location of the text object within the second text display, and color.
Also provided, in accordance with at least one embodiment of the present invention, is a method for generating a first text display for a first display device fixedly associated with a first input device, the first text display representing a second text display generated by a program for a second display device, the method comprising determining if the orientations of the first and second display devices are one of the following: both landscape; and both portrait; and if not, mapping directional input functions into the first input device so as to enable the first display device fixedly associated therewith to be held and used rotated 90 degrees to the orientation of the second display device.
Further in accordance with at least one embodiment of the present invention, the first input device, when rotated 90 degrees to the orientation of the second display device, includes at least two input modules having at least two of the following relative orientations: left, right, top and bottom; and the mapping comprises mapping at least two of the following input options: go left, go right, go up and go down, into the at least two input modules respectively.
Still further in accordance with at least one embodiment of the present invention, the first display device comprises a keyboard and each of the input modules comprises a key in the keyboard.
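The directional remapping in the preceding claims, allowing the terminal to be held and used rotated 90 degrees relative to the orientation the program was designed for, can be sketched as follows; the direction of rotation and the key names are assumptions for illustration:

```python
# Sketch of mapping directional input functions for a device held rotated
# 90 degrees clockwise: each logical direction is rebound so that the input
# module physically on the left still means "go left", and so on.

ROTATE_90_CW = {"go up": "go right", "go right": "go down",
                "go down": "go left", "go left": "go up"}

def remap_direction(command, rotated):
    """Translate a directional command when the terminal is held rotated."""
    return ROTATE_90_CW[command] if rotated else command

print(remap_direction("go up", rotated=True))    # → 'go right'
print(remap_direction("go up", rotated=False))   # → 'go up'
```

For a keyboard, each of the four directions would be bound to a key whose physical position matches the remapped direction, per the claim above.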
Also in accordance with at least one embodiment of the present invention, the method also comprises providing a display device database storing at least one display characteristic of each of a plurality of display devices. Further in accordance with at least one embodiment of the present invention, the program comprises a game.
Also provided, in accordance with at least one embodiment of the present invention, is a method for running a program written for a first input device having a first plurality of states and associated with a first display device on a terminal having a second input device having a second plurality of states and associated with a second display device, the method comprising generating a display, for the second display device, which associates at least one input option with at least one of the second plurality of states.
Further in accordance with at least one embodiment of the present invention, the text objects being unsuitable for display comprise objects which, when re-sized proportionally to relative dimensions of the first and second text displays, are unsuitable for viewing on the first text display.
Still further in accordance with at least one embodiment of the present invention, the cursor unsuitable for display comprises a cursor which, when re-sized proportionally to relative dimensions of the first and second display devices, is unsuitable for viewing on the first display device.
Additionally provided, in accordance with at least one embodiment of the present invention, is a system for adapting objects generated by programs and having output characteristics to run on each of a plurality of terminals each including a different output device, the system comprising a terminal data repository operative to accept information regarding at least one characteristic of the output device of each of the plurality of terminals; and a graphics object modifier operative to modify at least one output characteristic of a graphics object outbound to an individual output device according to the at least one characteristic of the individual output device.
Further in accordance with at least one embodiment of the present invention, the graphics object modifier is operative to perform a global modification on at least most objects generated by an individual program outbound for an individual terminal; and to perform local modifications on at least one object generated by the individual program which, having undergone the global modification, becomes unsuitable for display on the output device of the individual terminal.
Still further in accordance with at least one embodiment of the present invention, at least one of the terminals also includes an input device. Further in accordance with at least one embodiment of the present invention, at least one of the output devices comprises a visual display device.
Still further in accordance with at least one embodiment of the present invention, the modifier is operative to perform at least one of the following operations on at least a portion of at least one object: translation, rotation, scaling, occluding.
Additionally in accordance with at least one embodiment of the present invention, the modifier is operative to modify at least one of the color, texture, brightness and contrast of at least a portion of at least one object.
Further in accordance with at least one embodiment of the present invention, the characteristic of the output device includes an indication of whether the output device is intended for use outside or inside and the graphics object modifier is operative to modify at least one of at least one graphic object's brightness and contrast accordingly.
Yet further provided, in accordance with at least one embodiment of the present invention, is a method for modifying a program for display on a first display device, wherein the program generates a plurality of display screens suitable for display on a second display device which differs from the first display device, the method comprising, for at least one display screen, identifying first and second portions of the display screen which can be rendered semi-transparently and superimposed onto one another; rendering the first and second portions of the display screen semi-transparently; and superimposing the first and second portions of the display screen onto one another.
Still further provided, in accordance with at least one embodiment of the present invention, is a system for adapting a multi-mode program to run on a terminal including an output device and an input device capable of generating a first set of input events, the program being operative to branch responsive to occurrences of input events from among a second set of pre-defined input events, the system comprising an input event mapper operative to receive an event from the first set of input events and to generate, responsively, at least a simulation of an event from the second set of input events, thereby to cause the program to branch, wherein the event from the second set of input events generated at least in simulation by the input event mapper responsive to receiving an event from the first set of input events depends at least partly on the mode in which the program is operating.
It is appreciated that the above system is particularly suitable when the number of input events that a terminal is capable of generating is smaller than the number of input events that a multi-mode game or other multi-mode application is capable of understanding. In this case, according to certain embodiments of the present invention, the mapping of outgoing terminal events to incoming game events may be performed differently within each of the modes of the game or application. For example, if the game has 3 modes I, II and III which accept 2, 3 and 4 different input events respectively, and the terminal is capable of generating only four input events A, B, C and D, then A and B may be mapped to the 2 input events of Mode I respectively if the game is in Mode I. Input events C and D may be regarded as non-events if the game is in Mode I. However, if the game is in mode II, A and B and C may be mapped to the 3 input events of Mode II respectively. Input event D may be regarded as a non-event if the game is in Mode II. If the game is in Mode III, A and B and C and D may be mapped to the 4 input events of Mode III respectively. The term "mapping" refers to generating a particular input event that the game or application is capable of understanding, responsive to production by the terminal of a certain one of the input events that the terminal is capable of generating.
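The mode-dependent mapping described in the example above may be sketched, purely for illustration, as a lookup from (mode, terminal event) to an application input event. The mode names and event identifiers below are hypothetical and are not part of any particular embodiment; terminal events with no mapping in the current mode are treated as non-events.

```python
# Illustrative sketch of mode-dependent input-event mapping.
# Terminal events A-D map to game events per mode; an event absent
# from the current mode's table is a non-event (None).

MODE_MAPS = {
    "I":   {"A": "game_event_1", "B": "game_event_2"},
    "II":  {"A": "game_event_1", "B": "game_event_2", "C": "game_event_3"},
    "III": {"A": "game_event_1", "B": "game_event_2",
            "C": "game_event_3", "D": "game_event_4"},
}

def map_event(mode: str, terminal_event: str):
    """Return the game event simulated for a terminal event, or None."""
    return MODE_MAPS.get(mode, {}).get(terminal_event)
```

In Mode I, terminal events C and D fall outside the table and are ignored, matching the non-event behavior described above.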
Further in accordance with at least one embodiment of the present invention, the program comprises at least one game, the first set of input events comprises a set of voice commands and the second set of input events comprises a set of application commands.
Still further in accordance with at least one embodiment of the present invention, the program comprises at least one game and the set of application commands comprises a set of game controls.
Also provided, in accordance with at least one embodiment of the present invention, is a system for adapting a program to run on a terminal including an output device and being capable of sensing its own tilt relative to a fixed frame of reference, the program being operative to branch responsive to occurrences of input events from among a set of pre-defined input events, the system comprising an input event mapper operative to receive a tilt value sensed by the terminal and to generate, responsively, at least a simulation of an event from the set of input events, thereby to cause the program to branch.
Further provided, in accordance with at least one embodiment of the present invention, is a system for generating a first text display for a first display device, the first text display representing a second text display generated by a program for a second display device, the system comprising a text object analyzer operative to identify a subset of text objects, each associated with a text string and being unsuitable for display on the first display device, from among all text objects in the second text display; and a text display modifier operative to generate a first text display which differs from the second text display in that at least one text object of the subset of text objects is omitted and at least a portion of the text string associated therewith is presented orally.
Yet further provided, in accordance with at least one embodiment of the present invention, is a system for generating a first text display for a first display device, the first text display representing a second text display generated by a program for a second display device, the system comprising a text object analyzer operative to identify a subset of text objects, each associated with a text string and being unsuitable for display on the first display device, from among all text objects in the second text display; and a text display modifier operative to generate a first text display which differs from the second text display in that at least a portion of the text string of at least one text object of the subset of text objects is displayed piecemeal.
Additionally provided, in accordance with at least one embodiment of the present invention, is a system for generating a first display for a first display device, the first display representing a second display generated by a program for a second display device and including a cursor, the system comprising a cursor analyzer operative to determine whether the cursor is unsuitable for display on the first display device; and a display modifier operative, if the cursor is unsuitable, to generate a first display which differs from the second display in that the cursor is replaced by a cursor suitable for display on the first display device.
Also provided, in accordance with at least one embodiment of the present invention, is a system for generating a first text display for a first display device fixedly associated with a first input device, the first text display representing a second text display generated by a program for a second display device, the system comprising a display device orientation analyzer operative to determine if the orientations of the first and second display devices are one of the following: both landscape; and both portrait; and a directional input function mapper operative, if not, to map directional input functions into the first input device so as to enable the first display device fixedly associated therewith to be held and used rotated 90 degrees to the orientation of the second display device.
Further provided, in accordance with at least one embodiment of the present invention, is a system for running a program written for a first input device having a first plurality of states and associated with a first display device on a terminal having a second input device having a second plurality of states and associated with a second display device, the system comprising an input option associator operative to generate a display, for the second display device, which associates at least one input option with at least one of the second plurality of states.
Additionally provided, in accordance with at least one embodiment of the present invention, is a method for adapting objects generated by programs and having output characteristics to run on each of a plurality of terminals each including a different output device, the method comprising accepting information regarding at least one characteristic of the output device of each of the plurality of terminals; and modifying at least one output characteristic of a graphics object outbound to an individual output device according to the at least one characteristic of the individual output device.
Also provided, in accordance with at least one embodiment of the present invention, is a system for modifying a program for display on a first display device, wherein the program generates a plurality of display screens suitable for display on a second display device which differs from the first display device, the system comprising a display screen area analyzer operative, for at least one display screen, to identify first and second portions of the display screen which can be rendered semi-transparently and superimposed onto one another; a rendering functionality operative to render the first and second portions of the display screen semi-transparently; and a superimposing functionality operative to superimpose the first and second portions of the display screen onto one another.
Additionally provided, in accordance with at least one embodiment of the present invention, is a method for adapting a multi-mode program to run on a terminal including an output device and an input device capable of generating a first set of input events, the program being operative to branch, responsive to occurrences of input events from among a second set of pre-defined input events, the method comprising receiving an event from the first set of input events and generating, responsively, at least a simulation of an event from the second set of input events, thereby to cause the program to branch, wherein the event from the second set of input events generated at least in simulation responsive to receiving an event from the first set of input events depends at least partly on the mode in which the program is operating.
Further provided, in accordance with at least one embodiment of the present invention, is a method for adapting a program to run on a terminal including an output device and being capable of sensing its own tilt relative to a fixed frame of reference, the program being operative to branch responsive to occurrences of input events from among a set of pre-defined input events, the method comprising receiving a tilt value sensed by the terminal and generating, responsively, at least a simulation of an event from the set of input events, thereby to cause the program to branch.
According to certain embodiments of the present invention, systems, methods and apparatus are provided that dynamically adapt an application running on a host computer and add functionality such that the user is able to run the application from a remote terminal connected to the host computer, e.g. via a communication network, via analog modems, or by any other suitable technology or scheme. The remote terminal may comprise a computing device that has means for display and optionally has user input receiving functionality, such as but not limited to a cellular phone, PDA, TV set top box (STB), TV set, or a desktop computer.
If the remote terminal display's attributes and/or capabilities, such as but not limited to size, resolution, number of colors, and orientation, differ from those of the display device that the application was originally designed for, the system may dynamically modify the display that is rendered by the application to match the remote terminal capabilities.
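By way of a non-limiting illustration, matching the application's display to the remote terminal's capabilities may involve computing a uniform scale factor that fits the original resolution onto the terminal's screen while preserving the aspect ratio. The following sketch and its resolutions are hypothetical, not part of any particular embodiment.

```python
# Illustrative sketch: largest uniform scale at which the application's
# original resolution (src_w x src_h) fits within the remote terminal's
# screen (dst_w x dst_h), preserving aspect ratio.

def fit_scale(src_w: int, src_h: int, dst_w: int, dst_h: int) -> float:
    """Return the uniform scale factor fitting the source into the target."""
    return min(dst_w / src_w, dst_h / src_h)
```

For example, an 800x600 application display shown on a hypothetical 320x240 terminal would be scaled by 0.4 in both dimensions.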
According to certain embodiments of the present invention, if the remote terminal's input device/s, such as a keyboard, touch screen, mouse, orientation sensor, or camera, differ from the input device that the application was originally designed for, either differing entirely or differing as to certain capabilities such as the number of keys on the keyboard or the type of mouse, the system dynamically and/or statically modifies the user's inputs to match application requirements. An example of a static modification is a change of key map effected by a user.
The functionality of running the application from a remote terminal preferably does not require modification to either the application's source code or its binary image. Instead, the system may use API interception techniques that enable it to track API calls made by the application and modify these as appropriate, e.g. as described below. Adaptations suitable for specific applications may be described and stored, e.g. in XML format or in a configuration file which may be stored and read on the server and/or the client. The configuration file may be built by editing a text file or by using automated and specific tools. Those adaptations may be expressed as a set of filters, also termed herein "object filters", typically including at least one of the following three types of filters:
Geometry filter - Applied to geometry rendered by the application. For example, a geometry filter may be used to intercept a certain "pop-up message box", or to intercept a certain graphic element which appears on the screen and to enlarge it so that it is seen better on the client's screen.
Text filter - Applied to text displayed by the application. For example, a text filter may be used to intercept a certain string and to present it as speech, e.g. via a suitable text-to-speech mechanism, or to display it as a ticker on the client's screen.
Pixel filter - Applied to an image rendered by the application. For example, a pixel filter may be used to highlight/mark a certain region of the client's screen which was modified, or to enhance the image's level of detail and/or sharpness.
An application can have any number, including 0, of each of the above 3 types of filters and these may be applied sequentially to the application's API calls. The specific object filters to be used by a particular application may be specified in the App specific section of the server configuration file.
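One possible, purely illustrative sketch of such sequential filter application follows. The filter structure, an "identification" predicate paired with an "action" applied in order to each intercepted call, mirrors the description herein, while all names and the dictionary-based call representation are invented for illustration.

```python
# Illustrative sketch: each object filter pairs an "identification"
# predicate with an "action"; the filters are applied sequentially to
# each intercepted API call, as described for the object filter chain.

class ObjectFilter:
    def __init__(self, matches, action):
        self.matches = matches    # identification functionality
        self.action = action      # action functionality

    def apply(self, call):
        return self.action(call) if self.matches(call) else call

def run_filters(call, filters):
    """Apply geometry, text and pixel filters, in order, to one call."""
    for f in filters:
        call = f.apply(call)
    return call

# Hypothetical geometry filter: double the size of a "popup" object so
# it is seen better on the client's screen.
enlarge_popup = ObjectFilter(
    matches=lambda c: c.get("kind") == "popup",
    action=lambda c: {**c, "scale": c.get("scale", 1.0) * 2.0},
)
```

A call that the identification predicate does not match passes through unchanged, so an application may register any number of filters of each type, including none.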
Also provided, in accordance with certain embodiments of the present invention, is a method for identifying and manipulating Graphics objects according to the remote terminal characteristics, where manipulating may for example comprise any of the following in isolation or in any combination: translation, rotation, scaling, occluding, or changing the color, texture or other appearance attributes of an object.
Further provided, in accordance with certain embodiments of the present invention, is a method for identifying text objects and converting them into an audio message that is played on a remote terminal.
Still further provided, in accordance with certain embodiments of the present invention, is a method for identifying text objects and displaying them in a dedicated ticker, or moving text box, on a remote terminal.
Additionally provided, in accordance with certain embodiments of the present invention, is a method for presenting multiple graphic objects at a single screen location using transparencies.
Also provided, in accordance with certain embodiments of the present invention, is a method to translate voice commands on the terminal into applications commands such as game controls.
Further provided, in accordance with certain embodiments of the present invention, is a method for translating an attitude or tilt of a remote terminal capable of sensing tilt, into game commands.
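The translation of a sensed tilt into a game command may be sketched, purely for illustration, as a thresholded mapping from a tilt angle to a directional command; the command names, angle convention and dead-zone value below are hypothetical.

```python
# Illustrative sketch: translate a sensed left/right tilt angle
# (degrees, hypothetical sign convention) into a simulated game
# command; tilts inside the dead zone are treated as non-events.

def tilt_to_command(tilt_degrees: float, dead_zone: float = 10.0):
    """Map a tilt value to a game command, or None inside the dead zone."""
    if tilt_degrees > dead_zone:
        return "STEER_RIGHT"
    if tilt_degrees < -dead_zone:
        return "STEER_LEFT"
    return None
```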
According to certain embodiments of the present invention, an external modification (external to an application's own source code or binary files) is made to an application which displays text on a screen. The modification may generally change, e.g. decrease, the size of an output screen generated by the application to fit a differently sized, e.g. smaller, screen, and process the image for adaptation to the smaller screen in terms of, for example, level of detail, sharpness, or color range. However, for a font which, if decreased, becomes hard to read, other solutions are found, such as but not limited to: oral presentation of the text, using conventional text-to-speech techniques; enlarging the font of only a portion of the text and omitting other portions, in which case the text object may stay the same size; presenting the text and another portion of the output screen superimposed on one another, wherein at least one of the superimposed portions is transparent; and presenting the text piecewise within a text object of the same size, e.g. using a ticker-type format in which text is displayed one letter or word at a time, at reading pace.
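The ticker-type piecemeal presentation may be sketched, purely for illustration, as splitting a long string into short fixed-size pieces that a same-sized text object can display in turn; the piece size and sample string below are hypothetical.

```python
# Illustrative sketch of ticker-style piecemeal display: a long string
# is yielded a few words at a time so a fixed-size text object can keep
# a readable font on the smaller screen.

def ticker_pieces(text: str, words_per_piece: int = 2):
    """Yield successive pieces of the text, words_per_piece words each."""
    words = text.split()
    for i in range(0, len(words), words_per_piece):
        yield " ".join(words[i:i + words_per_piece])
```

Each yielded piece would be drawn in the same text object, one after another at reading pace, instead of shrinking the whole string to fit.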
According to certain embodiments of the present invention, an application written for a first display device may be operative to render one or more objects and an external (out of source code) modification of the application is effected which generally diminishes the size of the objects, however objects with text or other detail that is deemed, either by human input or by a computerized criterion, to be unsuitable for display on another given display device, receive special treatment. Such an object might be diminished less in size, and optionally translated to another location on the screen, and/or rotated to another orientation, such that its relatively large size is less critical and does not obscure critical elements.
According to certain embodiments of the present invention, API calls generated by an application written for a source terminal including an output device and optionally an input device go through filters which adapt these calls to a target terminal which differs from the source terminal. Each such filter includes an "identification" functionality which determines whether a particular API call deserves special treatment and an "action" functionality which stipulates what that treatment is to be. It is appreciated that the application may be a multi-mode application in which case filters may treat objects rendered by the application differently as a function of which mode the application is in when these objects occur.
According to certain embodiments of the present invention, a terminal which has a small number of input keys or no keys is used to provide input to an application written for a terminal which has a larger number of input keys. If, for example, the terminal used to provide input can generate voice commands, these may be translated, typically externally of the source code of the application, into input events recognized by the application. For example, mouse input events may be translated into touch screen input events, or vice versa. According to certain embodiments of the present invention, the application has more than one mode, and the inputs generated by the terminal used to provide input are translated differently, depending on the mode the application is in. For example, the "4" key on a cellular telephone may be interpreted as a leftward arrow if the application is in a first mode and may be interpreted as an upward arrow or as a "yes" if the application is in a second mode.
Any suitable processor, display and input means may be used to process, display, store and accept information, including computer programs, in accordance with some or all of the teachings of the present invention, such as but not limited to a conventional personal computer processor, workstation or other programmable device or computer or electronic computing device, either general-purpose or specifically constructed, for processing; a display screen and/or printer and/or speaker for displaying; machine-readable memory such as optical disks, CD-ROMs, magneto-optical discs or other discs, RAMs, ROMs, EPROMs, EEPROMs, magnetic or optical or other cards, for storing; and a keyboard or mouse for accepting. The term "process" as used above is intended to include any type of computation or manipulation or transformation of data represented as physical, e.g. electronic, phenomena which may occur or reside e.g. within registers and/or memories of a computer. The above devices may communicate via any conventional wired or wireless digital communication means, e.g. via a wired or cellular telephone network or a computer network such as the Internet.
The apparatus of the present invention may include, according to certain embodiments of the invention, machine readable memory containing or otherwise storing a program of instructions which, when executed by the machine, implements some or all of the apparatus, methods, features and functionalities of the invention shown and described herein. Alternatively or in addition, the apparatus of the present invention may include, according to certain embodiments of the invention, a program as above which may be written in any conventional programming language, and optionally a machine for executing the program such as but not limited to a general purpose computer which may optionally be configured or activated in accordance with the teachings of the present invention.
The embodiments referred to above, and other embodiments, are described in detail in the next section.
Any trademark occurring in the text or drawings is the property of its owner and occurs herein merely to explain or illustrate one example of how an embodiment of the invention may be implemented.
Unless specifically stated otherwise, as apparent from the following discussions, it is appreciated that throughout the specification discussions, utilizing terms such as, "processing", "computing", "estimating", "selecting", "ranking", "grading", "calculating", "determining", "generating", "reassessing", "classifying", "producing", "stereo-matching", "registering", "detecting", "associating", "superimposing", "obtaining" or the like, refer to the action and/or processes of a computer or computing system, or processor or similar electronic computing device, that manipulate and/or transform data represented as physical, such as electronic, quantities within the computing system's registers and/or memories, into other data similarly represented as physical quantities within the computing system's memories, registers or other such information storage, transmission or display devices.
The following terms may be construed either in accordance with any definition thereof appearing in the prior art literature or in accordance with the specification, or as follows: The term "object" is used herein in the broadest sense of the word known in the art of programming to include, inter alia, a set of information sufficient to render a visual display of an item. The term "program" is used to include any set of commands which a processor can perform.
"Configuration file" is used to include output of an "edit stage" provided in accordance with certain embodiments of the present invention which determines which global modifications to perform on a program, and/or which local modifications to perform on which objects within the program, to enable the program to run on a different terminal.
"Soft button" is intended to include a display area on a touch screen which when touched, constitutes a particular input event.
The terms "call", "API call", "function", "command" are used herein generally interchangeably.
The term "piecemeal display" is intended to include any and all display modes in which information is displayed portion by portion instead of all at once.
BRIEF DESCRIPTION OF THE DRAWINGS
Certain embodiments of the present invention are illustrated in the following drawings:
Fig. 1a is a simplified block diagram illustration of a software application modification system constructed and operative in accordance with certain embodiments of the present invention.
Fig. 1b is a simplified block diagram illustration of the connection creation process using the assistance of the rendezvous server. The process of Fig. 1b comprises one possible implementation of the connection creation process indicated by arrow 115 of Fig. 1a.
Fig. 2 is a simplified flow diagram of a method for initializing the system of Fig. 1a.
Fig. 3 is a simplified flow diagram of a method for performing the "start application" step 206 of Fig. 2.
Fig. 4 is an example of API call redirection which may be effected by rewriting step 304 of Fig. 3.
Fig. 5 is a simplified block diagram illustration of an example of a suitable data structure for the shared memory 117 of Fig. 1a.
Fig. 6 is a simplified block diagram illustration of client adaptation block 109 of Fig. 1a, constructed and operative in accordance with certain embodiments of the present invention.
Fig. 7 is a simplified flowchart illustration of a method of operation for the Geometry Filter of Fig. 6.
Fig. 8a is a simplified flowchart illustration of a method of operation for the Text Filter of Fig. 6.
Fig. 8b is a simplified flowchart illustration of a method of operation for the pixel Filter of Fig. 6.
Fig. 9 is a simplified flowchart illustration of a "Say command" sequence performed by a client adaptation block 104 in Fig. 1a.
Fig. 10 is an example of a screenshot rendered without use of the geometry filter 601 of Fig. 6.
Fig. 11 is an example of a screenshot rendered using the geometry filter 601 of Fig. 6.
Fig. 12 is an example of a screenshot with text in the upper right corner.
Figs. 13A-13B are screenshots similar to the screenshot of Fig. 12, except that text filter 602 of Fig. 6 has been applied to draw the text in a ticker.
Fig. 14 is an example of a screenshot rendered without use of the pixel filter 603 of Fig. 6.
Fig. 15 is an example of a screenshot rendered using the pixel filter 603 of Fig. 6, where the pixel filter is constructed and operative to perform 'highlight'.
Fig. 16 is a simplified flowchart illustration of a method of operation for the user input handling module 113 of Fig. 1a.
Fig. 17 is a simplified flowchart illustration of a key map loading process which may be performed during phase 206 of Fig. 2.
Fig. 18 is a simplified flowchart illustration of a method for performing step 1602 of Fig. 16, including translation of a client key to a command.
DETAILED DESCRIPTION OF CERTAIN EMBODIMENTS
Described herein are embodiments for adaptation of a software application running on a particular system ("host system") based on the characteristics of a system which a user uses to interface with the application ("interfacing system") and/or based on the current environment, without changing the source code or binary image. It is assumed that the software application was developed to conform with characteristics of the host system and/or developed with a specific environment in mind. Embodiments of the current invention cause the software application to conform even if characteristics of the interfacing system differ from characteristics of the host system, and/or even if the current environment differs from the specific environment envisioned during application development.
Depending on the embodiment, the host system on which the software application runs may be a user device or may be capable of servicing more than one user at a time. Depending on the embodiment, the interfacing system may be a user device or may be capable of servicing more than one user at a time. The interfacing system provides user output (for example via a display or speaker) and optionally receives user input (for example via one or more of a keyboard/pad, touch screen, mouse, orientation sensor, camera, or microphone). Examples of user devices which may be used as host systems and/or as interfacing systems include but are not limited to cellular telephones, desktop computers, laptop computers, game consoles (e.g. Playstation®, Nintendo Wii™, Xbox®), TV set top boxes, TV sets, personal digital assistants (PDAs), and wireless handheld devices (e.g. BlackBerry). Depending on the embodiment, the software application may be any suitable application. Although any suitable interfacing system, software application, and host system may be employed, for the purposes of example and clarification, the specification describes, in addition to the general case, a particular embodiment in which a user uses a cellular telephone to interface with a game which runs on a desktop or laptop computer.
As used herein, the phrases "for example", "such as" and variants thereof describe non-limiting embodiments of the present invention.
Reference in the specification to "one embodiment", "an embodiment", "some embodiments", "another embodiment", "other embodiments", "various embodiments", or variations thereof means that a particular feature, structure or characteristic described in connection with the embodiment(s) is included in at least one embodiment of the invention. Thus the appearances of the phrases "one embodiment", "an embodiment", "some embodiments", "another embodiment", "other embodiments", "various embodiments", or variations thereof do not necessarily refer to the same embodiment(s).
The method(s)/algorithms/process(es) or module(s) (or counterpart terms specified above) presented in some embodiments herein are not inherently related to any particular electronic system or other apparatus, unless specifically stated otherwise. Various general purpose systems may be used with programs in accordance with the teachings herein, or it may prove convenient to construct a more specialized apparatus to perform the desired method. The desired structure for a variety of these systems will be apparent from the description below. In addition, embodiments of the present invention are not described with reference to any particular programming language. It will be appreciated that a variety of programming languages may be used to implement the teachings of the inventions as described herein.
The principles and operation of methods and systems that enable running of interactive applications using a remote terminal according to certain embodiments of the present invention may be better understood with reference to the drawings and the accompanying description.
It is sometimes desirable that a user be able to interface via a particular system ("interfacing system") with a software application running on another system ("host system"). The interfacing system and host system may be distinct from one another and may be coupled for example by a fixed (wired) or wireless connection. Software programs which allow a user to interact via a particular computer desktop with a software application running on another computer desktop include the X11 Window System and RealVNC, the latter distributed at the following World Wide Web location: realvnc.com.
However, it is sometimes the case that the interfacing system and host system have differing characteristics, and therefore unless the differing characteristics are taken into account it may not be optimal to interface via the interfacing system with a software application which runs on the host system.
One approach to dealing with the different characteristics of the interfacing system vis-a-vis the host system is to modify the software application source code. With this approach, the application developer rewrites the application to meet the requirements of the interfacing system. However, this approach is demanding in terms of effort and expense required.
Fig. 1a is a simplified functional block diagram illustration of a system 100 enabling interactive applications to run using a remote terminal comprising various modules, e.g. as shown, according to an embodiment of the present invention. Each module illustrated in Fig. 1a may be made up of any combination of software, hardware and/or firmware which performs the functions as defined and explained herein. As shown in Fig. 1a, the system 100 comprises a host computer 101 and a remote terminal 102 connected via a data network 115. Computer 101 may run two programs, a server program 103 and the application program 116. The remote terminal computer 102 runs client program 111.
Fig. 1a generally illustrates a network or apparatus for adaptation of a software application, according to an embodiment of the present invention. In the illustrated embodiment, the network includes a host system 101 and an interfacing system 102 (also termed herein "remote terminal") coupled via any appropriate wired or wireless coupling 115 such as but not limited to optical fiber, Ethernet, Wireless LAN, HomePNA, power line communication, cell phone, PDA, Blackberry GPRS, Satellite, or other mobile delivery. Depending on the embodiment, host system 101 and interfacing system 102 may communicate using one or more communication protocols of any appropriate type such as but not limited to IP, TCP, OSI, FTP, SMTP, and WiFi. Depending on the embodiment, host system 101 and interfacing system 102 may be remotely situated from one another or may be located in proximity to one another.
Host system 101 may comprise any combination of hardware, software and/or firmware capable of performing operations defined and explained herein. For simplicity of description, the description only explicitly describes the hardware, software and/or firmware in host system 101 which are directly related to implementing embodiments of the invention. For example, host system 101 is assumed to include various system software and application software. System software which runs on host system 101 and is directly related to implementing embodiments of the invention is termed "server" program 103, and the software application which runs on host system 101 and which is adapted in accordance with certain embodiments of the invention is termed herein "application" program 116. Shared memory 117 is shared by server process 103 and application process 116 and stores elements directly related to implementing some embodiments of the invention.
Similarly, remote terminal 102 may comprise any combination of hardware, software and/or firmware capable of performing the operations defined and explained herein. For simplicity of description, the description only explicitly describes the hardware, software and/or firmware in the remote terminal 102 which are directly related to implementing embodiments of the invention. For example, remote terminal 102 is assumed to include at least system software. The system software which runs on remote terminal 102 and is directly related to implementing some embodiments of the invention is termed "client" program 111.
In one embodiment, server program 103 includes one or more of the following modules: client adaptation module 104, audio/video encoding and streaming module 105 and input translation and injection module 106. In one embodiment, an injected DLL 118 is injected during run-time into the application process 116 which includes the original program code 107 and system provided libraries (API) 110. The injected DLL 118 typically comprises an API interception module 108 and a client adaptation module 109. In one embodiment, client program 111 may include any of the following modules, inter alia: audio and video decoding module 112, user input handling module 113, and input/output module 114. Certain embodiments of specific modules of server program 103, application program 116, and client program 111 are described below. Server program 103, application program 116, and client program 111 are not necessarily bound by the modules illustrated in Fig. 1a and in some cases, any of server program 103, application program 116, and client program 111 may comprise fewer, more and/or different modules than those illustrated in Fig. 1a and/or a particular module may have more, less and/or different functionality than described herein. For example, modules illustrated as being separate in Fig. 1a may be part of the same module in other embodiments. As another example, a particular module illustrated in Fig. 1a may be divided into a plurality of modules in other embodiments. The same is true of other block diagrams shown and described herein.
Typically, the system of Fig. 1a has a client/server architecture. The server typically comprises a host computer that runs the visual application. The server may comprise any computation device that is able to run the desired application such as but not limited to a Personal Computer (Desktop, laptop), a Game Console such as Sony Playstation 3, Nintendo Wii, Microsoft Xbox, a cell phone, or a PDA. Responsive to a user request, the server may launch the requested application. Generally speaking, as the application updates its display, the updated content may be retrieved and sent as a video stream to the remote terminal device. In a similar manner, the audio that is generated by the application may be captured and sent to the remote terminal as an audio stream.
The client software may run on a remote terminal that serves as a display device and as a user input device, such as but not limited to a Personal Computer (Desktop, laptop), a Game Console such as Sony Playstation 3, Nintendo Wii, Microsoft Xbox, a cell phone, or a PDA. The remote terminal receives the video stream that is sent by the server, decodes it and presents it to the user on its screen. Similarly, the audio stream that is sent by the server may be played to the user using the local audio facilities. The client software may handle user inputs such as key press, mouse move, touch screen touches, device rotation, tilts and shakes. These user input events may be translated into application commands and sent to the server which translates them into application domain events and injects them into the application.
In order for the user to run an application remotely, the user runs the client software installed on the terminal device. The client software may connect to the server software that runs on the host computer using Internet Protocol (IP). The client may connect to the server directly, by virtue of having its network address, or may create such a connection using a third computer, also termed herein a "rendezvous server", which provides the server address and assists with creating the initial connection, as described in Fig. 1b. When a server becomes available, it typically notifies the rendezvous server as indicated by the "Availability Notification" arrow in Fig. 1b. The client, when trying to connect to the server, first connects to the rendezvous server and queries for the server address. The rendezvous server then responds to the client request and notifies the client of the server's address as indicated by the "Phase 1: Address Query" arrow in Fig. 1b. Only then, typically, does the client create a direct connection to the server as indicated by the "Phase 2: Direct Connection" arrow in Fig. 1b.
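The rendezvous sequence just described can be sketched in a few lines. This is a minimal sketch only; the class and method names below are illustrative assumptions, and the address book simply stands in for the IP-based exchange described above:

```cpp
#include <cassert>
#include <map>
#include <string>

// Hypothetical sketch of the two-phase connection of Fig. 1b: the rendezvous
// server records the addresses announced by available servers; a client
// queries it (Phase 1) and then connects directly to the returned address
// (Phase 2). Names here are illustrative, not part of the specification.
class RendezvousServer {
public:
    // "Availability Notification": a server announces its network address.
    void notify_available(const std::string& server, const std::string& addr) {
        addresses_[server] = addr;
    }
    // "Phase 1: Address Query": a client asks for a server's address.
    std::string query(const std::string& server) const {
        auto it = addresses_.find(server);
        return it != addresses_.end() ? it->second : "";
    }
private:
    std::map<std::string, std::string> addresses_;
};
```

An empty string models the case where no server of that name has announced itself, in which case no Phase 2 direct connection can be attempted.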
Creation of the direct connection between the client and the server may be effected by any suitable method. One optional method for the connection creation process is termed herein "Simple Traversal of UDP through NAT". The specification for connection using this method is termed "STUN" and is available on the Internet at the following World Wide Web http address: ietf.org/rfc/rfc3489.txt.
The STUN specification defines a lightweight protocol that allows applications running behind a NAT to determine external IP and port-binding properties and packet filtering rules. As a result, P2P and other applications can work through existing NAT infrastructure.
Once the client and the server have been connected, the two parties may exchange initial information which may for example include authentication data such as keys and passwords, client capabilities and available applications on the host computer. A suitable initial message exchange between the client and the server is illustrated in Fig. 2. Once an application is selected, it may be launched on the host computer. As the application launches, a dynamic library (DLL, Shared Object) 118 may be 'injected' into the application such that it is loaded as part of the application process 116.
DLL injection is a conventional technique used to run code within the address space of another process by forcing that process to load a dynamic-link library. The technique is generally applicable to any operating system that supports shared libraries, although the term is most commonly associated with Microsoft Windows. An advantage of the DLL injection technique is that it does not require access to the application source code. As such, DLL injection is often used by third-party developers to influence the behavior of a program externally. A description of conventional library injection methods appears in Wikipedia under "DLL injection".
In an embodiment of this invention, the injected library provides replacement versions of API function calls that may be used by the application. The specific API calls that are to be overridden depend on the type of object that is to be manipulated, such as but not limited to Graphics, Text, or audio types.
Audio and video encoding and decoding may for example be effected in accordance with known specification documents. Suitable specifications include but are not limited to the following:
(a) H.264 for movies, a specification document for which can be found at the following http www link: itu.int/rec/T-REC-H.264;
(b) JPEG for images, a specification document for which can be found at the following http www link: w3.org/Graphics/JPEG/itu-t81.pdf; and
(c) MPEG-1 Layer 3 (MP3) for audio, a specification document for which can be found at the following http www link: iso.ch/cate/d22412.html.
It is appreciated that the dotted line marked "Capture and Override Data" connecting elements 103 and 116 in Fig. 1a functions as a means of communication for data transferred between the server process and the application process. Such data may include, but is not limited to, one or more of the following: the captured image of the application process which is provided to block 105, the override data exchange between the server and the application which is provided to block 118, state machine data tracking the application's current status, or even the captured audio from the application process. Element 115 typically functions as a means of communication for all data transferred between the client and the server; such data may include, but is not limited to, one or more of the following: image and/or audio data sent from the server to the client which is processed by element 114, input injection commands sent from the client to the application via the server which are processed by element 106, server commands sent from the client (for example setROI commands which calibrate the captured image parameters), or client commands sent from the server (for example, the MoveCursor command which changes the cursor location on the client's screen).
Fig. 2, which describes a startup sequence for the system of Fig. 1a, is now described in detail. The server program 103 is assumed to start before the initial client connection. In a typical embodiment, this may occur upon the host computer boot. The user starts a session by starting the client program 111 and connecting it to the server (step 201). Upon the initial connection, the client and the server programs perform an authentication step 203.
Next, in step 204, the client 111 publishes its capabilities to the server 103. Examples of such capabilities are screen resolution, video/audio decoding capabilities, keyboard type, mouse type, and touch screen. This data may be used later to adapt the application to a particular client type. In an alternative embodiment, the client may simply send its model/class and the server may hold a database that maps the client type to a set of capabilities.
Next, the server presents to the client 111 the list of available applications in step 205. Once the user selects the desired application, the application program 116 starts in step 207. In an alternative embodiment, the server may present only a single application. In such a case, this application may be automatically selected without further input request from the user.
In step 206, the application program 116 is started. Fig. 3 describes an example start sequence in detail. More generally, the start sequence typically comprises:
API Interception; and communication between the application program 116 and the server program 103.
Once the client program 111 sends a 'start application' message 206 to the server program 103, the server 103 typically creates a shared memory block 117, e.g. as described in Fig. 5, that is used to communicate between the client adaptation layer 109 and the client adaptation layer on the server side 104. Shared memory block 117 holds the adaptation configuration data that is employed by the adaptation layer. The configuration data includes the description of the filters that may be applied to the application program.
Fig. 3 is a simplified flow diagram of a method for performing the "start application" step 206 of Fig. 2. In step 301, shared memory block 117 is created. In step 302, API interception and DLL injection occur. Next, the application program 116 is started in step 303. Once the Interception DLL has been injected into the application program 116 as described above such that the Interception DLL is loaded (step 302), the import tables of the application program may be modified (step 304) such that API calls are redirected to the code that is provided by the interception DLL 118 rather than the Operating System/Host computer provided code. In step 305, the injected DLL 118 is connected to the shared memory block 117 as well for further communication between the modules. In step 307, the application program 116 notifies the server program 103 that launch has been completed.
Fig. 4 describes call redirection in accordance with an embodiment of the present invention.
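The call redirection described above can be sketched in heavily simplified form. Modeling the import table as a map of function pointers is an illustrative assumption, not the actual Windows import-table layout, and all names below are hypothetical:

```cpp
#include <cassert>
#include <string>
#include <unordered_map>

// Hypothetical sketch of import-table redirection (steps 302-304): the
// "import table" is modeled as a map from API name to function pointer;
// patching overwrites an entry so the application's call reaches the
// injected code instead of the operating-system-provided code.
using TextFn = std::string (*)(const std::string&);

std::string os_draw_text(const std::string& s)   { return "os:" + s; }
std::string hook_draw_text(const std::string& s) { return "hook:" + s; }

struct ImportTable {
    std::unordered_map<std::string, TextFn> entries{{"DrawText", os_draw_text}};
};

// Step 304 analogue: redirect the API call to the interception code.
void patch(ImportTable& iat) { iat.entries["DrawText"] = hook_draw_text; }

// What the unmodified application does when it "calls DrawText": it simply
// invokes whatever function pointer its import table currently holds.
std::string app_call(const ImportTable& iat, const std::string& s) {
    return iat.entries.at("DrawText")(s);
}
```

The key property shown is that the application code itself is unchanged; only the table through which it reaches the API is rewritten, which is why no access to the application source code is required.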
Fig. 5 describes a possible data structure for Shared memory block 117. This block typically performs one or both of the following functions inter alia: (i) serving as a communication means between the two parts of the Client Adaptation blocks; and/or (ii) storing all the 'context' that is used for client adaptation. The context may comprise various sub-elements such as some or all of the following, inter alia: Client Command Queue 501 - used by the application side adaptation layer 109 to send commands to the client. Examples of such commands include but are not limited to the following: display a string, 'say' a string, show/hide cursor, set cursor position, change cursor icon. In the example embodiment, these commands may be sent to the client program 111 for execution.
Input injection queue 502 - used by module 104 to send an input command that may later be injected into the application 116. The commands may be stored in the queue and read whenever the application calls API functions to read the input queue. Examples of such commands include but are not limited to the following: IDirectInputDevice7::GetDeviceData and IDirectInputDevice7::GetDeviceState from the DirectInput API, and PeekMessage(..) and GetMessage(..) from the user32 API.
Client Capabilities 503 - is used to store the client capabilities. This data structure may be initialized upon client connection and may be referenced by the adaptation filters. The capabilities that may be stored in the example embodiment may include, but are not limited to, some or all of the following capabilities: client display width/height, client sound capabilities, and client image decoder capabilities.
Frame queue 504 - is used by the adaptation layer 109 to send newly acquired frames to the server 103. These frames may later be read by element 105, which may encode (compress) them and send them to the client program 111.
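The queues of blocks 501-504 share a simple producer/consumer shape, which can be sketched as follows. This is an illustrative in-process model only; the specification places these queues in shared memory 117, and the command names in the comment are examples drawn from the list above:

```cpp
#include <cassert>
#include <queue>
#include <string>

// Hypothetical sketch of a shared-memory command queue (cf. blocks 501-502):
// a FIFO written by one side (e.g. the application-side adaptation layer 109)
// and drained by the other (e.g. the client program 111).
struct Command {
    std::string name;  // e.g. "DisplayString", "Say", "SetCursorPosition"
    std::string arg;
};

class CommandQueue {
public:
    void push(Command c) { q_.push(std::move(c)); }
    // Returns false when the queue is empty, so the reader can poll.
    bool pop(Command& out) {
        if (q_.empty()) return false;
        out = std::move(q_.front());
        q_.pop();
        return true;
    }
private:
    std::queue<Command> q_;
};
```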
Blocks 505, 506 and 507 include the current filter descriptors (also termed herein "Current Filter Set"), e.g. as described in detail below. These filters may be initialized during step 206 by reading a per-application configuration file. While it is appreciated that any suitable file format may be used to store the configuration data, an example embodiment may use a text-based XML file format to store this data. The current filter set may be changed upon execution of an 'application command', e.g. as described below. As described below, an 'application command' may result from an execution of a filter action or a user input which sets the application into a new mode.
Geometry Filter Descriptors 505 - The geometry filter describes the geometry related commands that are to be modified (identification), and the actions that are to be taken. Geometry filters according to certain embodiments of the present invention are described herein below with reference to Fig. 7.
Text Filters Descriptors 506 - The text filters describe the text related commands that are to be modified and the action that is to be taken. In this example embodiment Text filters are initialized from a configuration file that may be read upon session initialization. Text filters according to certain embodiments of the present invention are described herein below with reference to Fig. 8A.
Pixel Filters Descriptors 507 - the pixel filters comprise operations applied to the result image before it is further processed and eventually sent to client 111. Element 507 stores the filters that are to be applied. In this example embodiment Pixel filters are initialized from a configuration file. Pixel filters according to certain embodiments of the present invention are described hereinbelow with reference to Fig. 8b.
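Under the text-based XML assumption mentioned above, a per-application configuration file holding the three descriptor types might look like the following sketch. The element and attribute names are hypothetical illustrations, not a format defined by the specification; only the criteria and actions themselves are drawn from the description herein:

```xml
<application name="ExampleGame">
  <geometryFilter>
    <match primitiveType="TriangleStrip" primitiveCount="12" hasTexture="true"/>
    <action type="Scale" factor="1.5"/>
    <action type="Translate" x="0" y="-40"/>
  </geometryFilter>
  <textFilter>
    <match command="DrawText" position="bottomRight"/>
    <action type="DisplayInTicker"/>
  </textFilter>
  <pixelFilter>
    <match region="10,10,200,40" changed="true"/>
    <action type="Highlight"/>
  </pixelFilter>
</application>
```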
An embodiment of the client adaptation module 109 is now described with reference to Fig. 6. Module 109 typically comprises filters, e.g. one, some or all of the three types of filters termed herein Geometry Filters (601), Text Filters (602) and Pixel Filters (603). Each filter typically comprises a software module that receives input information about API calls that may be made by the application. Each filter's output typically comprises a new set of API calls that may be adapted to the remote terminal capabilities. The specific API calls that are made depend on the filter configuration, e.g. as described above and stored in blocks 505-507.
Geometry filters (601) may be applied to the API's geometry rendering calls. Once the application calls a geometry function, the Adaptation layer compares, in block 702 (Fig. 7), the call parameters and the current graphics pipeline state against the filter's set of identification criteria, which may be stored in the filter descriptor in block 505 (Fig. 5). As described in block 709, if a match is found, the filter's action may be executed. Once again, the filter action may be stored in block 505.
Examples of geometry filter criteria, one or more of which can be used by a filter to identify a command, include:
• Primitive type, e.g. Triangle, Triangle strip, Triangle fan, lines list, connected line and points.
• Primitive count: the number of primitives that are rendered
• Vertex stride: 2 for 2D vertices, 3 for 3D vertices.
• Has texture: whether or not texture is currently enabled in the graphics subsystem state
• Texture color: the color at a specific position of the currently bound texture
Examples of operations that may be applied by a filter include one or more of the following:
• Hide (or "occlude") - ignore the call and do not draw the specific object
• Scale the object
• Translate the object
• Color - change the object's color
• Shade - shade an object
• Highlight - highlight the object, e.g. by drawing a cross on its geometry extents.
• Render transparently
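The identification step that precedes these operations can be sketched as a straightforward comparison of a descriptor against an intercepted draw call. The structs and field names below are illustrative assumptions matching the criteria listed above, not a data layout defined by the specification:

```cpp
#include <cassert>

// Hypothetical sketch of geometry-filter identification: a descriptor holds
// the criteria (primitive type, primitive count, vertex stride, texture
// state) and matches() compares them against the parameters of an
// intercepted draw call together with the current pipeline state.
enum class Primitive { Triangles, TriangleStrip, TriangleFan, Lines, Points };

struct DrawCall {
    Primitive type;
    int primitiveCount;   // number of primitives rendered
    int vertexStride;     // 2 for 2D vertices, 3 for 3D vertices
    bool hasTexture;      // current graphics subsystem state
};

struct GeometryFilterDescriptor {
    Primitive type;
    int primitiveCount;
    int vertexStride;
    bool hasTexture;
};

bool matches(const GeometryFilterDescriptor& f, const DrawCall& c) {
    return f.type == c.type
        && f.primitiveCount == c.primitiveCount
        && f.vertexStride == c.vertexStride
        && f.hasTexture == c.hasTexture;
}
```

When matches() returns true, the action stored alongside the descriptor (hide, scale, translate, etc.) would be executed instead of, or in addition to, the original draw call.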
The specific rendering calls depend on the rendering API that may be used by the application. For example, in an embodiment of the invention where the application uses the Direct3D v9 API, the applicable rendering commands to which the geometry filters are applied may be:
• IDirect3DDevice9::DrawPrimitive(..)
• IDirect3DDevice9: :DrawPrimitiveUP(...)
• IDirect3DDevice9::DrawIndexedPrimitive(...)
• IDirect3DDevice9::DrawIndexedPrimitiveUP(..)
In an embodiment of the invention where the application uses the OpenGL API, the following functions may be processed:
• glDrawBuffer
• glDrawElements
• glBegin/glEnd sequence.
Text filters may be applied to text display commands. The text filter identifies a command to be applied according to the command's parameter and according to the current state of the system. The current state is typically influenced by API calls made previously.
Identification criteria may for example include one, some or all of the following:
• Command type
• Display string
• Position on the screen
• Color; and
• Font characteristics (weight, italic, size, and/or family)
Once a text API call matches the filter criteria, it may be processed by the filter. The processing may for example include one, some or all of the following:
• Hide - ignore the command
• Say - use a text-to-speech module to read out the text
• Scale, translate
• Render in a different color
• Highlight
• Generate Audio cue
For example, API calls, in the Microsoft Windows operating system, which may be processed by the text filters may include the following calls, from the GDI, Direct3D and OpenGL APIs respectively:
The GDI API calls in the Microsoft Windows operating system, which may be processed by the text filters, may include the following:
• TextOut(..)
• DrawText(...)
• DrawTextEx(..)
• ExtTextOut(..)
• SetText*(...)
• TabbedTextOut(...)
The Direct3D API calls in the Microsoft Windows operating system, which may be processed by the text filters, may include the following:
• ID3DXFont::DrawText(..)
• IDirect3DDevice9::DrawPrimitive(..)
• IDirect3DDevice9::DrawIndexedPrimitive(..)
• IDirect3DDevice9::DrawPrimitiveUP(..)
• IDirect3DDevice9::DrawIndexedPrimitiveUP(..)
The OpenGL API calls in the Microsoft Windows operating system, which may be processed by the text filters, may include the following:
• glDrawBuffer
• glDrawElements
• glBegin/glEnd sequence
The list of example functions to be processed includes geometry-related API calls, typically covering occurrences in which the text is presented as part of a pre-rendered bitmap or texture. In such a case, the texture may be processed using an OCR (optical character recognition) module that extracts the text from the image.
Pixel filters may be applied to the final image that is rendered by the application. Pixel filters may be triggered by API calls that may be used by the application to present the final image to the user. Examples of such API calls include:
• IDirect3D9::Present
• glFinish(..)
• glFlush(..)
• wglSwapLayerBuffers(..)
• wglSwapBuffers(...)
A pixel filter may be applied to parts of the image that meet a certain set of criteria. This set may include one or more of:
• Position on the screen; and
• A change in pixels. The change check can be limited to pixels that are within a particular color range
Operations that can be applied by a pixel filter may include one, some or all of the following:
• Highlight an area
• Shade an area
• Zoom to a specific region of interest
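The change-detection criterion above can be sketched as a per-region frame comparison. This is an illustrative sketch under simplifying assumptions (a single 8-bit channel per pixel, row-major storage); the function and struct names are hypothetical:

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Hypothetical sketch of a pixel-filter trigger: report whether any pixel
// in a rectangular screen region changed between two frames, optionally
// limiting the check to pixels whose previous value lies in a color range.
struct Frame {
    int width;
    std::vector<uint8_t> pixels;  // one 8-bit channel per pixel, row-major
    uint8_t at(int x, int y) const { return pixels[y * width + x]; }
};

bool region_changed(const Frame& prev, const Frame& cur,
                    int x0, int y0, int x1, int y1,
                    uint8_t lo = 0, uint8_t hi = 255) {
    for (int y = y0; y < y1; ++y)
        for (int x = x0; x < x1; ++x) {
            uint8_t p = prev.at(x, y);
            // Only pixels inside the color range participate in the check.
            if (p >= lo && p <= hi && p != cur.at(x, y))
                return true;
        }
    return false;
}
```

A filter built on such a trigger would then apply its operation (highlight, shade, zoom) to the region for which region_changed() returned true.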
Examples of the utility of a pixel filter in adapting to a different display device, e.g. a smaller display device, are now described. As illustrated e.g. in Fig. 14, text, once resized, may be so small as to be unreadable, or it may be large enough to read yet, due to its relatively small size, changes in the text may not be particularly salient to the user. For either of the above reasons, it may be desirable to highlight such text, as shown in Fig. 15, using any means which makes such text more prominent to a user. Alternatively or in addition, it may be desired to provide an elective zoom view onto the text. In the zoom view, if such is selected by a user, the text may be shown enlarged so that it is large enough to be readable. Alternatively, it may be desired to provide an automatic zoom view which enlarges the text without waiting for the user to select this option, e.g. because the text is deemed so important that it must be shown to the user without allowing user discretion.
Fig. 7 describes the flow of Geometry Filter-related activities in the example embodiment. Other embodiments might use a different set of steps, e.g. some or all of steps 702-707, if a different set of criteria is used to identify geometry API calls. Geometry filters may be applied to Geometry-related API calls (step 701). In the example embodiment these calls may include any or all of:
(i) IDirect3DDevice9::DrawPrimitive;
(ii) IDirect3DDevice9::DrawIndexedPrimitive;
(iii) IDirect3DDevice9::DrawPrimitiveUP;
(iv) IDirect3DDevice9::DrawIndexedPrimitiveUP;
(v) glDrawArrays;
(vi) glDrawElements; and
(vii) glBegin/glEnd.
As described in Fig. 7, the processing of geometry commands may comprise the following three top-level steps: computing the object attributes (e.g. as per steps 702-706); comparing against the current set of filters (e.g. as per steps 707-708); and execution of result commands (e.g. steps 709 or 710). The selection of the execution action may depend on the comparison made in step 708. In a case where the current object does not meet the filter's criteria, a call (710) may be made to the original, Operating System provided API call. Otherwise, i.e. if the object does meet the filter criteria, the action (709) described in the filter may be carried out. Actions may include, but are not limited to, some or all of the following actions:
(i) Hide - don't draw the object;
(ii) Scale - scale an object;
(iii) Translate - Translate an object;
(iv) Color - Change the object color;
(v) Highlight - Draw a cross or a rectangle around the object;
(vi) Change transparency - Draw transparently.
(vii) Change key map - switch to a new key mapping, e.g. as per the description of input processing provided below. A key mapping, as stored in the currently selected key map, typically comprises a translation table that defines the actions to be taken upon user input such as key-press, mouse move etc. A suitable key mapping process is described in detail below. (viii) App command - send an application command, e.g. as described within the context of the input handling mechanism below. In the example embodiment, step 709 may include a combination of the commands above.
The operations executed in step 709 accept parameters in order to carry out their actions. These parameters may be defined in the configuration file as part of the action description. These parameters may comprise: (i) constants such as a color description or a pre-defined application command; (ii) server related variables such as position on the screen, relative to the application window; and/or (iii) client related variables, e.g. a zoom that fits into the remote terminal display size.
Referring now to Fig. 8A, Text Filters 602 include filters that may be applied to text related API calls. Examples of such calls in the example embodiment include but are not limited to: (i) TextOut; (ii) DrawText; (iii) DrawTextEx; (iv) ExtTextOut; (v) SetText*; (vi) TabbedTextOut. In addition, Text filters may also process Geometry related API calls as described for the Geometry Filters 601. In this case, the texture that is used by the geometry calls is examined using a suitable OCR (optical character recognition) algorithm, such as but not limited to edge detection, neural network integration and image warping and projection, and is used to convert the texture image into a string.
Text filter actions in an example embodiment may include some or all of the following actions: (i) Hide - the text is not displayed; (ii) Say - e.g. as per the Say command described hereinbelow; (iii) Overlay Display - the string may be sent to the client 111 for display as a string on top of the video stream; as a result the displayed string is not subject to video scaling and compression and therefore remains readable on the remote terminal device 111; (iv) Display in ticker - as in (iii), the string may be sent to the client for display in a ticker that may be presented to the user; this method may be used when the displayed string may be expected to be longer than that which the client display can accommodate; the process of sending the string may be similar to (iii), e.g. as described in detail below; (v) Scale/Translate - the string may be displayed in a new position on the screen, potentially in a different (scaled) size; (vi) Generate Audio Cue; (vii) Render in different font and/or color; (viii) change key map; (ix) application command. Fig. 8a illustrates activities that may be involved in the text filter processing. As illustrated, in the event of a Geometry command, such that the relevant string is not presented to the API explicitly, the texture of the geometry may be processed using an off-the-shelf OCR algorithm 810. The OCR module extracts the string out of the texture pixmap (pixel map). Next, the attributes of the command may be obtained, e.g. as per steps 803-805. Next, the command attributes may be compared against the current list of text filters (step 811). If no match is found, the API call may be executed 'as is' by the operating system provided API 110. If a match is found, the program executes the actions defined in the matched filter.
Referring now to Fig. 8b, pixel filters may be applied to the final image that may be rendered by the application. Pixel filters may be triggered by API calls that may be used by the application to present the final image to the user. Examples of such API calls include: (i) IDirect3D9::Present; (ii) glFinish(..); (iii) glFlush(..); (iv) wglSwapLayerBuffers(..); (v) wglSwapBuffers(...);
A pixel filter may be applied to parts of the image that meet a certain set of criteria. This set may include: (i) Position on the screen; (ii) A change in pixels relative to other portions of the screen or relative to one or more previous images. The change check can be limited to pixels that are within a specified color range.
Operations that can be applied by a pixel filter may include but are not limited to some or all of the following: (i) Highlight an area; (ii) Shade an area; (iii) Scale and Zoom to a specific region of interest (iv) Radiometric transformations (Brightness, Contrast, Gamma Correction); (v) change key map; and (vi) app command.
Fig. 9 illustrates a sequence of activities that may be used in the example embodiment for 'saying' a string on the remote terminal 111. In the example embodiment the audio facility of the host computer 101 may be used to play the string, as indicated at reference numeral 904. However, in an alternative embodiment the translation of the string into an audio signal may be implemented on the remote terminal 111 using the remote terminal audio facility.
Fig. 10 is an example of a screenshot rendered without use of the geometry filter 601 of Fig. 6. Fig. 11 is an example of a screenshot rendered using the geometry filter 601 of Fig. 6. In the illustrated example the geometry command that is involved in rendering object 1001 has been detected, scaled and translated, e.g. as described in actions (ii) and (iii) described above with reference to Fig. 7. In the illustrated example, the text in object 1001 is not large enough to be readable after having been rescaled and therefore, the object may be translated upward to an area which may be less crucial to the user's interaction with the game such that the object 1001 can be presented at a size large enough to maintain readability of the text.
Fig. 12 is an example of a screenshot with text in the lower right corner (object 1201) which is too small to read, due to resizing to adapt to a new and smaller display screen. The text says: "In application message text is displayed on the screen". Figs. 13A - 13B may be screenshots similar to the screenshot of Fig. 12 except that text filter 602 of Fig. 6 has been applied to draw the above text in a ticker. The ticker display action is best appreciated by comparing these figures. Whereas in Fig. 12 the text in the lower right corner is unreadable as a result of the screen downscale, Figs. 13A - 13B illustrate the resulting image after applying a text filter that replaces the text rendering object with a "display in ticker" action. The text in the ticker is scrolled horizontally to the left and is rendered in a font large enough to be readable; this is possible because only a portion of the text is fitted into the display screen at any one time. The ticker may or may not have the same dimensions as the original text box; if it does not, human input may be used to verify that the area occupied by the ticker can be occluded without impairing the user's interaction with the application.
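The "display in ticker" behavior, in which only a portion of the text fits on screen at any one time and the text scrolls horizontally to the left, can be sketched as follows. This is a hypothetical Python illustration; the function name, the wrap-around behavior and the `pad` parameter are assumptions not fixed by the disclosure:

```python
def ticker_frames(text, window, pad=3):
    """Return the successive visible slices of `text` as it scrolls left
    through a ticker `window` characters wide; `pad` blanks separate
    one wrap of the message from the next."""
    s = text + " " * pad
    # Doubling the string lets a slice read past the end, giving the
    # wrap-around effect; one frame per scroll position.
    return [(s * 2)[i:i + window] for i in range(len(s))]
```

Each returned frame would be drawn in a font large enough to be readable, since the window never has to hold the whole string at once.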
The system of Fig. Ia may also handle or process user inputs from the remote terminal. Specific handling of various user inputs may be defined in a configuration file stored on the server and delivered to the client software. Although the exact format of the configuration file may change between embodiments, the example embodiment described herein may use XML format to store key mapping data. Input handling may include "device level" input handling (e.g. translation of a user input into an application command) and "application level" input handling (translation of the application command into a host input). These two levels of input handling are illustrated in Fig. 16. Block 1602 translates the key according to the correct key map based on one or more suitable criteria, some of which may be device dependent, and is hence an example of device level input handling. Block 1605 sends the application command to the server, and is hence an example of application level input handling. It may be defined that no user intervention is provided during the process of input handling, or alternatively it may be defined that the user is prompted for input, such as, but not limited to, in the operation of block 1603, "Switch to new key map", where the user may be prompted with a list of key maps to choose from.
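The example embodiment stores key mapping data in XML but the disclosure does not fix a schema. The following Python sketch therefore assumes a hypothetical schema (`keymaps`/`keymap`/`key` elements and their attributes) purely to illustrate device level translation of a user input event into a command:

```python
import xml.etree.ElementTree as ET

# Hypothetical key-mapping configuration; schema and names are illustrative.
KEYMAP_XML = """
<keymaps active="game">
  <keymap name="game">
    <key input="KEY_X" command="zoom-app-command"/>
    <key input="KEY_2" command="SendKey:CTRL"/>
  </keymap>
</keymaps>
"""

def load_keymaps(xml_text):
    """Parse the configuration into {map name: {input event: command}}."""
    root = ET.fromstring(xml_text)
    maps = {km.get("name"): {k.get("input"): k.get("command")
                             for k in km.findall("key")}
            for km in root.findall("keymap")}
    return root.get("active"), maps

def translate(event, active, maps):
    """Device level input handling: user input event -> application command."""
    return maps[active].get(event)
```

Application level handling, the translation of the resulting command into a host input, would then occur on the server side.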
As shown in Fig. 17 the key-mapping data may be saved in part or in its entirety on the server side and/or the client side.
Any or all of several types of commands may optionally be provided in the input handling mechanism. Server commands are actions or sequence of actions which are performed on the server side, such as but not limited to the following actions: Move the mouse in the server, emulate keyboard press and release, zoom in on a certain region of the screen.
Client local commands are commands that run locally on the client device and do not actively run on the server, such as but not limited to the following commands: show the client system menu, and exit the client.
Application commands, also termed herein "shortcuts", may be defined for a sequence of server and client commands. Such commands may assist in creating a level of separation between a sequence of actions to be performed and the device-specific assignment of this sequence to a specific input event. For example, if in a certain computer program the user normally presses ctrl+alt+z to zoom in on the screen, then the creator of the XML customization file for that program may define an application command called "zoom-app-command" which emulates the above action sequence. Later on in the file, while describing the specific configuration to, for example, a mobile device of type XX, the creator may assign the "zoom-app-command" to a "key X pressed" input event. In the description of the configuration to mobile device of type YY the creator may assign the same application command to the "key Y pressed" input event without having to redefine the action sequence.
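The separation just described, one action sequence defined once and then bound per device to different input events, might be sketched as follows. All names (`APP_COMMANDS`, `DEVICE_BINDINGS`, `actions_for`) and the action strings are hypothetical illustrations of the "zoom-app-command" example:

```python
# Defined once: the action sequence the program expects (ctrl+alt+z).
APP_COMMANDS = {
    "zoom-app-command": ["key_down CTRL", "key_down ALT", "press Z",
                         "key_up ALT", "key_up CTRL"],
}

# Per-device sections bind the same shortcut to different input events,
# without redefining the action sequence.
DEVICE_BINDINGS = {
    "device-XX": {"key X pressed": "zoom-app-command"},
    "device-YY": {"key Y pressed": "zoom-app-command"},
}

def actions_for(device, event):
    """Resolve a device-specific input event to the shared action sequence."""
    name = DEVICE_BINDINGS.get(device, {}).get(event)
    return APP_COMMANDS.get(name, [])
```

The design benefit is that editing the sequence in one place updates it for every device type that binds the shortcut.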
As shown in Fig. 18 an input handling event may trigger the handling of server commands or application commands, performed on the server, or of local client commands. Server commands are commands performed solely on the server side, such as a "Zoom in" command which sets the server image capture area to a sub-region of the entire rendered image. An example of a command which is not a server command is "move mouse". Unlike a server command, "move mouse" may be performed both on the client and on the server: the server moves the mouse in the intercepted application, and the client moves the mouse cursor it is drawing on the image received from the server. In contrast, the "Zoom in" command sets the server to capture a sub-region of the rendered image, whereas the client does nothing dedicated to this task.
As shown in Fig. 16, when the user generates an input event 1601 on the remote terminal 102, an event may be sent to block 113 in the client program 111 using the underlying operating system of the remote terminal. The input event generated by the user might be any of the following: key press, mouse move, mouse press, touch screen press, device rotation, or voice command. The event may then be translated into a command in block 1602. Block 1602 may use multiple translation tables, termed herein 'key maps'. At any given time, there may be one, typically only one, key map which is active and used for the actual translation. The command may then be dispatched to one of the processing blocks 1603 - 1606, e.g. based on the command type described below:
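The dispatch to blocks 1603 - 1606 by command type might look like the following sketch. It assumes a hypothetical textual command encoding of the form "Type:argument" and a simple client-side `state` dictionary; neither is specified by the disclosure:

```python
def dispatch(command, state):
    """Route a translated command by type, as in blocks 1603 - 1606.
    Returns where the command is handled: 'local', 'server', or 'both'."""
    kind, _, arg = command.partition(":")
    if kind == "SwitchMap":              # block 1603: select a new key map
        state["active_keymap"] = arg
        return "local"
    if kind == "SendKey":                # block 1604: forward to server program
        state["outbox"].append(("key", arg))
        return "server"
    if kind == "AppCommand":             # block 1605: request host-side processing
        state["outbox"].append(("app", arg))
        return "server"
    if kind == "CursorMove":             # block 1606: echo locally, then notify host
        state["cursor"] = arg
        state["outbox"].append(("cursor", arg))
        return "both"
    raise ValueError("unknown command type: " + kind)
```

Note how "CursorMove" updates local state before queueing the server message, reflecting the immediate-feedback design described below for cursor movement.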
A Switch Map command may be used to select a new key mapping table. In the example embodiment, a selection of a new key map might be requested upon user input that switches the application into a new state.
The SendKey command is the basic key input injection to the application. As illustrated in Fig. 16, block 1604 sends a message to the server program 103. The message is received in block 106 for further handling. Module 1607, which may be provided within block 106, uses an underlying operating system mechanism to inject a key event into the application. In the example embodiment the Microsoft Windows SendInput command is used. In an alternative embodiment the SendMessage command can be used to send a message directly to the application.
An "Application command" typically comprises a request for specific processing on the host computer 101 side. Examples of such a command include but are not limited to any of the following: (i) Select a specific screen area (Region of Interest); (ii) select new screen scaling factor; (iii) Pan the screen; and (iv) Move the cursor to specific screen location. Once an AppCommand is detected, block 1605 sends a message that may be received by module 1608 and executed. It is appreciated, as shown at block 1608, that generally, key presses may be mapped into App commands, e.g. with filter commands, and filters may be used to switch key maps. For example, a filter may detect a switch to a new mode in an application, e.g. game, which results in switching to a new key map.
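AppCommand examples (i) - (iii), selecting a region of interest, rescaling and panning, amount to maintaining the sub-rectangle of the rendered frame that the host captures for the client. A minimal sketch follows; the `view` dictionary, command tuples and clamping policy are hypothetical:

```python
def apply_app_command(view, cmd):
    """Update the host-side capture rectangle in response to an AppCommand.

    view -- dict with 'x','y','w','h' (captured sub-region, source pixels)
            and 'full_w','full_h' (size of the full rendered image).
    cmd  -- e.g. ('zoom', x, y, w, h) or ('pan', dx, dy).
    """
    kind, *args = cmd
    if kind == "zoom":                   # (i) select a region of interest
        x, y, w, h = args
        view.update(x=x, y=y, w=w, h=h)
    elif kind == "pan":                  # (iii) pan, clamped to the image bounds
        dx, dy = args
        view["x"] = max(0, min(view["x"] + dx, view["full_w"] - view["w"]))
        view["y"] = max(0, min(view["y"] + dy, view["full_h"] - view["h"]))
    return view
```

As noted above for the "Zoom in" server command, the client does nothing dedicated to this: it simply keeps displaying whatever rectangle the host now captures.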
A fourth type of command is "cursor move". The "Cursor move" command typically moves the local cursor and sends a cursor move event to the host computer 101, which then injects it as an event to the application, similar to SendKey processing. By processing the command locally on the client 103, the feedback (in terms of cursor movement) may be immediate, minimizing the latency from user event to feedback.
Fig. 14 is an example of a screenshot rendered without use of the pixel filter 603 of Fig. 6. Fig. 15 is an example of a screenshot rendered using the pixel filter 603 of Fig. 6, where the pixel filter is constructed and operative to perform 'highlight' (marked 1501).
As described above, some of the adaptation directed at handling input events includes use of tilt sensitive hardware which may be found on the client device. Examples of such devices include the Nokia N95 cellular phone whose full specification is in the public domain and is available e.g. at the following http www link: forum.nokia.com/devices/N95 and the Apple iPhone whose full specification is in the public domain and is available e.g. at the following http www link: apple.com/iphone/specs.html.
Many examples, generalizations and modifications of the above specific disclosure are possible. For example, a game can run on a PC which acts as a server and the display and user inputs may occur through a mobile device (client). The system of the present invention typically allows a meaningful experience on the mobile client, even though the game application was written for the PC. The PC may intercept certain game instructions e.g. relating to visual or audio presentation to the user and may automatically adapt the instructions. For example, the size of a bubble may be increased, the dialog box may be zoomed, and/or text may be converted to voice so that it is spoken rather than displayed to the user.
Adaptation may be based on the keyboard type provided on the remote, e.g. mobile, device. For example, if not enough keys are provided on the mobile device, relative to the number of keys assumed by the application, but the mobile client has a touch screen, soft keys may be added on the screen. Adaptation may also be based generally on whether or not the client has a touch screen. For example, if a touch screen exists, a mouse may be added to the touch screen; if no touch screen exists, mouse input may not be allowed. Adaptation may be based on network connections. For example, if a network connection is good, more information can be sent, and/or part of the application may be allowed to run on the client side. Geometric operations such as translation, rotation, and scaling may be performed by a simple operator. Scaling typically involves scaling only a portion of the data on the display rather than the entirety of that data.
Adaptation may be based on context. For example, if a client is known to be in a noisy environment, text can be converted to voice and read rather than being displayed on a screen. Also, if there are not enough keys on the remote terminal, voice commands may be used for input. Typically, in game applications, the server knows, based on graphic instruction interception, where the user is in the game and therefore knows the limited vocabulary that the user can input, thereby facilitating interpretation of the voice commands.
The scope of the invention includes methods performed by a server including some or all of the following steps:
a. receiving client capabilities;
b. deciding on adaptation(s) to be effected based on client capabilities;
c. activating component(s) for decided-upon adaptation(s) and/or a particular database with rules for decided-upon adaptation(s);
d. intercepting an instruction to be adapted;
e. adapting the intercepted instruction according to rules corresponding to decided-upon adaptation(s);
f. optionally, intercepting a user command which would affect the adaptation(s) to be effected, and redoing step c; and
g. optionally, iterating to step d.
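Steps a through e above might be sketched as a server-side loop. This is an illustrative reduction only; the rule-table shape (`{capability: {instruction type: adapter}}`), the instruction tuples and the "enlarge" adapter are all hypothetical:

```python
def serve_session(client_caps, rules_db, instruction_stream):
    """(a) receive capabilities, (b)-(c) decide on and activate the matching
    rule sets, then (d)-(e) intercept each instruction and adapt it."""
    active_rules = {}
    for cap in client_caps:                      # (b)-(c): activate rule sets
        active_rules.update(rules_db.get(cap, {}))
    adapted = []
    for instr in instruction_stream:             # (d): intercept
        kind, payload = instr
        adapter = active_rules.get(kind)
        # (e): adapt if a rule applies, otherwise pass through unchanged
        adapted.append(adapter(payload) if adapter else payload)
    return adapted
```

Steps f and g would wrap this loop: a user command that changes the desired adaptations re-runs the activation step, and interception then continues.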
Communication need not be provided via a data network; for example, analog modems may be used as an alternative.
Operation of the filters shown and described herein may be determined by a set of rules that are input to the apparatus, which rules are also termed herein 'descriptors' and which may be provided in blocks 505 - 507 shown herein. The run-time application of these rules is carried out by any or all of the Geometry filters, Text filters and Pixel filters shown and described herein.
Among the user inputs that the client software may handle are audio commands. These, like other user input events, may be translated into application commands and sent to the server which translates them into application domain events and injects them into the application. Depending on the embodiment, host system 101 and interfacing system 102 may communicate using UDP as a communication protocol.
According to certain embodiments of the present invention, the apparatus of the present invention optionally identifies text, determines whether it might be insufficiently noticeable once "translated" from a first output device to a second typically smaller output device, and if so, highlights the text as "translated" for the second output device to make it more noticeable, e.g. as shown herein in Figs. 14 - 15. Each of these steps may be performed entirely by the computerized apparatus or in a partially human-guided manner.
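The "insufficiently noticeable once translated" determination could rest on a legibility threshold, e.g. the glyph height a text object would have after being rescaled for the smaller device. A hypothetical sketch, in which the threshold `min_px` and the proportional-scaling assumption are both illustrative:

```python
def needs_highlight(font_px, src_h, dst_h, min_px=9):
    """True if a text object rendered at `font_px` on a display `src_h`
    pixels tall would fall below a (hypothetical) minimum legible glyph
    height once scaled proportionally to a display `dst_h` pixels tall."""
    return font_px * dst_h / src_h < min_px
```

Objects for which this returns True would then be highlighted (or otherwise transformed) for the second output device, e.g. as in Figs. 14 - 15; the check may also be overridden or refined by human input.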
Certain embodiments of the object transformation methods and apparatus shown and described herein are particularly suitable for situations in which the source code of the software application for which an effect is to be achieved, as described herein, is not available, and/or it is impossible to modify the application input and/or it is impossible to modify the application's configuration parameters to achieve the desired effect.
According to certain embodiments of the present invention, text and geometry objects are identified out of 'display' API calls that are made.
It will also be understood that the system according to some embodiments of the present invention may be a suitably programmed computer. Likewise, some embodiments of the invention contemplate a computer program being readable by a computer for executing the method of the invention. Some embodiments of the invention further contemplate a machine-readable memory tangibly embodying a program of instructions executable by the machine for executing methods of the invention.
It is appreciated that software components of the present invention including programs and data may, if desired, be implemented in ROM (read only memory) form including CD-ROMs, EPROMs and EEPROMs, or may be stored in any other suitable computer-readable medium such as but not limited to disks of various kinds, cards of various kinds and RAMs. Components described herein as software may, alternatively, be implemented wholly or partly in hardware, if desired, using conventional techniques.
Included in the scope of the present invention, inter alia, are electromagnetic signals carrying computer-readable instructions for performing any or all of the steps of any of the methods shown and described herein, in any suitable order; machine-readable instructions for performing any or all of the steps of any of the methods shown and described herein, in any suitable order; program storage devices readable by machine, tangibly embodying a program of instructions executable by the machine to perform any or all of the steps of any of the methods shown and described herein, in any suitable order; a computer program product comprising a computer useable medium having computer readable program code having embodied therein, and/or including computer readable program code for performing, any or all of the steps of any of the methods shown and described herein, in any suitable order; any technical effects brought about by any or all of the steps of any of the methods shown and described herein, when performed in any suitable order; any suitable apparatus or device or combination of such, programmed to perform, alone or in combination, any or all of the steps of any of the methods shown and described herein, in any suitable order; information storage devices or physical records, such as disks or hard drives, causing a computer or other device to be configured so as to carry out any or all of the steps of any of the methods shown and described herein, in any suitable order; a program pre-stored e.g.
in memory or on an information network such as the Internet, before or after being downloaded, which embodies any or all of the steps of any of the methods shown and described herein, in any suitable order, and the method of uploading or downloading such, and a system including server/s and/or client/s for using such; and hardware which performs any or all of the steps of any of the methods shown and described herein, in any suitable order, either alone or in conjunction with software.
Also included in the scope of the present invention, is a computer program product, comprising a computer usable medium having a computer readable program code embodied therein, the computer readable program code being adapted to be executed to implement one, some or all of the methods shown and described herein. It is appreciated that any or all of the computational steps shown and described herein may be computer-implemented.
Features of the present invention which are described in the context of separate embodiments may also be provided in combination in a single embodiment. Conversely, features of the invention, including method steps, which are described for brevity in the context of a single embodiment or in a certain order may be provided separately or in any suitable subcombination or in a different order. The term "e.g." is used herein in the sense of a specific example which is not intended to be limiting. While the invention has been shown and described with respect to particular embodiments, it is not thus limited. Numerous modifications, changes and improvements within the scope of the invention will now occur to the reader.

Claims
1. A method for generating a first text display for a first display device, the first text display representing a second text display generated by a program for a second display device, the method comprising: identifying a subset of text objects, each associated with a text string and being unsuitable for display on the first display device, from among all text objects in the second text display; and generating a first text display which differs from the second text display in that at least one text object of said subset of text objects is omitted and at least a portion of the text string associated therewith is presented orally.
2. A method for generating a first text display for a first display device, the first text display representing a second text display generated by a program for a second display device, the method comprising: identifying a subset of text objects, each associated with a text string and being unsuitable for display on the first display device, from among all text objects in the second text display; and generating a first text display which differs from the second text display in that at least a portion of the text string of at least one text object of said subset of text objects is displayed piecemeal.
3. A method according to claim 2 wherein at least a portion of the text string of at least one text object of said subset of text objects is displayed in ticker format.
4. A method according to claim 1 wherein said program has a source code and said identifying is performed without recourse to the source code, without modifying input to the program and without modifying any configuration parameter of the program.
5. A method according to claim 1 wherein said identifying proceeds at least partly on a basis of identifying text objects including characters which, when displayed on said first text display, are smaller than a predetermined threshold value.
6. A method for generating a first display for a first display device, the first display representing a second display generated by a program for a second display device and including a cursor, the method comprising: determining whether said cursor is unsuitable for display on the first display device; and if said cursor is unsuitable, generating a first display which differs from the second display in that said cursor is omitted and replaced by a cursor suitable for display on the first display device.
7. A method according to claim 1 wherein said first display device is housed on a remote terminal.
8. A method according to claim 1 and also comprising accepting a human input defining said subset to include only text objects deemed by a human to be important to the application.
9. A method according to claim 8 wherein said human input defines said text objects deemed important in terms of at least one of the following text object characteristics: String content, location of the text object within said second text display and color.
10. A method for generating a first text display for a first display device fixedly associated with a first input device, the first text display representing a second text display generated by a program for a second display device, the method comprising: determining if the orientations of the first and second display devices are one of the following: both landscape; and both portrait; and if not, mapping directional input functions into the first input device so as to enable said first display device fixedly associated therewith to be held and used rotated 90 degrees to the orientation of the second display device.
11. A method according to claim 10 wherein the first input device, when rotated 90 degrees to the orientation of the second display device, includes at least two input modules having at least two of the following relative orientations: left, right, top and bottom; and wherein said mapping comprises mapping at least two of the following input options: go left, go right, go up and go down, into said at least two input modules respectively.
12. A method according to claim 11 wherein said first display device comprises a keyboard and wherein each of said input modules comprises a key in said keyboard.
13. A method according to claim 1 and also comprising providing a display device database storing at least one display characteristic of each of a plurality of display devices.
14. A method according to claim 1 wherein said program comprises a game.
15. A method for running a program written for a first input device having a first plurality of states and associated with a first display device on a terminal having a second input device having a second plurality of states and associated with a second display device, the method comprising: generating a display, for said second display device, which associates at least one input option with at least one of said second plurality of states.
16. A method according to claim 1 wherein said text objects being unsuitable for display comprise objects which, when re-sized proportionally to relative dimensions of the first and second text displays, are unsuitable for viewing on said first text display.
17. A method according to claim 6 wherein said cursor unsuitable for display comprises a cursor which, when re-sized proportionally to relative dimensions of the first and second display devices, is unsuitable for viewing on said first display device.
18. A system for adapting objects generated by programs and having output characteristics to run on each of a plurality of terminals each including a different output device, the system comprising: a terminal data repository operative to accept information regarding at least one characteristic of said output device of each of said plurality of terminals; and a graphics object modifier operative to modify at least one output characteristic of a graphics object outbound to an individual output device according to said at least one characteristic of said individual output device.
19. A system according to claim 18 wherein said graphics object modifier is operative to perform a global modification on at least most objects generated by an individual program outbound for an individual terminal; and to perform local modifications on at least one object generated by the individual program which, having undergone said global modification, becomes unsuitable for display on the output device of said individual terminal.
20. A system according to claim 18 wherein at least one of said terminals also includes an input device.
21. A system according to claim 18 wherein at least one of said output devices comprises a visual display device.
22. A system according to claim 18 wherein said modifier is operative to perform at least one of the following operations on at least a portion of at least one object: translation, rotation, scaling, occluding.
23. A system according to claim 18 where said modifier is operative to modify at least one of a color characteristic, texture characteristic, brightness characteristic and contrast characteristic of at least a portion of at least one object.
24. A system according to claim 18 wherein said characteristic of said output device includes an indication of whether the output device is intended for use outside or inside and wherein said graphics object modifier is operative to modify at least one of at least one graphic object's brightness and contrast accordingly.
25. A method for modifying a program for display on a first display device, wherein the program generates a plurality of display screens suitable for display on a second display device which differs from the first display device, the method comprising: for at least one display screen, identifying first and second portions of said display screen which can be rendered semi-transparently and superimposed onto one another; rendering said first and second portions of said display screen semi- transparently; and superimposing said first and second portions of said display screen onto one another.
26. A system for adapting a multi-mode program to run on a terminal including an output device and an input device capable of generating a first set of input events, said program being operative to branch responsive to occurrences of input events from among a second set of pre-defined input events, the system comprising: an input event mapper operative to receive an event from said first set of input events and to generate, responsively, at least a simulation of an event from said second set of input events, thereby to cause said program to branch, wherein the event from said second set of input events generated at least in simulation by said input event mapper responsive to receiving an event from said first set of input events depends at least partly on the mode in which said program is operating.
27. A system according to claim 26 wherein said program comprises at least one game, said first set of input events comprises a set of voice commands and said second set of input events comprises a set of application commands.
28. A system according to claim 27 wherein said program comprises at least one game and said set of application commands comprises a set of game controls.
29. A system for adapting a program to run on a terminal including an output device and being capable of sensing its own tilt relative to a fixed frame of reference, said program being operative to branch responsive to occurrences of input events from among a set of pre-defined input events, the system comprising: an input event mapper operative to receive a tilt value sensed by the terminal and to generate, responsively, at least a simulation of an event from said set of input events, thereby to cause said program to branch.
30. A system for generating a first text display for a first display device, the first text display representing a second text display generated by a program for a second display device, the system comprising: a text object analyzer operative to identify a subset of text objects, each associated with a text string and being unsuitable for display on the first display device, from among all text objects in the second text display; and a text display modifier operative to generate a first text display which differs from the second text display in that at least one text object of said subset of text objects is omitted and at least a portion of the text string associated therewith is presented orally.
31. A system for generating a first text display for a first display device, the first text display representing a second text display generated by a program for a second display device, the system comprising: a text object analyzer operative to identify a subset of text objects, each associated with a text string and being unsuitable for display on the first display device, from among all text objects in the second text display; and a text display modifier operative to generate a first text display which differs from the second text display in that at least a portion of the text string of at least one text object of said subset of text objects is displayed piecemeal.
32. A system for generating a first display for a first display device, the first display representing a second display generated by a program for a second display device and including a cursor, the system comprising: a cursor analyzer operative to determine whether said cursor is unsuitable for display on the first display device; and a display modifier operative, if said cursor is unsuitable, to generate a first display which differs from the second display in that said cursor is replaced by a cursor suitable for display on the first display device.
33. A system for generating a first text display for a first display device fixedly associated with a first input device, the first text display representing a second text display generated by a program for a second display device, the system comprising: a display device orientation analyzer operative to determine if the orientations of the first and second display devices are one of the following: both landscape; and both portrait; and a directional input function mapper operative, if not, to map directional input functions into the first input device so as to enable said first display device fixedly associated therewith to be held and used rotated 90 degrees to the orientation of the second display device.
34. A system for running a program written for a first input device having a first plurality of states and associated with a first display device on a terminal having a second input device having a second plurality of states and associated with a second display device, the system comprising: an input option associator operative to generate a display, for said second display device, which associates at least one input option with at least one of said second plurality of states.
35. A method for adapting objects generated by programs and having output characteristics to run on each of a plurality of terminals each including a different output device, the method comprising: accepting information regarding at least one characteristic of said output device of each of said plurality of terminals; and modifying at least one output characteristic of a graphics object outbound to an individual output device according to said at least one characteristic of said individual output device.
36. A system for modifying a program for display on a first display device, wherein the program generates a plurality of display screens suitable for display on a second display device which differs from the first display device, the system comprising: a display screen area analyzer operative, for at least one display screen, to identify first and second portions of said display screen which can be rendered semi-transparently and superimposed onto one another; a rendering functionality operative to render said first and second portions of said display screen semi-transparently; and a superimposing functionality operative to superimpose said first and second portions of said display screen onto one another.
37. A method for adapting a multi-mode program to run on a terminal including an output device and an input device capable of generating a first set of input events, said program being operative to branch responsive to occurrences of input events from among a second set of pre-defined input events, the method comprising: receiving an event from said first set of input events and generating, responsively, at least a simulation of an event from said second set of input events, thereby causing said program to branch, wherein the event from said second set of input events generated at least in simulation responsive to receiving an event from said first set of input events depends at least partly on the mode in which said program is operating.
38. A method for adapting a program to run on a terminal including an output device and being capable of sensing its own tilt relative to a fixed frame of reference, said program being operative to branch responsive to occurrences of input events from among a set of pre-defined input events, the method comprising: receiving a tilt value sensed by the terminal and generating, responsively, at least a simulation of an event from said set of input events, thereby to cause said program to branch.
39. A computer program product, comprising a computer usable medium having a computer readable program code embodied therein, said computer readable program code adapted to be executed to implement a method according to any of the preceding claims.
40. A method according to claim 2 wherein said program has a source code and said identifying is performed without recourse to the source code, without modifying input to the program and without modifying any configuration parameter of the program.
41. A method according to claim 2 wherein said first display device is housed on a remote terminal.
42. A method according to claim 2 and also comprising accepting a human input defining said subset to include only text objects deemed by a human to be important to the application.
43. A method according to claim 2 and also comprising providing a display device database storing at least one display characteristic of each of a plurality of display devices.
44. A method according to claim 2 wherein said program comprises a game.
45. A method according to claim 2 wherein said text objects being unsuitable for display comprise objects which, when re-sized proportionally to relative dimensions of the first and second text displays, are unsuitable for viewing on said first text display.
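The per-device adaptation of claim 35 amounts to scaling an object's output characteristics by the ratio between the display it was authored for and the target display. A minimal sketch (the function name and the tuple-based device descriptors are hypothetical, not taken from the application):

```python
def scale_for_device(obj_w, obj_h, authored_display, target_display):
    """Scale a graphics object's dimensions proportionally from the
    display it was authored for to the target display.

    authored_display / target_display are (width, height) in pixels.
    """
    sx = target_display[0] / authored_display[0]
    sy = target_display[1] / authored_display[1]
    return round(obj_w * sx), round(obj_h * sy)


# e.g. a 100x50 button authored for 800x600, shown on a 400x300 screen:
# scale_for_device(100, 50, (800, 600), (400, 300)) -> (50, 25)
```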
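The semi-transparent superimposition of claim 36 is, in essence, per-pixel alpha blending of the two identified screen portions. A dependency-free sketch over RGB tuples (a minimal illustration, assuming both portions have the same geometry; a real renderer would blend whole framebuffers):

```python
def blend_pixel(pixel_a, pixel_b, alpha=0.5):
    """Blend two RGB pixels, weighting pixel_b by alpha (0.0 to 1.0)."""
    return tuple(round((1 - alpha) * a + alpha * b)
                 for a, b in zip(pixel_a, pixel_b))


def superimpose(portion_a, portion_b, alpha=0.5):
    """Superimpose two screen portions (equal-size lists of RGB rows),
    rendering each semi-transparently as in claim 36."""
    return [[blend_pixel(pa, pb, alpha) for pa, pb in zip(row_a, row_b)]
            for row_a, row_b in zip(portion_a, portion_b)]
```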
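The mode-dependent event translation of claim 37 can be sketched as a lookup keyed first by the program's current mode, then by the incoming event. The mode names and key codes below are hypothetical placeholders, not event sets from the application:

```python
from typing import Optional

# One keymap per program mode: the same incoming event simulates a
# different pre-defined event depending on the mode (claim 37).
MODE_KEYMAPS = {
    "menu":     {"touch_tap": "KEY_ENTER", "swipe_left": "KEY_BACK"},
    "gameplay": {"touch_tap": "KEY_FIRE",  "swipe_left": "KEY_LEFT"},
}


def translate_event(mode: str, incoming: str) -> Optional[str]:
    """Map an event from the terminal's input set onto the program's
    pre-defined input set; returns None if no mapping exists."""
    return MODE_KEYMAPS.get(mode, {}).get(incoming)
```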
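Claim 38's tilt-driven input simulation reduces to thresholding a sensed tilt angle into simulated directional events. A minimal sketch (the angle convention, dead-zone threshold, and key codes are assumptions for illustration):

```python
from typing import Optional


def tilt_to_event(tilt_deg: float, threshold: float = 15.0) -> Optional[str]:
    """Convert a sensed tilt angle (degrees; positive tilts right) into
    a simulated directional event, or None inside the dead zone."""
    if tilt_deg >= threshold:
        return "KEY_RIGHT"
    if tilt_deg <= -threshold:
        return "KEY_LEFT"
    return None
```

The dead zone keeps small, unintentional tilts of the terminal from branching the program.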
PCT/IL2009/001176 2008-12-11 2009-12-10 System and methods for adapting applications to incompatible output devices WO2010067365A2 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US19362908P 2008-12-11 2008-12-11
US61/193,629 2008-12-11

Publications (2)

Publication Number Publication Date
WO2010067365A2 true WO2010067365A2 (en) 2010-06-17
WO2010067365A3 WO2010067365A3 (en) 2010-09-02

Family

ID=42243136

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IL2009/001176 WO2010067365A2 (en) 2008-12-11 2009-12-10 System and methods for adapting applications to incompatible output devices

Country Status (1)

Country Link
WO (1) WO2010067365A2 (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5583761A (en) * 1993-10-13 1996-12-10 Kt International, Inc. Method for automatic displaying program presentations in different languages
US6457030B1 (en) * 1999-01-29 2002-09-24 International Business Machines Corporation Systems, methods and computer program products for modifying web content for display via pervasive computing devices
US20060123362A1 (en) * 2004-11-30 2006-06-08 Microsoft Corporation Directional input device and display orientation control
WO2007066329A2 (en) * 2005-12-05 2007-06-14 Vollee Ltd. Method and system for enabling a user to play a large screen game by means of a mobile device
US7360230B1 (en) * 1998-07-27 2008-04-15 Microsoft Corporation Overlay management

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2942707A1 (en) * 2014-05-09 2015-11-11 Kabushiki Kaisha Toshiba Image display system, display device, and image processing method
US9626784B2 (en) 2014-05-09 2017-04-18 Kabushiki Kaisha Toshiba Image display system, display device, and image processing method

Also Published As

Publication number Publication date
WO2010067365A3 (en) 2010-09-02

Similar Documents

Publication Publication Date Title
US20210168441A1 (en) Video-Processing Method, Electronic Device, and Computer-Readable Storage Medium
WO2020038168A1 (en) Content sharing method and device, terminal, and storage medium
CN111433743B (en) APP remote control method and related equipment
CN106843715B (en) Touch support for remoted applications
CN114297436A (en) Display device and user interface theme updating method
KR20030001415A (en) Digital document processing
KR20080085008A (en) Method and system for enabling a user to play a large screen game by means of a mobile device
CN113810746B (en) Display equipment and picture sharing method
CN110750664B (en) Picture display method and device
CN112337091B (en) Man-machine interaction method and device and electronic equipment
US20160357532A1 (en) Graphics Engine And Environment For Encapsulating Graphics Libraries and Hardware
US10432681B1 (en) Method and system for controlling local display and remote virtual desktop from a mobile device
WO2019047187A1 (en) Navigation bar control method and device
CN114222185B (en) Video playing method, terminal equipment and storage medium
CN113825002B (en) Display device and focal length control method
US9508108B1 (en) Hardware-accelerated graphics for user interface elements in web applications
CN111708533B (en) Method and device for setting mouse display state in application thin client
WO2019047184A1 (en) Information display method, apparatus, and terminal
WO2010067365A2 (en) System and methods for adapting applications to incompatible output devices
WO2014024255A1 (en) Terminal and video playback program
CN100435096C (en) Image processing method based on C language micro operation system
CN112367295B (en) Plug-in display method and device, storage medium and electronic equipment
CN115040866A (en) Cloud game image processing method, device, equipment and computer readable storage medium
US20150128029A1 (en) Method and apparatus for rendering data of web application and recording medium thereof
CN112926420A (en) Display device and menu character recognition method

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 09831566

Country of ref document: EP

Kind code of ref document: A2

NENP Non-entry into the national phase in:

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 09831566

Country of ref document: EP

Kind code of ref document: A2