CN111868713A - Method and apparatus for storing and restoring navigation context


Info

Publication number: CN111868713A
Application number: CN201980010525.4A
Authority: CN (China)
Prior art keywords: media, media content, interest, objects, navigation context
Legal status: Pending
Other languages: Chinese (zh)
Inventors: M·米安斯, 托马斯·布勒莱
Current Assignee: A Erkemiya
Original Assignee: A Erkemiya
Priority claimed from EP18305155.6A (external priority, EP3528145A1)
Priority claimed from US15/898,136 (external priority, US20190250999A1)
Application filed by A Erkemiya
Publication of CN111868713A


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/90: Details of database functions independent of the retrieved data types
    • G06F 16/95: Retrieval from the web
    • G06F 16/955: Retrieval from the web using information identifiers, e.g. uniform resource locators [URL]
    • G06F 16/9562: Bookmark management

Abstract

A method for restoring a navigation context of first media content. The navigation context data includes media access data and an identification of a set of one or more media objects of interest in a first portion of the first media content currently displayed in a first user interface (UI) window. Restoring the navigation context includes: loading (269) the first media content based on the media access data; generating (270) second media content based on the loaded first media content, the second media content comprising the set of one or more media objects of interest; and displaying (290) a second portion of the second media content in a second UI window, wherein the second portion includes at least one media object of interest of the set of one or more media objects of interest.

Description

Method and apparatus for storing and restoring navigation context
Technical Field
The present disclosure relates generally to the field of telecommunications and digital data processing, and more particularly to a method for storing navigation context, a method for restoring navigation context, and associated electronic devices and computer programs.
Background
U.S. patent application publication No. US 2017/0214640 A1 discloses a method for sharing media content between multiple users. The method includes generating one or more contributing media content items that are combined with subject media content to generate annotated media content, based on metadata (e.g., association data) that specifies to which portion of the subject media content each contributing media content relates and/or how the subject media content is to be combined with each contributing media content to render the annotated media content.

In this method, the subject media content must be transmitted from a sending device to a recipient device, where it is rendered and combined with the one or more contributing media content items.
In the disclosed method, the subject media content may be any media content displayed in a software application. For example, the media content may be a "static" web page (e.g., including text, photos, videos, interactive features/tools, etc.) whose content remains the same for a given URL (uniform resource locator). In this case, the subject media content may be rendered on the recipient device simply by loading the web page from the media server using the URL of the web page. Alternatively, the subject media content may be dynamically updatable media content, whose content for the same URL may change over time or depending on the location or user profile of the user to whom the content is displayed. In this latter case, the URL of the web page alone may not be sufficient to restore a web page having the same state and content as the original web page. For example, the content of a web page corresponding to a given URL may depend on a user profile or may be updated by the origin media server. For example, searching web pages using a keyword-based search engine may produce a list of web pages that depends on the user's profile and/or the user's cookie(s) and/or the date on which the search was performed.
Further, in the disclosed method, the portion of the subject media content to which a contributing media content relates may be identified using, for example, coordinates in a two-dimensional coordinate system associated with the web page. However, the display screen of the recipient device may have a different size and/or resolution than that of the sending device: the coordinate systems are therefore different, and the stored coordinates may not be sufficient to reproduce a web page with the same content (at least visually; the layout on screen may differ from one display to another). In addition, a web page loaded using the URL of the source page may be modified according to the type of target device that must display it (e.g., a mobile version).
Furthermore, privacy concerns may arise for at least some portions of the web page to be shared. For example, a web page displayed for a user on a social network, blog, or microblogging website may include one or more media objects (e.g., publications, feeds, messages, search results, etc.) that depend on input keywords, subscriptions, the user profile, and/or relationships with other users in the social network. The web page may thus include media objects that the user is prepared to share alongside other media items that the user does not want to share.
One solution is to generate a screen shot of the web page to be shared, or of at least the portion of the web page to be shared. However, a screen shot is merely a static copy of the web page, in the sense that no interaction and/or navigation can be initiated from it. In this case, the receiving device will not be able to restore the interaction context.
Thus, there appears to be a need to store and/or restore navigation context to preserve interaction and/or navigation context of media content.
Disclosure of Invention
In general, in one aspect, the disclosure is directed to a method for storing and restoring a navigation context. The method includes storing the navigation context, wherein storing the navigation context includes: storing media access data that enables access to first media content displayed in a software application; detecting a set of one or more media objects in the first media content; identifying, among the set of one or more media objects, a set of one or more media objects of interest in a first portion of the first media content currently visible in a first user interface (UI) window; and storing an identification of the one or more media objects of interest. The method further includes restoring the navigation context, wherein restoring the navigation context includes: loading the first media content based on the media access data; generating second media content based on the loaded first media content, wherein the second media content comprises a set of one or more restored media objects of interest corresponding to the identified one or more media objects of interest; and displaying the second media content such that a second portion currently visible in a second UI window comprises at least one restored media object of interest of the set of one or more restored media objects of interest.
According to a second aspect, the present disclosure relates to a method for restoring a navigation context. The method includes obtaining navigation context data, the navigation context data comprising: media access data enabling access to first media content, wherein the first media content comprises a set of one or more media objects; and an identification of a set of one or more media objects of interest displayed, during navigation, in a first portion of the first media content in a first user interface (UI) window. The method further includes restoring the navigation context, wherein restoring the navigation context comprises: loading the first media content based on the media access data; generating second media content based on the loaded first media content, wherein the second media content comprises a set of one or more restored media objects of interest corresponding to the identified one or more media objects of interest; and displaying the second media content such that a second portion currently visible in a second UI window comprises at least one restored media object of interest of the set of one or more restored media objects of interest.
According to a third aspect, the present disclosure relates to a method for restoring a navigation context of a software application. The method comprises: transmitting, to a recipient device, media access data enabling access to first media content displayed in the software application; detecting a set of one or more media objects in the first media content; identifying, among the set of one or more media objects, a set of one or more media objects of interest in a first portion of the first media content currently visible in a first user interface (UI) window; sending the identification of the one or more media objects of interest to the recipient device; and restoring the navigation context on the recipient device. Restoring the navigation context may include: loading the first media content based on the media access data; generating second media content based on the loaded first media content, wherein the second media content comprises a set of one or more restored media objects of interest corresponding to the identified one or more media objects of interest; and displaying the second media content such that a second portion currently visible in a second UI window comprises at least one restored media object of interest of the set of one or more restored media objects of interest.
According to a fourth aspect, the present disclosure relates to an electronic device comprising: a processor, a memory operatively coupled to the processor, the memory comprising computer readable instructions which, when executed by the processor, cause the processor to perform the steps of the method according to the first, second or third aspect.
According to a fifth aspect, the present disclosure relates to a computer program product comprising computer readable instructions which, when executed by a computer, cause the computer to perform the steps of the method according to the first, second or third aspect.
Further aspects and embodiments of the method according to the first, second or third aspect are defined by the dependent claims.
Drawings
The present disclosure will be better understood, and its numerous aspects and advantages will become more apparent to those skilled in the art by reference to the following drawings, in conjunction with the accompanying specification, wherein:
FIG. 1A is a schematic diagram of a system for sharing media content, according to one or more embodiments;
FIG. 1B is a schematic diagram of an electronic device and a computing server in accordance with one or more embodiments;
FIGS. 2A-2B respectively illustrate flow diagrams of exemplary methods in accordance with one or more embodiments;
FIGS. 3A-3B respectively illustrate flow diagrams of exemplary methods in accordance with one or more embodiments;
FIGS. 4A-4C illustrate a user interface in accordance with one or more embodiments;
FIGS. 5A-5C illustrate a user interface in accordance with one or more embodiments;
FIGS. 6A-6C illustrate a user interface in accordance with one or more embodiments;
FIGS. 7A-7C illustrate a user interface in accordance with one or more embodiments;
FIGS. 8A-8C illustrate a user interface in accordance with one or more embodiments.
Detailed Description
The present disclosure relates to devices, systems, and methods for storing and/or recovering navigation context. Various embodiments are disclosed herein.
The methods for storing and/or restoring may be applied in a system for sharing media content between users, as described in US 2017/0214640 A1. They may also be applied to any system involving the transmission of one or more media content items that must be rendered by a recipient electronic device.
Other advantages and features of the devices and methods disclosed herein will become more apparent to those of ordinary skill in the art. Representative embodiments of the present technology are set forth in the following detailed description of certain preferred embodiments, which proceeds with reference to the accompanying figures, wherein like reference numerals identify like structural elements.
Further, it should be apparent that the teachings herein may be embodied in a wide variety of forms and that any specific structure and/or function disclosed herein is merely representative. In particular, those skilled in the art will appreciate that the embodiments disclosed herein can be implemented independently of any other embodiments, that multiple embodiments can be combined in various ways, and that one or more aspects of the different embodiments can be combined in various ways.
The present disclosure is described below with reference to functional, engine, block, and flowchart illustrations of methods, systems, and computer programs in accordance with one or more exemplary embodiments. Each described function, engine, block of the block diagrams, and block of the flowchart illustrations may be implemented by hardware, software, firmware, middleware, microcode, or any suitable combination thereof. If implemented in software, the functions, engines, and blocks of the block diagrams and/or flowchart illustrations may be implemented by computer program instructions or software code, which may be stored or transmitted on a computer-readable medium, or loaded onto a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the computer program instructions or software code executing on the computer or other programmable data processing apparatus create means for implementing the functions described herein.
Embodiments of computer readable media include, but are not limited to, computer storage media and communication media, as well as any medium that facilitates transfer of a computer program from one place to another. In particular, software instructions or computer readable program code that perform embodiments described herein may be stored, in whole or in part, temporarily or permanently, on a non-transitory computer readable medium in a local or remote storage device that includes one or more storage media.
As used herein, a computer storage medium may be any physical medium that can be read, written, or more generally accessed by a computer. Examples of computer storage media include, but are not limited to, flash drives or other flash memory devices (e.g., memory keys, memory sticks, key drives), CD-ROM or other optical storage, DVD, magnetic disk storage or other magnetic storage devices, solid state memory, memory chips, RAM, ROM, EEPROM, smart cards, relational database management systems (RDBMS), conventional databases, or any other suitable medium that can be used to carry or store program code in the form of instructions or data structures that can be read by a computer processor. Also, various forms of computer-readable media may be involved in carrying or transmitting instructions to a computer, including routers, gateways, servers, or other transmission devices, over wired (coaxial cable, fiber optic, twisted pair, DSL cable) or wireless (infrared, radio, cellular, microwave) links. The instructions may include code from any computer programming language, including but not limited to assembly, C, C++, Basic, SQL, MySQL, HTML, PHP, Python, Java, JavaScript, and the like.
Turning now to the drawings, wherein like numerals represent like parts throughout the several views, FIG. 1A illustrates an example system 100 in which the various processes and techniques described herein may be implemented.
The system 100 includes one or more computing servers 103A-103E and a plurality of electronic devices 104A-104C (e.g., user devices) operatively and communicatively coupled to each other via a network 105.
The network 105 may be any data transmission network, such as a wired (coaxial cable, optical fiber, twisted pair, DSL cable, etc.) or wireless (radio, infrared, cellular, microwave, etc.) network, a Local Area Network (LAN), an Internet Area Network (IAN), a Metropolitan Area Network (MAN), or a Wide Area Network (WAN), such as the internet, a public or private network, a Virtual Private Network (VPN), a telecommunications network with data transmission capabilities, a single radio cell with a single point of connection, such as a Wi-Fi or Bluetooth cell, etc.
Each electronic device 104A-104C may be implemented as a single hardware device, for example in the form of a desktop Personal Computer (PC), a laptop computer, a Personal Digital Assistant (PDA), a smart phone, or on separate interconnected hardware devices connected to each other by communication links having wired and/or wireless sections.
Each electronic device 104A-104C typically operates under the control of an operating system and executes or otherwise relies upon various computer software applications, components, programs, objects, modules, data structures, and the like.
As schematically represented in FIG. 1B, in one or more embodiments, the electronic devices 104A-104C include a processor 110, a memory 111, one or more computer storage media 112 and other related hardware such as input/output interfaces (e.g., device interfaces, such as USB interfaces and the like; network interfaces, such as Ethernet interfaces and the like), and a media drive 113 for reading from and writing to the one or more computer storage media 112.
The memory 111 of the electronic devices 104A-104C may be Random Access Memory (RAM), cache memory, non-volatile memory, backup memory (e.g., programmable memory or flash memory), read-only memory, or any combination thereof. The processor 110 of the electronic devices 104A-104C may be any suitable microprocessor, integrated circuit, or Central Processing Unit (CPU) including at least one hardware-based processor or processing core.
In one or more embodiments, each computer storage medium or media 112 of the electronic devices 104A-104C can contain computer program instructions that, when executed by the processor 110, cause the electronic devices 104A-104C to perform one or more of the methods described herein for the electronic devices 104A-104C. The processor 110 of the electronic devices 104A-104C may be configured to access the one or more computer storage media 112 for storing, reading and/or loading computer program instructions or software code that, when executed by the processor, cause the processor to perform the steps of the methods for the electronic devices 104A-104C described herein. The processor 110 of the electronic device 104A-104C may be configured to use the memory 111 of the electronic device 104A-104C in performing the steps of the methods described herein for the electronic device 104A-104C, e.g., for loading computer program instructions and for storing data generated during execution of the computer program instructions.
Each electronic device 104A-104C also typically receives a number of inputs and outputs for communicating information externally. For interacting with the users 101A-101C or operators, the electronic devices 104A-104C generally include a user interface 114 that includes one or more user input/output devices, such as a keyboard, a pointing device, a display, a printer, and so forth. Otherwise, user input may be received from one or more external computers (e.g., one or more electronic devices 104A-104C or other computing servers 103A-103E), for example, through a network interface coupled to the network 105.
Returning to FIG. 1A, each computing server 103A-103E may be implemented as a single hardware device, or may be implemented on separate interconnected hardware devices connected to each other by communication links having wired and/or wireless segments. Each computing server 103A-103E may be implemented within a cloud computing environment.
In one or more embodiments, the set of computing servers 103A-103E includes at least three media servers 103A-103C, a database server 103D, and a front end server 103E.
Each computing server 103A-103E typically operates under the control of an operating system and executes or otherwise relies upon various computer software applications, components, programs, objects, modules, data structures, and the like.
As schematically represented in FIG. 1B, in one or more embodiments, each computing server 103A-103E includes a processor 120, a memory 121, one or more computer storage media 122 and other associated hardware such as input/output interfaces (e.g., device interfaces, such as USB interfaces, etc.; network interfaces, such as Ethernet interfaces, etc.), and a media drive 123 for reading from and writing to the one or more computer storage media 122.
The memory 121 of the computing servers 103A-103E may be Random Access Memory (RAM), cache memory, non-volatile memory, backup memory (e.g., programmable or flash memory), read-only memory, or any combination thereof. The processor 120 of the computing servers 103A-103E may be any suitable microprocessor, integrated circuit, or Central Processing Unit (CPU) including at least one hardware-based processor or processing core.
In one or more embodiments, each computer storage medium or media 122 of computing servers 103A-103E may contain computer instructions that, when executed by processor 120, cause computing servers 103A-103E to perform one or more methods described herein for computing servers 103A-103E. The processor 120 of the computing servers 103A-103E may be configured to access the one or more computer storage media 122 for storing, reading and/or loading computer program instructions or software code that, when executed by the processor, cause the processor to perform the steps of the methods for the computing servers 103A-103E described herein. The processor 120 of the computing server 103A-103E may be configured to use the memory 121 of the computing server 103A-103E in performing the steps of the methods described herein for the computing server 103A-103E, e.g., for loading computer program instructions and for storing data generated during execution of the computer program instructions.
Each electronic device 104A-104C is operatively connected to one or more computing servers 103A-103E via a network 105. Each electronic device 104A-104C is configured to communicate with at least one of the computing servers 103A-103E over at least one communication link.
In one or more embodiments, each electronic device 104A-104C executes computer program instructions of a software application 106 (also referred to as a "client application 106") that, when executed by a processor of the electronic device, causes the processor to perform the method steps described herein for any one of the electronic devices 104A-104C.
In one or more embodiments, the computing server 103E is configured to execute computer program instructions of a server software application 107 (also referred to as "server application 107") that, when executed by a processor of the computing server 103E, cause the processor to perform the method steps described herein for the computing server 103E.
In one or more embodiments, the server application 107 is executed by the front end server 103E, which itself is operatively connected to at least one of the computing servers 103C, 103D to implement the steps of the methods described herein for the server application 107.
Each instance of the client software application 106 and the server software application 107 is configured to be operatively coupled to the other to communicate in a client/server mode over at least one communication link. The communication link between the client software application 106 and the server software application 107 may use any suitable communication protocol. For example, a protocol based on HTTP (hypertext transfer protocol) or a Web service communication protocol such as SOAP (simple object access protocol) may be used. Any other protocol, such as a proprietary protocol, may be used.
In one or more embodiments, the software application 106 of the electronic devices 104A-104C is operatively connected to the server application 107 to implement methods for storing and restoring navigation context(s). The software application 106 includes computer program instructions for communicating with the server application 107 via messages. In one or more embodiments, the software application 106 includes computer program instructions for generating, sending to the server application 107, and for receiving and processing messages received from the server application 107. The server application 107 may comprise computer program instructions for communicating with the software application 106, in particular for generating and sending to the software application 106 and for receiving and processing messages received from the software application 106.
In one or more embodiments, the software application 106 is, for example, software for displaying and navigating the World Wide Web, an intranet, the Internet, or the like. However, the present description is applicable to any kind of software application for displaying media content.
In one or more embodiments, the user interface of the software application 106 of the electronic device 104A can include at least one user interface item for generating and/or selecting one or more media content. For example, the user interface may include user interface items (buttons, menus, etc.) for triggering the display of the user interface for searching for and/or selecting one or more media content. For example, the user interface includes a toolbar for generating or importing media content.
Each media server 103A-103C is operatively coupled to one or more media content databases 102A-102C for storing media content and associated metadata. Each media server 103A-103C provides an interface for receiving and processing data requests for media content stored in one of the media content databases 102A-102C, such as requests for storing media content in one of the media content databases 102A-102C and requests for retrieving, searching, modifying media content stored in one of the media content databases 102A-102C.
Each media server 103A-103C may be any Web server application, remote server application, storage server in the cloud, or more generally any software application configured to store and retrieve media content and to process upload requests (or download requests) from remote devices for uploading (or downloading) one or more media content items. For example, media server 103A is a Web server accessible over the Internet from all of the electronic devices 104A-104C, and media server 103B is a media server in a cloud computing environment that is available to only one of the electronic devices 104A-104C.
In one or more embodiments, media server 103C may be implemented as a Content Delivery Network (CDN) for storing content that must be available not only from the front-end server 103E but also from any of the electronic devices 104A-104C. The media server 103C is used to store media content, otherwise stored on one of the electronic devices 104A-104C or one of the media servers 103A-103B, when that media content must be accessible from each of the electronic devices 104A-104C.
In the context of the present disclosure, media content may include video content, audio content, textual content, image content, graphical content, or any combination thereof, such as a web page or multimedia content. The video content may be 3D content, stereoscopic 3D video content, 2D video content, a list of independently encoded images, etc. The audio content may include mono audio content, stereo audio content, 3D audio content, and the like. The media content may be encoded in any format, including, for example, JPEG, TIFF, MPEG, WAV, DOC, HTM, OBJ, DAE, FBX, DXF, X3D, MOV, and the like.
The media content data may be encoded in one or more data files. For example, video content may be encoded as a set of images, each image being encoded in a separate data file.
In one or more embodiments, the first media content (e.g., a web page) is displayed in a User Interface (UI) window of the software application 106 on a source electronic device (e.g., electronic device 104A). The user 101A of the source electronic device navigates through the first media content, for example by scrolling through the first media content, triggering an action, and so on. At some point during navigation (e.g., when a given interaction is performed, upon request by the user 101A, or when certain predetermined conditions are met), navigation context data is stored that defines a navigation context with respect to the currently displayed first media content.
In one or more embodiments, the software application 106 is configured to analyze the content of the first media content (e.g., the HTML code of a web page), detect one or more media objects of interest in the first media content, and determine navigation context data based on the detected media objects of interest. The restoring of the navigation context with respect to the first media content is performed based on the detected media objects of interest in order to restore the one or more media objects of interest.
The media object may be any object contained in the first media content: text, photos, videos, menus, hyperlinks, graphics, document previews, icons, buttons, and the like. The one or more detected media objects may be objects that are currently visible (i.e., currently displayed) in the UI window or that are not currently visible in the UI window.
The media objects may be interactive media objects or non-interactive media objects. For example, menus, hyperlinks, or buttons are typical interactive media objects. For example, when a URL is associated with a photo, the photo may also be interactive. The interactive media object may be configured to trigger performance of one or more interactive functions.
Navigation context data defining a navigation context with respect to a currently displayed first media content may be used to restore the navigation context with respect to the first media content on a destination electronic device (e.g., electronic device 104B). The destination electronic device may be distinct from the source electronic device or the same device.
In one or more embodiments, to restore the navigation context, second media content is generated that at least partially reproduces the first media content. One or more media objects of interest may be restored in the second media content such that the visible content of the current UI window is the same as (or at least similar to) the visible content at the time the navigation context was stored (same viewpoint on the media content).
In one or more embodiments, one or more volatile media objects may be identified and restored in the second media content such that the dynamic state (e.g., context-sensitive content) of the first media content is restored in the second media content. Thus, once the navigation context is stored, navigation in the second media content may be performed using the same context parameters as navigation in the first media content. A volatile media object refers to content of the first media content that depends on the navigation context. For given media access data, a volatile media object may be a media object that can be changed, modified, present/absent, presented in a different order, etc., depending on one or more context parameters. In one or more embodiments, the context parameters may include: user location, user cookie, user profile, user permissions, and/or current time. A non-volatile media object refers to persistent content of the first media content that does not depend on the context parameters. A non-volatile media object may be retrieved using the media access data independently of the navigation, whereas a volatile media object may not be retrievable using the media access data alone.
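By way of illustration only, the following TypeScript sketch shows one conceivable way a client application could tag detected media objects as volatile or non-volatile in a browser context. The selector heuristics, type names, and `matches`-based rule are assumptions for illustration, not part of the disclosure.

```typescript
// Illustrative sketch only: tagging detected media objects as volatile or
// non-volatile in a browser context. The selectors below are hypothetical
// heuristics, not mandated by the disclosure.

type Volatility = "volatile" | "non-volatile";

interface DetectedMediaObject {
  id: string;        // identification assigned to the detected media object
  element: Element;  // underlying DOM node of the media object
  volatility: Volatility;
}

// Hypothetical rule: feeds, search results and recommendations depend on
// context parameters (user profile, cookies, time) and are thus volatile.
const VOLATILE_SELECTORS = ["[data-feed]", ".search-result", ".recommendation"];

function classify(element: Element): Volatility {
  return VOLATILE_SELECTORS.some((sel) => element.matches(sel))
    ? "volatile"
    : "non-volatile";
}
```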
For example, searching web pages using a keyword-based search engine may generate a list of Web links, where the list depends on the user profile and/or the user's cookie(s) and/or the date on which the search was performed. Thus, the search results, the result list, and the order in which the results are presented may vary depending on the navigation context, for example according to personalization parameters (e.g., user profile, user location, and/or user cookie) and/or the date on which the search was performed. In this case, in order to restore the navigation context, the resource access data of all the search results in the web page, together with the ordered result list, are stored by the computing server 103E so that a web page having the same result list, including the search results in the same order, can be restored.
For example, on a website for selling products, the media objects used to present the list of products returned for a search may be identified as volatile media objects, while the media objects used to present the description of a product pointed to by an item in the list may be treated as non-volatile media objects. Thus, the identification of media objects may be adapted to the type of website or web application, or may be specific to the website. In one or more embodiments, all media objects in a page may be identified as volatile media objects.
The layout of the second media content may be the same as or different from the layout of the first media content. In one or more embodiments, the layout of the second media content is different from the layout of the first media content, for example to accommodate a resolution and/or size of a display screen of the destination electronic device.
In one or more embodiments, the stored navigation context metadata can be used as a bookmark that enables the navigation and/or interaction context in the media content at the time the bookmark was created to be restored. In one or more embodiments, a bookmark refers to a particular point of view and/or media object(s) of interest in a given media content. One or more such bookmarks may be stored for the same media content to enable restoring a corresponding navigation context. One or more bookmarks may be stored for different media content.
In one or more embodiments, the navigation context data (or bookmarks) are stored on the computing server 103E so that the navigation context data (or bookmarks) can be shared with different users/different destination electronic devices. In one or more embodiments, the navigation context data (or bookmarks) are stored locally by the source electronic device and may further be transmitted for retrieval to the destination electronic device. The navigation context data (or bookmarks) may be stored in a memory, a database, a configuration file associated with the software, and the like.
The user 101B of the destination electronic device may navigate through the second media content. In one or more embodiments, the interactive functionality associated with the interactive media object(s) in the second media content is restored.
FIG. 2A is a flow diagram of a method for storing navigation context in accordance with one or more embodiments. Although the various steps in the flow diagrams are presented and described in a sequence, those skilled in the art will appreciate that some or all of the steps may be performed in a different sequence, may be combined or omitted, and some or all of the steps may be performed in parallel.
Method steps may be implemented by a source electronic device (e.g., the electronic device 104A of the user 101A) and a storage device for storing navigation context data according to any embodiment disclosed herein. The storage device may be included in the source electronic device or may be a separate device. In one or more embodiments, the storage device may be a computing server (e.g., front end server 103E), a database 102A-102B, another electronic device 104B, 104C, or may be included in such a device.
In one or more embodiments, to share media content, the media content displayed on the display screen of the source electronic device 104A must be transferred to other electronic devices (i.e., destination electronic devices) of the plurality of electronic devices 104B-104C through the computing server 103E. The computing server 103E manages a database for storing media access data and associated metadata. The associated metadata includes navigation context metadata based on which navigation context can be restored.
In the following description, the method steps are implemented by the client application 106 of the source electronic device 104A and by the server application 107 of the front-end server 103E acting as a storage device, but they are applicable to any other storage device and any other software application for displaying media content.
In step 200, first media content M1 (e.g., a web page) is displayed in a first UI window W1 of the software application 106 on a display screen of the source electronic device 104A. In one or more embodiments, only the first portion of the first media content M1 is visible in the UI window. In one or more embodiments, the software application 106 is configured to begin the process of storing the navigation context. For example, the process may be initiated upon request of the user 101A or when certain predetermined conditions are met while performing a given interaction.
In step 201, the software application 106 is configured to send, to a storage device (e.g., computing server 103E), media access data that enables access to the first media content M1. The media access data may include a URL, address, and/or identifier of the web page, enabling access to the first media content through a media server 103A, 103B, 103C. The media access data may also comprise data encoding the first media content M1. The media access data is stored by the storage device (e.g., computing server 103E). In one or more embodiments, the navigation context data includes the received media access data.
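As an illustration of step 201, the sketch below sends media access data (here, simply the page URL and title) to a storage device assumed to expose an HTTP endpoint. The endpoint URL, payload shape, and `contextId` parameter are hypothetical.

```typescript
// Illustrative sketch of step 201: sending media access data to a storage
// device assumed to expose an HTTP endpoint. URL and payload are hypothetical.
async function sendMediaAccessData(contextId: string): Promise<void> {
  const mediaAccessData = {
    url: window.location.href, // URL enabling access to the first media content
    title: document.title,
  };
  await fetch(`https://storage.example.com/contexts/${contextId}`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ mediaAccessData }),
  });
}
```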
In step 205, the software application 106 is configured to perform a screen shot of a first portion of the first media content M1 currently displayed in the UI window W1.
In step 206, the software application 106 is configured to send the screen shot to a storage device (e.g., computing server 103E) for storage by the storage device (e.g., computing server 103E). In one or more embodiments, the navigation context data includes the received screen shot.
In step 210, the software application 106 is configured to identify a set of one or more media objects in the first media content M1, for example: text, images, video, animations, sounds, applets and/or any document, icon, button, menu, interactive object, etc. included in the media content. For a web page, a media object typically corresponds to a resource of the web page. For example, for a web page, the code of the web page (e.g., HTML code) defines the web address of the web page, the layout of the web page, and the resource data that defines the media objects in the web page: thus, resources may be identified by analyzing the code of the web page data, and resource access data may be extracted from the web page code.
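A minimal sketch of step 210 in a browser context follows: candidate media objects are detected by analyzing the page's DOM, and resource access data (src/href attributes) is extracted from the code. The tag list and the `MediaObjectRecord` shape are illustrative assumptions.

```typescript
// Illustrative sketch of step 210: detecting media objects by analyzing the
// DOM of the displayed web page. The tag list is a hypothetical heuristic.
interface MediaObjectRecord {
  id: string;           // identification assigned by the software application
  resourceUrl?: string; // resource access data extracted from the page code
  element: Element;     // DOM node of the detected media object
}

function detectMediaObjects(): MediaObjectRecord[] {
  const candidates = document.querySelectorAll(
    "img, video, audio, a, button, nav, article, section"
  );
  return Array.from(candidates).map((element, index) => ({
    id: element.id || `media-object-${index}`, // assign an id when none exists
    resourceUrl:
      element.getAttribute("src") ?? element.getAttribute("href") ?? undefined,
    element,
  }));
}
```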
The granularity of the identified media objects may depend on the application/requirements or encoding scheme used for the first media content. In one or more embodiments, the media objects may be complex media objects including one or more media sub-objects, object containers or object groups containing one or more media objects, a hierarchy of media objects including one or more media objects and sub-objects organized in a hierarchical structure, an ordered list of media objects, and the like, e.g., text and associated pictures in a document may be grouped and processed as a single media object, or processed and identified separately.
In one or more embodiments, the software application 106 is configured to allow the user 101A of the source electronic device 104A to identify and exclude one or more media objects that the user does not want to share. For example, media objects that do not belong to the set of one or more media objects of interest may be excluded. In another example, the user 101A may select one media object and perform a predetermined action to request that the other media objects be excluded. In one or more embodiments, one or more excluded media objects may be blurred (e.g., see FIGS. 8A-8B described below), shaded, or removed from the UI window to provide feedback to the user 101A regarding the exclusion. In one or more embodiments, the set of one or more media objects is updated to remove the one or more media objects that have been excluded.
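The following sketch illustrates one way such an exclusion could work, reusing the hypothetical `MediaObjectRecord` type from the previous sketch: the excluded object is blurred as visual feedback and removed from the set.

```typescript
// Illustrative sketch: excluding a media object the user does not want to
// share. The excluded object is blurred as visual feedback and removed from
// the set (MediaObjectRecord is the hypothetical type sketched above).
function excludeMediaObject(
  detected: MediaObjectRecord[],
  excludedId: string
): MediaObjectRecord[] {
  const excluded = detected.find((o) => o.id === excludedId);
  if (excluded) {
    (excluded.element as HTMLElement).style.filter = "blur(6px)"; // feedback
  }
  return detected.filter((o) => o.id !== excludedId); // updated set
}
```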
In one or more embodiments, the identification of the media object may be transmitted with the resource access data of the associated media object. For web pages, the identification of the media object of interest may be extracted from the code of the web page, or may be an identification assigned to the detected media object by the software application 106 and/or a storage device (e.g., computing server 103E).
In step 215, the software application 106 is configured to identify a set of one or more volatile media objects from the set of one or more media objects identified in step 210. The identification of volatile media objects may be adapted to the type of website or web application, or may be specific to the website. In one or more embodiments, all media objects in a page may be identified as volatile media objects.
In step 216, the software application 106 is configured to send, for the set of one or more volatile media objects, resource access data to a storage device (e.g., computing server 103E) for accessing the resources that enable restoring (e.g., rendering) the set of one or more volatile media objects. For example, if the first media content is a web page, the resource access data may be a pointer (e.g., a URL) to a resource of the associated web page. In one or more embodiments, the resource access data may include the resource itself (e.g., data encoding the media object). The data encoding the media object may, for example, include object properties and/or state, data-encoded image(s), text, and the like. The resource access data is stored by the storage device (e.g., computing server 103E) in association with the media access data. In one or more embodiments, the navigation context data includes the received resource access data.
In step 220, the software application 106 is configured to identify, among the set of one or more volatile media objects, a set of one or more interactive media objects associated with one or more interactive functions. An interactive function may be, for example: loading the web page pointed to by the URL associated with the interactive media object, displaying a menu, displaying a sub-window, downloading content, sending an email, launching an application, opening a communication session, etc. The trigger data may include a hyperlink (e.g., a URL) to the target web page, definitions of menus and related resources, code for interactive functions, and so forth. For a web page, the trigger data may be extracted from the code of the web page.
In step 221, the software application 106 is configured to send, for each identified interactive media object, trigger data for triggering execution of the one or more interactive functions associated with that interactive media object to a storage device (e.g., computing server 103E). The trigger data may include a URL and/or a portion of code (e.g., HTML code) extracted from the code of the first media content. The trigger data is stored by the storage device (e.g., computing server 103E) in association with the media access data. In one or more embodiments, the navigation context data includes the received trigger data.
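As a sketch of steps 220-221 for a web page, trigger data can be read from hyperlink targets and inline handler attributes in the page code. The `TriggerData` shape is an assumption; real trigger data could also carry menu definitions or larger code fragments, as noted above.

```typescript
// Illustrative sketch of steps 220-221: extracting trigger data from the page
// code for interactive media objects. The TriggerData shape is an assumption.
interface TriggerData {
  objectId: string;     // identification of the interactive media object
  href?: string;        // target web page of a hyperlink
  onclickCode?: string; // inline interactive-function code, if any
}

function extractTriggerData(objects: MediaObjectRecord[]): TriggerData[] {
  return objects
    .filter((o) => o.element.matches("a[href], [onclick]"))
    .map((o) => ({
      objectId: o.id,
      href: o.element.getAttribute("href") ?? undefined,
      onclickCode: o.element.getAttribute("onclick") ?? undefined,
    }));
}
```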
In step 230, the software application 106 is configured to transmit the coordinates of the first portion of the first media content M1 currently displayed in the first UI (user interface) window to a storage device (e.g., computing server 103E). The coordinates of the first portion are stored by a storage device (e.g., computing server 103E) in association with the media access data. In one or more embodiments, the navigation context data includes coordinates of the received first portion.
The first portion of the first media content M1 currently displayed in the first UI window represents an area of interest to the user 101A, e.g., an area on which the user 101A may perform interactions, an area for which annotations may be generated by the user 101A, or an area that the user 101A wishes to share with other users. In one or more embodiments, the coordinates of the first portion are defined relative to a coordinate system associated with the media content. The coordinates of the first portion may depend on the resolution and/or size of the display screen of the source electronic device 104A.
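For a web page, the coordinates of the first portion could be derived from the browser scroll offsets and viewport size, as in the hedged sketch below (the `PortionCoordinates` shape is an assumption).

```typescript
// Illustrative sketch of step 230: the coordinates of the currently visible
// first portion, expressed in the coordinate system of the media content
// (document coordinates), derived from scroll offsets and viewport size.
interface PortionCoordinates {
  x: number;      // horizontal offset of the visible portion in the document
  y: number;      // vertical offset of the visible portion in the document
  width: number;  // viewport width (depends on screen size/resolution)
  height: number; // viewport height (depends on screen size/resolution)
}

function visiblePortionCoordinates(): PortionCoordinates {
  return {
    x: window.scrollX,
    y: window.scrollY,
    width: window.innerWidth,
    height: window.innerHeight,
  };
}
```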
In step 240, the software application 106 is configured to identify, among the set of one or more media objects identified in step 210, a set of one or more media objects of interest in the first portion of the first media content M1 currently displayed in the first UI window of the software application. The media objects of interest may be interactive or non-interactive. In one or more embodiments, one or more media objects of interest in the set of one or more media objects of interest are interactive media object(s). The media objects of interest may be volatile or non-volatile. In one or more embodiments, one or more media objects of interest in the set of one or more media objects of interest are volatile media object(s).
The identification of media objects of interest may be performed in different ways. In one or more embodiments, the identification of the media object of interest includes: interactions of the user 101A on objects in the first portion are detected, and media objects on which interactions have been detected are identified. In one or more embodiments, the identification of the media object of interest includes: a viewing direction from which the user 101A views the first media content is determined and one or more media objects of interest whose display area intersects the viewing direction or whose display area is closest to the intersection of the viewing direction and the display screen are identified. In one or more embodiments, the identification of the media object of interest includes: one or more media objects closest to a center position of the first portion of the first media content are determined. In one or more embodiments, the identification of the media object of interest includes: the user is allowed to select (e.g., by contact on the touch-sensitive display) one or more media objects of interest, and the selected one or more media objects of interest are identified.
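The sketch below illustrates two of the strategies just described (intersection of an object's display area with the visible portion, and proximity to the portion's center), reusing the hypothetical types from the earlier sketches.

```typescript
// Illustrative sketch of two of the strategies above: keep the media objects
// whose display area intersects the visible portion, ranked by distance from
// the portion's center (types reused from the earlier sketches).
function mediaObjectsOfInterest(
  objects: MediaObjectRecord[],
  portion: PortionCoordinates
): MediaObjectRecord[] {
  const cx = portion.x + portion.width / 2;  // center of the visible portion
  const cy = portion.y + portion.height / 2;
  return objects
    .map((o) => {
      const r = o.element.getBoundingClientRect();
      // Convert the viewport-relative rect to document coordinates.
      const left = r.left + window.scrollX;
      const top = r.top + window.scrollY;
      const intersects =
        left < portion.x + portion.width &&
        left + r.width > portion.x &&
        top < portion.y + portion.height &&
        top + r.height > portion.y;
      const dist = Math.hypot(left + r.width / 2 - cx, top + r.height / 2 - cy);
      return { o, intersects, dist };
    })
    .filter((e) => e.intersects)
    .sort((a, b) => a.dist - b.dist)
    .map((e) => e.o);
}
```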
In step 241, the software application 106 is configured to send the identification of the media object(s) of interest that have been identified in step 240 to a storage device (e.g., computing server 103E). The identification of the media object(s) of interest is stored in association with the media access data. In one or more embodiments, the navigation context data includes an identification of the received media object(s) of interest.
In step 250, the software application 106 is configured to determine, for each media object of interest, the object coordinates of that object relative to the first portion (region of interest) of the first media content M1 currently displayed in the first UI window W1.
In step 251, the software application 106 is configured to send, for each media object of interest, the object coordinates of that object to a storage device (e.g., computing server 103E). The object coordinates are stored by the storage device (e.g., computing server 103E) in association with the media access data. In one or more embodiments, the navigation context data includes the received object coordinates.
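A small sketch of the coordinate computation of steps 250-251, again under the assumptions of the earlier sketches: the object's document coordinates are expressed relative to the origin of the region of interest.

```typescript
// Illustrative sketch of steps 250-251: object coordinates of a media object
// of interest, expressed relative to the first portion (region of interest).
function objectCoordinates(
  obj: MediaObjectRecord,
  portion: PortionCoordinates
): { objectId: string; dx: number; dy: number } {
  const r = obj.element.getBoundingClientRect();
  return {
    objectId: obj.id,
    dx: r.left + window.scrollX - portion.x, // offset inside the first portion
    dy: r.top + window.scrollY - portion.y,
  };
}
```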
Steps 200-251 may be repeated for each navigation context to be stored, e.g., for another portion of the first media content or for a portion of another media content. For each execution of steps 200-251, navigation context data is stored by a storage device (e.g., computing server 103E), and a corresponding navigation context may be restored based on the navigation context data.
FIG. 2B is a flow diagram of a method for restoring navigation context in accordance with one or more embodiments. Although the various steps in the flow diagrams are presented and described in a sequence, those skilled in the art will appreciate that some or all of the steps may be performed in a different sequence, may be combined or omitted, and some or all of the steps may be performed in parallel.
The method steps may be implemented by a destination electronic device (e.g., electronic device 104B) and a storage device for storing navigation context data according to any embodiment disclosed herein. The storage device may be included in the source electronic device or may be a separate device. In one or more embodiments, the storage device may be a computing server (e.g., front end server 103E), a database 102A-102B, another electronic device 104B, 104C, or may be included in such a device.
In one or more embodiments, to share media content, first media content displayed on a source electronic device 104A must be transferred to other electronic devices (i.e., destination electronic devices) of the plurality of electronic devices 104B-104C through the computing server 103E. The first media content may be generated locally on the source electronic device 104A or loaded from a media server 103A-103C. The computing server 103E manages a database for storing media access data and associated metadata. The associated metadata includes navigation context metadata.
In the following description, the method steps are implemented by the client application 106 of the destination electronic device 104B and by the server application 107 of the front-end server 103E acting as a storage device, but the method steps are applicable to any other storage device and any other software application for displaying media content.
In step 260, the software application 106 obtains the navigation context data from the storage device (e.g., from a database, from the computing server 103E, or from the source electronic device 104A) and initiates restoring the navigation context based on the navigation context data. In one or more embodiments, navigation context data is generated as described with reference to fig. 2A.
The navigation context data may include the following (a possible data layout is sketched after this list):
- media access data for accessing the first media content M1 through the media server; and/or
- a screen shot of the first portion of the first media content M1; and/or
- coordinates of the first portion of the first media content M1; and/or
- an identification of the media object(s) in the first media content; and/or
- an identification of the media object(s) of interest within the first portion; and/or
- an identification of the volatile media object(s) within the first media content M1; and/or
- resource access data for restoring the set of one or more volatile media objects of the first media content M1; and/or
- trigger data for resuming execution of one or more interactive functions associated with one or more interactive media objects of the first media content M1; and/or
- object coordinates of each media object of interest relative to the first portion of the first media content.
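One conceivable layout for this navigation context data is sketched below in TypeScript; all field names are assumptions, and the optional fields reuse the hypothetical shapes from the earlier sketches.

```typescript
// Illustrative sketch only: one conceivable layout for the navigation context
// data enumerated above. All field names are assumptions; optional fields
// reuse the hypothetical shapes from the earlier sketches.
interface NavigationContextData {
  mediaAccessData: { url: string };            // access to first media content M1
  screenShot?: Blob;                           // screen shot of the first portion
  firstPortion?: PortionCoordinates;           // coordinates of the first portion
  mediaObjectIds?: string[];                   // media object(s) in M1
  objectOfInterestIds?: string[];              // media object(s) of interest
  volatileObjectIds?: string[];                // volatile media object(s)
  resourceAccessData?: Record<string, string>; // per-object resource data
  triggerData?: TriggerData[];                 // interactive-function triggers
  objectCoordinates?: { objectId: string; dx: number; dy: number }[];
}
```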
In step 265, the software application 106 is configured to display the screen shot in the second UI window. In one or more embodiments, the display of the screen shot is performed before the second portion of the second media content M2 is displayed in the second UI window (see step 290). In one or more embodiments, the screen shot can be used as a visual representation of a bookmark corresponding to the navigation context data. In one or more embodiments, one or more screen shots are displayed, and upon detecting a user interaction with a given screen shot, the navigation context corresponding to that screen shot is restored based on the associated navigation context data.
In step 269, the software application 106 is configured to load the first media content using the media access data. In one or more embodiments, the first media content M1B is loaded from a media server 103A, 103B, 103C. The loaded first media content M1B may differ from the initially displayed first media content M1 (step 200). For example, upon loading the first media content M1B, one or more volatile media objects of the first media content M1 that were initially displayed in step 200 may no longer be accessible.
In step 270, the software application 106 is configured to generate the second media content M2, which second media content M2 at least partially reproduces the first media content M1. In one or more embodiments, the generation of the second media content M2 is performed based on the navigation context data. In one or more embodiments, the second media content includes a set of one or more media objects of interest identified in step 240. In one or more embodiments, the second media content includes the set of one or more volatile media objects identified in step 215.
In one or more embodiments, the second media content M2 is generated by modifying the loaded first media content M1B to restore the set of one or more volatile media objects identified in step 215, based on the resource access data of the set of one or more volatile media objects of the first media content M1. In some cases, it may not be necessary to restore some of the volatile media objects, for example when the set of volatile media objects is not affected, modified or altered in the loaded first media content M1B as compared to the initially displayed first media content M1 (step 200). In one or more embodiments, the restoration is performed for one or more volatile media objects that are not present in the loaded first media content M1B, or for one or more volatile media objects on which one or more modifications have occurred. A modification may be, for example, a modification of the volatile media object itself or a modification of the order in which the volatile media objects are presented.
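The sketch below illustrates one conceivable restoration step, assuming (purely for illustration) that the stored resource access data maps an object identification to serialized HTML for that volatile media object; the disclosure does not mandate this encoding.

```typescript
// Illustrative sketch of one possible restoration in step 270, assuming
// (purely for illustration) that resourceAccessData maps an object id to
// serialized HTML for that volatile media object.
function restoreVolatileObjects(
  loadedDoc: Document, // the loaded first media content M1B
  ctx: NavigationContextData
): void {
  for (const id of ctx.volatileObjectIds ?? []) {
    const present = loadedDoc.getElementById(id) !== null;
    const storedHtml = ctx.resourceAccessData?.[id];
    if (!present && storedHtml) {
      // The volatile object is absent from M1B: re-insert it from the
      // stored resource access data.
      const container = loadedDoc.createElement("div");
      container.innerHTML = storedHtml;
      loadedDoc.body.appendChild(container);
    }
    // If the object is present and unmodified, nothing needs restoring.
  }
}
```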
In one or more embodiments, the interaction context is restored for the set of one or more media objects in the second media content based on trigger data for triggering performance of the interaction function(s) associated with the interactive media object.
In one or more embodiments, the generation of the second media content M2 is performed such that the interactive function(s) associated with the interactive media object may be triggered in the same way as in the first media content M1, and the software application 106 is configured to trigger the interactive function(s) associated with the interactive media object based on the trigger data for generating the second media content M2. For example, if the user is redirected to a target web page by clicking on a media object in the first media content M1, then the user is also redirected to the same target web page by clicking on a corresponding media object in the second media content M2. For example, if a menu is opened by clicking on a media object in the first media content M1, then the same menu will be opened in the second media content M2 by clicking on the corresponding media object in the second media content M2. Thus, the interaction context may be restored even for volatile media objects.
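As an illustration, stored trigger data could be re-attached to the corresponding media objects in M2 so that the same interactive functions can be triggered as in M1; the sketch assumes the hypothetical `TriggerData` shape used above.

```typescript
// Illustrative sketch: re-attaching stored trigger data to the corresponding
// media objects in M2 so the same interactive functions can be triggered as
// in M1 (TriggerData is the hypothetical shape sketched earlier).
function restoreInteractions(doc: Document, triggers: TriggerData[]): void {
  for (const t of triggers) {
    const el = doc.getElementById(t.objectId);
    if (!el) continue;
    if (t.href) el.setAttribute("href", t.href);                  // redirection
    if (t.onclickCode) el.setAttribute("onclick", t.onclickCode); // e.g., menu
  }
}
```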
In step 275, the software application 106 is configured to remove from the generated second media content M2 any media object that is not accessible from the recipient electronic device 104B and/or not accessible to the user 101B restoring the navigation context. In one or more embodiments, the software application 106 is configured to determine whether the user 101B may access a media object in the set of one or more media objects. This determination may be based on the user permissions / user profile of the user 101B, rather than those of the user 101A, with respect to the software application (e.g., a web application) through which the first media content has been accessed. This may concern media content that is not publicly available or that is private to the user 101A. The generation of the second media content M2 is performed such that media objects that are not accessible to the user 101B are removed from the second media content M2 (e.g., for non-volatile media objects) or are not inserted into the second media content M2 (e.g., for volatile media objects that are not present in the loaded first media content M1B).
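A sketch of such filtering, assuming a hypothetical asynchronous permission check canUserAccess evaluated against the profile of the restoring user 101B:

```typescript
// Remove from M2 every media object the restoring user may not access
// (e.g., content private to user 101A). canUserAccess is an assumed helper
// that checks the permissions/profile of user 101B.
async function removeInaccessibleObjects(
  doc: Document,
  objectSelectors: string[],
  canUserAccess: (selector: string) => Promise<boolean>,
): Promise<void> {
  for (const selector of objectSelectors) {
    if (!(await canUserAccess(selector))) {
      doc.querySelector(selector)?.remove();
    }
  }
}
```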
In step 280, the software application 106 is configured to identify a portion of interest (hereinafter referred to as a second portion) in the second media content M2 to be displayed. In one or more embodiments, the second portion is identified based on the set of one or more media objects of interest, and the second portion includes at least one media object of interest in the set of one or more media objects of interest (e.g., for the one or more media objects of interest identified in step 240).
In one or more embodiments, the coordinates relative to the second portion of the second media content M2 are determined based on the received object coordinates of the one or more objects of interest. In one or more embodiments, the coordinates relative to the second portion of the second media content M2 are determined such that, for a given object of interest, the object coordinates of the media object of interest (relative to the second portion of the second media content M2) are the same as (or as close as possible to) the object coordinates of the same media object of interest (relative to the first portion of the first media content M1).
In one or more embodiments, the coordinates of the second portion are also determined in accordance with the resolution and/or size of the display screen of the destination electronic device 104B. In one or more embodiments, the coordinates of the second portion are determined to ensure that the media object(s) of interest are inside the second portion and in a relative position with respect to the display window that is as similar as possible to the layout of the first media content M1, even though the layout of the second media content M2 is different from the layout of the first media content M1.
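As an illustration, a minimal sketch of placing the restored object of interest at the same relative height in the destination window as it had in the source window; the function and parameter names are illustrative, not taken from the patent:

```typescript
// Scroll the second UI window so that the restored object of interest sits at
// the same relative vertical position (relativeY, with 0.0 = top of window,
// 1.0 = bottom) it occupied in the first UI window, even if the destination
// window height differs.
function scrollObjectToRelativePosition(
  win: Window,
  objectOfInterest: HTMLElement,
  relativeY: number, // stored object coordinate from the navigation context
): void {
  const rect = objectOfInterest.getBoundingClientRect();
  const absoluteTop = rect.top + win.scrollY; // object position within M2
  const target = absoluteTop - relativeY * win.innerHeight;
  win.scrollTo({ top: Math.max(0, target) });
}
```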
In one or more embodiments, the generation of the second media content M2 is performed such that the layout of the second media content M2 is the same as or as similar as possible to the layout of the first media content M1, depending on the resolution and/or size of the display screen of the destination electronic device 104B compared to the resolution and/or size of the display screen of the source electronic device 104A. The same layout may be used if the resolution and/or size of the display screens are the same.
In step 290, the software application 106 is configured to display a second portion of the second media content M2 in the second UI window. In one or more embodiments, while the screen shot is displayed, the display of the second portion of the second media content M2 is triggered upon detecting an interaction by the user 101B in the second UI window (see step 265). Once the second portion is displayed, the user 101B may trigger the interactive function(s) on the interactive object(s) currently displayed in the second portion, or may navigate through the second media content M2 to change the currently displayed portion.
Steps 260-290 may be repeated for each navigation context to be restored based on the navigation context data.
In one or more embodiments, the method for storing described with reference to fig. 2A and the method for restoring a navigation context described with reference to fig. 2B may be implemented by the same software application (e.g., software application 106), but this still implies transmitting the navigation context metadata, and optionally the media content itself, to a storage device (e.g., computing server 103E). Thus, these methods may be used to remotely store, on a storage device, navigation context metadata for media content (e.g., a web page or any hypertext document), and optionally the media content itself.
Fig. 3A-3B are flow diagrams of methods for storing and restoring navigation context on the same electronic device (e.g., electronic device 104A), in accordance with one or more embodiments. Although the various steps in the flow diagrams are presented and described in a sequence, those skilled in the art will appreciate that some or all of the steps may be performed in a different sequence, may be combined or omitted, and some or all of the steps may be performed in parallel.
Method steps may be implemented by an electronic device (e.g., electronic device 104A) in accordance with any embodiment disclosed herein. The method steps may be implemented, for example, by the client application 106 of the source electronic device 104A.
The method may be implemented without implying transfer of media content or metadata to a storage device (e.g., computing server 103E). Thus, the method may be used to store navigation context metadata for media content (e.g., a web page or any hypertext document) locally on the electronic device 104A, and optionally locally store the media content itself.
Steps 300-390 described below correspond respectively to steps 200-290 described with reference to figs. 2A-2B, and are simplified because they are performed on one and the same electronic device: for example, data need not be sent to a remote storage device, and the privacy of media objects need not be considered. For simplicity, the embodiments and aspects already described with reference to figs. 2A-2B are not repeated here, but still apply.
Referring to FIG. 3A, in step 300, first media content M1 (e.g., a web page) is displayed in a first UI window W1 of the software application 106 on a display screen of the source electronic device 104A, and the software application 106 is configured to start the process of storing the navigation context. For example, the process may be initiated upon request of the user 101A, or when certain predetermined conditions are met while a given interaction is performed.
In step 301, the software application 106 is configured to store media access data enabling access to the first media content M1. The media access data may include hyperlinks (e.g., URLs), shortcuts, addresses, and/or identifiers of web pages for accessing the media content through the media servers 103A, 103B, 103C.
In step 305, the software application 106 is configured to perform a screen shot of a first portion of the first media content M1 currently displayed in the UI window W1.
In step 306, the software application 106 is configured to store the screen shot.
In step 310, the software application 106 is configured to identify a set of one or more media objects in the first media content M1.
In step 315, the software application 106 is configured to identify a set of one or more volatile media objects among the set of one or more media objects obtained in step 310. The identification of volatile media objects may be adapted to the type of website or web application, or may be specific to a given website (a rule-based approach is sketched below). In one or more embodiments, all media objects in a page may be identified as volatile media objects.
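One way such site-specific identification could be organized is sketched below: a registry mapping a site type to the selectors of its volatile regions. The rules shown are illustrative assumptions, not taken from the patent:

```typescript
// Assumed registry of volatile-object selectors per type of website or web
// application; real rules would be tuned per site.
const volatileSelectorsBySite: Record<string, string[]> = {
  "search-engine": ["#results > .result"], // result lists change over time
  "news-site": [".headline-list > li"],    // headlines rotate frequently
};

// Identify the volatile media objects of the page; if no rule is known for
// the site, fall back to treating all media objects as volatile, as the text
// above allows.
function identifyVolatileObjects(doc: Document, siteType: string): Element[] {
  const selectors = volatileSelectorsBySite[siteType] ?? ["body *"];
  return selectors.flatMap((s) => Array.from(doc.querySelectorAll(s)));
}
```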
In step 316, the software application 106 is configured to store, for the set of one or more volatile media objects, resource access data for accessing resources that enable recovery (e.g., rendering) of the set of one or more volatile media objects.
In step 320, the software application 106 is configured to identify, among the set of one or more volatile media objects, a set of one or more interactive media objects associated with the one or more interactive functions.
In step 321, the software application 106 is configured to store, for each identified interactive media object, trigger data for triggering execution of the one or more interactive functions associated with the identified interactive media object.
In step 330, the software application 106 is configured to store the coordinates of the first portion of the first media content M1 currently displayed in the first UI (user interface) window.
In step 340, the software application 106 is configured to identify, among the set of one or more media objects, a set of one or more media objects of interest in the first portion of the first media content M1 currently displayed in the first UI window of the software application.

In step 341, the software application 106 is configured to store the identification of the media object(s) of interest that have been identified in step 340.

In step 350, the software application 106 is configured to determine, for each object of interest, the object coordinates of that object of interest with respect to the first portion (region of interest) of the first media content M1 currently displayed in the first UI window.

In step 351, the software application 106 is configured to store the object coordinates of each object of interest.
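A minimal sketch of steps 350-351, assuming the object coordinates are stored as fractions of the window size (so that a stored value of 0.75 corresponds to the "75% Y" example of FIG. 4C below); the fractional representation is an assumption:

```typescript
// Compute the coordinates of an object of interest relative to the portion of
// the media content currently visible in the first UI window, expressed as
// fractions of the window dimensions.
function objectCoordinatesInWindow(
  win: Window,
  objectOfInterest: HTMLElement,
): { x: number; y: number } {
  const rect = objectOfInterest.getBoundingClientRect(); // viewport-relative
  return {
    x: (rect.left + rect.width / 2) / win.innerWidth,
    y: (rect.top + rect.height / 2) / win.innerHeight,
  };
}
```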
Steps 300-351 may be repeated for each navigation context to be stored, e.g., for another portion of the first media content or for a portion of another media content. For each execution of steps 300-351, the navigation context data is stored by the source electronic device 104A, and the corresponding navigation context may be restored based on the stored navigation context data.
Thus, the stored navigation context data may include (a possible data shape is sketched after this list):
-media access data for accessing the first media content M1 through the media server; and/or
A screen shot of a first portion of the first media content M1; and/or
-coordinates relative to a first portion of the first media content M1; and/or
-an identification of media object(s) in the first media content; and/or
-an identification of media object(s) of interest within the first portion; and/or
-identification of volatile media object(s) within the first media content M1; and/or
-resource access data for restoring a set of one or more volatile media objects of the first media content M1; and/or
-trigger data for triggering the execution of the interactive function(s) associated with the interactive media object; and/or
-object coordinates of each interesting media object relative to the first part of the first media content.
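As an illustration, the navigation context data enumerated above could be represented by a structure such as the following sketch; the field names and types are assumptions (the patent lists the kinds of data, not a schema), and the optional fields mirror the "and/or" of the list:

```typescript
// Assumed shape of the stored navigation context data; every field is
// optional, reflecting the "and/or" combinations listed above.
interface NavigationContextData {
  mediaAccessData?: string;                 // e.g., URL of the first media content M1
  screenShot?: Blob;                        // screen shot of the first portion
  firstPortionCoordinates?: { x: number; y: number };
  mediaObjectIds?: string[];                // media objects in the first media content
  objectsOfInterestIds?: string[];          // objects of interest within the first portion
  volatileObjectIds?: string[];             // volatile objects within M1
  resourceAccessData?: Record<string, string>;  // per volatile object
  triggerData?: Record<string, string>;         // per interactive object
  objectCoordinates?: Record<string, { x: number; y: number }>; // per object of interest
}
```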
Referring to FIG. 3B, in step 360, the software application 106 retrieves (e.g., from a database, data file, etc.) the navigation context data and begins restoring the navigation context based on the retrieved navigation context data.
In step 365, the software application 106 is configured to display the screen shot in the second UI window. In one or more embodiments, the screen shot is displayed before the second portion of the second media content M2 is displayed in the second UI window (see step 390). In one or more embodiments, the screen shot serves as a visual representation of a bookmark corresponding to the navigation context data. In one or more embodiments, one or more screen shots are displayed and, upon detecting a user interaction with a given screen shot, the navigation context corresponding to that screen shot is restored based on the associated navigation context data.
In step 369, the software application 106 is configured to load the first media content using the media access data. In one or more embodiments, the first media content M1B is loaded from the media server 103A, 103B, 103C. The loaded first media content M1B may be different from the initially displayed first media content M1 (step 300).
In step 370, the software application 106 is configured to generate the second media content M2, which at least partially reproduces the first media content M1. In one or more embodiments, the second media content includes the set of one or more media objects of interest identified in step 340. In one or more embodiments, the second media content includes the set of one or more volatile media objects identified in step 315.
In one or more embodiments, the second media content M2 is generated by modifying the loaded first media content M1B so as to recover the set of one or more volatile media objects identified in step 315, based on the resource access data respectively associated with the volatile media object(s) of the first media content M1. In some cases it may not be necessary to restore some of the volatile media objects, for example when those volatile media objects are not affected, modified or altered in the loaded first media content M1B compared to the initially displayed first media content M1 (step 300). In one or more embodiments, the recovery is performed for one or more volatile media objects that are not present in the loaded first media content M1B, or for one or more volatile media objects on which one or more modifications occurred. A modification may be, for example, a modification of a volatile media object itself, or a modification of the order in which the volatile media objects are presented.
In one or more embodiments, the generation of the second media content M2 is performed such that the interactive function(s) associated with an interactive media object may be triggered in the same way as in the first media content M1; the software application 106 is configured to use the trigger data, when generating the second media content M2, to enable triggering of the interactive function(s) associated with the interactive media object.
In one or more embodiments, the interaction context is restored for the set of one or more media objects in the second media content based on trigger data for triggering performance of the interaction function(s) associated with the interactive media object. Interactivity may be restored even for interactive volatile objects.
In step 380, the software application 106 is configured to identify a portion of interest (hereinafter referred to as a second portion) in the second media content M2 to be displayed. In one or more embodiments, the second portion is identified based on a set of one or more media objects of interest, and the second portion includes at least one media object of interest in the set of one or more media objects of interest (e.g., the one or more media objects of interest identified in step 340).
In one or more embodiments, the coordinates relative to the second portion of the second media content M2 are determined based on the received object coordinates of the one or more objects of interest. In one or more embodiments, the coordinates relative to the second portion of the second media content M2 are determined such that, for a given object of interest, the object coordinates of the media object of interest (relative to the second portion of the second media content M2) are the same as (or as close as possible to) the object coordinates of the same media object of interest (relative to the first portion of the first media content M1).
In step 390, the software application 106 is configured to display a second portion of the second media content M2 in the second UI window. In one or more embodiments, while the screen shot is displayed, the display of the second portion of the second media content M2 is triggered upon detecting an interaction by the user in the second UI window (see step 365). Once the second portion is displayed, the user may trigger the interactive function(s) on the interactive object(s) currently displayed in the second portion, or may navigate through the second media content M2 to change the currently displayed portion.
Steps 360-390 may be repeated for each navigation context to be restored based on the navigation context data.
Fig. 4A-4C and 5A-5C represent media content and a user interface in accordance with one or more embodiments.
FIG. 4A illustrates a first media content M1 (e.g., a web page) comprising a plurality of media objects: a first picture P1 and associated text T1, a second picture P2 and associated text T21, T22, a third picture P3 and associated text T3, a fourth picture P4 and associated text T4.
FIG. 4B illustrates the first portion of the first media content M1 of FIG. 4A displayed in a window W1 of a software application on a display screen of the source electronic device 104A. In the first portion, only the third picture P3, the text T3, the fourth picture P4, the text T4 and the text T22 are visible, while the other media objects P1, T1, P2, T21 are not visible or are only partially visible.
FIG. 4C illustrates that, when the picture P4 is selected by the user 101A of the source electronic device 104A, a media object of interest is identified that includes the picture P4 and at least a portion of the associated text T4. In the illustrated example, the object coordinates of the media object of interest correspond to a vertical position Y located at 75% of the window height from the top of the window W1. In the illustrated example, only the vertical position of the media object of interest is considered, but both the vertical and horizontal positions relative to the window W1 may be considered.
Fig. 5A shows a screen shot SC0 of a first portion of the first media content M1 displayed in the window W1 shown in fig. 4B. The third picture P3, the text T3, the fourth picture P4 and the text T4 are visible in the screen shot SC 0.
Fig. 5B shows the screen shot SC0 of fig. 5A displayed in the window W2 of the software application on the display screen of the destination electronic device 104B (see, e.g., steps 265 or 365). As shown in fig. 5B, due to the differences in size and resolution between the display screen of the source electronic device 104A and the display screen of the destination electronic device 104B, the displayed screen shot may not completely cover the window W2 and/or may not be displayed at the same scale on the display screen of the destination electronic device 104B.
Fig. 5C shows the content of the second window W2 once the second media content has been generated based on the first media content. Only a portion of the second media content is visible, the portion including the media object of interest (i.e., the picture P4 and at least a portion of the associated text T4). In the illustrated example, the object coordinates of the media object of interest with respect to the portion displayed in the second window W2 are the same as in fig. 4C: the center of the media object of interest is located vertically at 75% of the window height from the top of the window W2. Since the display screen of the recipient electronic device 104B is larger than the display screen of the source electronic device 104A, additional media objects (in the example of FIG. 5C, a portion of picture P2) may be visible compared to the media objects visible in the UI window W1 on the electronic device 104A.
FIGS. 6A-6C and 7A-7D illustrate media content and a user interface according to one or more embodiments.
Fig. 6A shows first media content M1 (e.g., a web page) that includes a list of media objects O1-O7, where each media object is a group (or container) of media objects that includes a picture and accompanying short text. The list of media objects O1-O7 is derived from a search performed by a search engine according to search criteria entered by the user 101A of the source electronic device 104A. Since the results of such searches may vary depending on the time at which the search was performed and/or depending on personalization parameters (e.g., user profile and/or user cookies and/or user location), the list of media objects O1 through O7 is considered a collection of volatile media objects and associated resource access data is stored such that the list of media objects O1 through O7 can be restored in the same order.
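A minimal sketch of capturing the order of such a volatile result list so it can be restored verbatim; serializing each item's markup is an assumption (the patent only requires that associated resource access data be stored):

```typescript
// Record each search result's markup in display order. Re-inserting these
// fragments in sequence reproduces the original list O1-O7 even if running
// the same search again would return different or re-ordered results.
function captureResultOrder(doc: Document, itemSelector: string): string[] {
  return Array.from(doc.querySelectorAll(itemSelector)).map(
    (item) => item.outerHTML,
  );
}
```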
FIG. 6B illustrates the first portion of the first media content M1 of FIG. 6A displayed in a window W1 of a software application on a display screen of the source electronic device 104A. In the first portion, only media objects O2 through O6 are visible, while the other media objects O1 and O7 are not visible.
As shown in FIG. 6C, when media object O4 is selected by the user 101A of the source electronic device 104A, volatile media object O4 is identified as the media object of interest. In the illustrated example, the object coordinates of the media object of interest are determined to be located at a height of 50% Y from the top of the window W1. In the illustrated example, only the vertical position of the media object of interest is considered, but both the vertical and horizontal positions relative to window W1 may be considered. In the example shown, the volatile media object O4 includes a picture VP and a short text ST.
Fig. 7A shows a screen shot SC1 of the first portion of the first media content M1 displayed in the window W1 shown in fig. 6B. The media objects O2-O6 are visible in the screen shot SC 1.
Fig. 7B shows the screen shot SC1 of fig. 7A displayed in the window W2 of the software application on the display screen of the destination electronic device 104B (see, e.g., steps 265 or 365). As shown in fig. 7B, due to the differences in size and resolution between the display screen of the source electronic device 104A and the display screen of the destination electronic device 104B, the displayed screen shot may not completely cover the window W2 and/or may not be displayed at the same scale on the display screen of the destination electronic device 104B.
Fig. 7C shows the content of the second window W2 once the second media content M2 has been generated based on the first media content M1. The list of media objects O1-O7 is rendered by restoring the exact order corresponding to the initial content of the first media content (see fig. 6A), whereas if the user 101B were to run the search again on the target electronic device 104B, even with the same search criteria, the search might return a different list of media objects.
As shown in FIG. 7C, only a portion of the second media content is visible in the second window W2, and that portion includes the media object of interest O4. In the illustrated example, the object coordinates of the media object of interest with respect to the portion displayed in the second window are the same as in fig. 6C: the center of the media object of interest is located vertically at 50% of the window height from the top of the window W2. Since the display screen of the recipient electronic device 104B is larger than the display screen of the source electronic device 104A, additional media objects may be visible compared to the media objects visible in the UI window W1 on the electronic device 104A.
FIG. 7D shows the content of the second window W2 once an interaction on the media object of interest O4 of the second media content M2 has been detected in the window W2 of FIG. 7C. One or more interactive functions may be triggered upon detection of an interaction on the media object O4. In the illustrated example, the triggered function is a redirection to a target web page (e.g., based on a hyperlink contained in the trigger data), and the target web page is displayed in the window W2. The target web page may include further information associated with the media object of interest O4, such as the picture VP and a full text FT. Thus, the interaction context is restored and the user can continue to navigate from the target web page.
Fig. 8A-8C also represent other user interfaces in accordance with one or more embodiments.
FIG. 8A shows a screen shot SC2 of the first portion of the first media content M1 displayed in the window W1 shown in FIG. 6B, in which the media objects other than the media object of interest are blurred (e.g., upon request by the user 101A, or automatically upon identification of the media object of interest O4). Thus, the media objects O2 through O3 and O5 through O6 are unreadable in the screen shot SC2. In the screen shot SC2, the media object O4 includes the picture VP and the short text ST.
Fig. 8B shows the screen shot SC2 of fig. 8A displayed in the window W2 of the software application on the display screen of the destination electronic device 104B (see, e.g., steps 265 or 365). As shown in fig. 8B, due to the differences in size and resolution between the display screen of the source electronic device 104A and the display screen of the destination electronic device 104B, the displayed screen shot may not completely cover the window W2 and/or may not be displayed at the same scale on the display screen of the destination electronic device 104B.
Fig. 8C shows the content of the second window W2 once the second media content M2 has been generated based on the first media content M1. The list of media objects O1 through O7 is not reproduced: only the media object of interest O4 is rendered, and the other media objects have been removed.
As shown in FIG. 8C, since only the single media object O4 from the list of media objects needs to be restored in the second media content, a larger view of the media object O4 may be presented. For example, if the media object O4 includes a hyperlink to a target web page containing more information associated with the media object O4, presenting the larger view of the media object O4 may include displaying the target web page. In the example shown, the larger view of the restored media object O4 includes the picture VP and the full text FT, rather than the short text ST previously visible in figs. 6A-6C.
The embodiments described herein may be used to restore a navigation context in any kind of software, whether a web application or an application for navigating through media content (e.g., a map application displaying an interactive map of a town, where the media object of interest may be a location in the town and/or a subway station, etc.).

Claims (17)

1. A method for restoring navigation context of software, the method comprising:
storing a navigation context, wherein storing the navigation context comprises:
storing (301) media access data enabling access to first media content displayed in the software;
detecting (310) a set of one or more media objects in the first media content;
identifying (340), among the set of one or more media objects, a set of one or more media objects of interest in a first portion of the first media content that is currently visible in a first user interface (UI) window; and
storing (341) an identification of the one or more media objects of interest;
restoring the navigation context, wherein restoring the navigation context comprises:
loading (369) the first media content based on the media access data;
generating (370) second media content based on the loaded first media content, wherein the second media content comprises: a set of one or more restored media objects of interest corresponding to the identified one or more media objects of interest;
displaying (390) the second media content such that a second portion currently visible in a second UI window comprises: at least one restored media object of interest of the set of one or more restored media objects of interest.
2. The method of claim 1, wherein storing the navigation context further comprises:
identifying a set of one or more volatile media objects among the set of one or more media objects;
storing resource access data for accessing resources used to recover the set of one or more volatile media objects;
wherein generating the second media content comprises:
modifying, at least in part, the loaded first media content by restoring at least one volatile media object of the set of one or more volatile media objects in the second media content based on the resource access data.
3. The method of claim 1 or 2, wherein storing the navigation context further comprises:
identifying, among the set of one or more volatile media objects, a set of one or more interactive media objects associated with one or more interactive functions; and
storing, for each identified interactive media object, trigger data for triggering execution of one or more interactive functions associated with the identified interactive media object;
wherein the second media content comprises at least one interactive media object of the set of one or more interactive media objects,
wherein for each interactive media object in the second media content, restoring the navigation context further comprises,
enabling a user to trigger the one or more interactive functions associated with the interactive media object under consideration, and
triggering the one or more interactive functions associated with the interactive media object under consideration based on the trigger data.
4. The method of claim 3, wherein at least one media object of interest of the set of one or more media objects of interest is an interactive media object of the set of one or more interactive media objects.
5. The method of any of the preceding claims, wherein, for at least one object of interest,
storing the navigation context further comprises:
storing object coordinates of the object of interest in the first portion of the first media content currently visible in the first UI window;
wherein restoring the navigation context further comprises:
determining second coordinates of the second portion in the second media content based on the object coordinates.
6. The method of any of the preceding claims, wherein storing the navigation context comprises:
performing a screen shot of the first portion of the first media content;
wherein restoring the navigation context comprises:
displaying the screen shot in the second UI window;
wherein displaying the screen shot is performed prior to displaying the second portion of the second media content in the second UI window;
wherein displaying the second portion of the second media content in a second UI window of the software is performed upon detecting an interaction of a user in the second UI window.
7. The method of any of the preceding claims, wherein identifying one or more media objects of interest in the first portion of the first media content comprises: detecting a user interaction on at least one media object in the first portion, and identifying, as a media object of interest, the at least one media object on which the interaction has been detected.
8. The method of any of the preceding claims, wherein identifying one or more media objects of interest in the first portion of the first media content comprises: determining a viewing direction from which a user views the first media content, and identifying, as media objects of interest, one or more media objects whose display area intersects the viewing direction or is closest to the intersection of the viewing direction with the display screen.
9. The method of any of the preceding claims, wherein identifying one or more media objects of interest in the first portion of the first media content comprises: identifying, as media objects of interest, one or more media objects closest to a center position of the first portion of the first media content.
10. The method of any of the preceding claims, wherein identifying one or more media objects of interest in the first portion of the first media content comprises: allowing a user to select one or more media objects, and identifying the selected one or more media objects as media objects of interest.
11. The method of any of the preceding claims, wherein generating the second media content comprises: determining whether a media object of the set of one or more media objects is accessible to the user restoring the navigation context, and inserting a media object in the second media content only if the media object in question is accessible.
12. The method of any of the preceding claims, wherein the layout of the second media content is different from the layout of the first media content, and the layout of the second media content is adapted to the screen resolution and/or the screen size of the display screen.
13. A method for restoring navigation context of software, the method comprising:
obtaining (260; 360) navigation context data, the navigation context data comprising,
media access data enabling access to first media content, wherein the first media content comprises a set of one or more media objects;
an identification of a set of one or more media objects of interest displayed, during navigation, in a first portion of the first media content in a first user interface (UI) window;
restoring the navigation context, wherein restoring the navigation context comprises:
loading (269; 369) the first media content based on the media access data;
generating (270; 370) second media content based on the loaded first media content, wherein the second media content comprises: a set of one or more restored media objects of interest corresponding to the identified one or more media objects of interest;
displaying (290; 390) the second media content such that a second portion currently visible in a second UI window comprises: at least one restored media object of interest of the set of one or more restored media objects of interest.
14. The method of claim 13, wherein the first media content comprises a set of one or more volatile media objects,
wherein the navigation context data further comprises: resource access data for accessing resources for recovering the set of one or more volatile media objects identified in the set of one or more media objects;
wherein generating the second media content comprises:
modifying, at least in part, the loaded first media content by restoring at least one volatile media object of the set of one or more volatile media objects in the second media content based on the resource access data.
15. A method for restoring navigation context of software, the method comprising:
sending (201) media access data to a recipient device, the media access data enabling access to first media content displayed in the software;
detecting (210) a set of one or more media objects in the first media content;
identifying (240), among the set of one or more media objects, a set of one or more media objects of interest in a first portion of the first media content that is currently visible in a first user interface (UI) window; and
sending (241) an identification of the one or more media objects of interest to the recipient device;
restoring the navigation context on the recipient device, wherein restoring the navigation context comprises:
loading (269) the first media content based on the media access data;
generating (270) second media content based on the loaded first media content, wherein the second media content comprises: a set of one or more restored media objects of interest corresponding to the identified one or more media objects of interest;
displaying (290) the second media content such that a second portion currently visible in a second UI window comprises: at least one restored media object of interest of the set of one or more restored media objects of interest.
16. An electronic device, the electronic device comprising:
a processor;
a memory operatively coupled to the processor, the memory comprising computer readable instructions which, when executed by the processor, cause the processor to perform the steps of the method of any one of claims 1 to 15.
17. A computer program product comprising computer readable instructions which, when executed by a computer, cause the computer to perform the steps of the method according to any one of claims 1 to 15.