US20230362242A1 - Direct input from a nearby device - Google Patents
Direct input from a nearby device
- Publication number
- US20230362242A1 (U.S. application Ser. No. 18/222,704)
- Authority
- US
- United States
- Prior art keywords
- client
- content
- server
- electronic device
- application
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/01—Protocols
- H04L67/10—Protocols in which an application is distributed across nodes in the network
- H04L67/104—Peer-to-peer [P2P] networks
- H04L67/1087—Peer-to-peer [P2P] networks using cross-functional networking aspects
- H04L67/1091—Interfacing with client-server systems or between P2P systems
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/01—Protocols
- H04L67/10—Protocols in which an application is distributed across nodes in the network
- H04L67/104—Peer-to-peer [P2P] networks
- H04L67/1044—Group management mechanisms
- H04L67/1046—Joining mechanisms
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04W—WIRELESS COMMUNICATION NETWORKS
- H04W4/00—Services specially adapted for wireless communication networks; Facilities therefor
- H04W4/06—Selective distribution of broadcast services, e.g. multimedia broadcast multicast service [MBMS]; Services to user groups; One-way selective calling services
- H04W4/08—User group management
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04W—WIRELESS COMMUNICATION NETWORKS
- H04W4/00—Services specially adapted for wireless communication networks; Facilities therefor
- H04W4/80—Services using short range communication, e.g. near-field communication [NFC], radio-frequency identification [RFID] or low energy communication
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04W—WIRELESS COMMUNICATION NETWORKS
- H04W76/00—Connection management
- H04W76/10—Connection setup
- H04W76/14—Direct-mode setup
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04W—WIRELESS COMMUNICATION NETWORKS
- H04W84/00—Network topologies
- H04W84/18—Self-organising networks, e.g. ad-hoc networks or sensor networks
Abstract
The subject technology provides a system of server and client devices, of which at least one server device includes an application configured to receive data directly from another one of the client devices. The application is configured to obtain a list of available client devices and associated features and provide input options for display based on the list. When one of the input options is selected, the application activates a component and/or an application of the client device for generation of the desired data. When the desired data has been generated by the client device, the generated data is directly input from the client device into the running application on the server device, without storage of the generated data at the client device, and without operation of the same application on the client device.
Description
- Wireless and wired connections, such as Wi-Fi, cellular (3G/LTE) or Ethernet, may be used for internet connectivity to handle our needs for file transfer, browsing the internet, social networking, email/messaging, sending photos to each other, audio/video calling and e-commerce. It is curious that when we pass a printed photo to someone next to us in the real world, we simply hand it over, but when we want to share a digital photo from our smartphone with someone standing in front of us, we typically send it across the internet, creating copies along the way. This approach lacks privacy and can be slow and costly when uploading to cloud storage or a web service via a 3G/LTE cellular connection. Further, it seems counter-intuitive to send a photo to the internet and back when we are simply trying to move it from one of our devices to another device physically located next to it or in close proximity. Emailing a photo or file to yourself feels as strange as mailing a letter to yourself. For short-distance communication, we typically use a USB cable to connect our smartphone to our computer, or Bluetooth/NFC for light data transfers, for example streaming audio or transferring business cards. A USB cable is just not as user friendly as not needing one at all, whereas Bluetooth/NFC are not fast enough to transfer rich media such as photos and videos. Accordingly, technical problems exist in the conventional techniques for exchanging data amongst users and devices.
- The subject matter of the following documents is incorporated herein by reference.
-
1. US20150230078, "Secure Ad Hoc Data Backup to Nearby Friend Devices," filed Feb. 10, 2014
2. US20140344446, "Proximity and context aware mobile workspaces in enterprise systems," filed Apr. 11, 2014
3. US20130268929, "Method for sharing an internal storage of a portable electronic device on a host electronic device and an electronic device configured for same," filed Apr. 5, 2012
4. US20060200570, "Discovering and mounting network file systems via ad hoc, peer-to-peer networks," filed Mar. 2, 2005
5. U.S. Pat. No. 8,934,624, "Decoupling rights in a digital content unit from download," filed Dec. 27, 2011
6. U.S. Pat. No. 8,086,535, "Decoupling rights in a digital content unit from download," filed Apr. 4, 2006
7. PCT/US2013/076063, "Gesture-based information exchange between devices in proximity," filed Dec. 18, 2013
8. US20150082382, "Techniques for multi-standard peer-to-peer connection," filed Jun. 20, 2014
9. US20140362728, "Discovery of nearby devices for file transfer and other communications," filed Sep. 25, 2013
10. U.S. Pat. No. 8,838,697, "Peer-to-peer file transfer between computer systems and storage devices," filed Mar. 8, 2012
11. US20150295995, "File transferring method and device through wi-fi direct," filed Jun. 21, 2013
12. US20100081385, "Peer-to-peer host station," filed Sep. 30, 2008
13. US20140057560, "Peer-to-peer host station," filed Aug. 23, 2013
14. US20140287690, "Method of connecting networks using wi-fi direct in image forming apparatus, image forming apparatus supporting wi-fi direct, and image forming system," filed Mar. 21, 2014
15. U.S. Pat. No. 9,078,087, "Method and apparatus for forming Wi-Fi P2P group using Wi-Fi direct," filed Aug. 6, 2012
16. US20140199967, "Bump or Close Proximity Triggered Wireless Technology," filed Jan. 16, 2013
17. US20110163944, "Intuitive, gesture-based communications with physics metaphors," filed Jan. 5, 2010
18. U.S. Pat. No. 9,224,364, "Apparatus and method for interacting with handheld carrier hosting media content," filed Apr. 8, 2013
19. U.S. Pat. No. 8,458,363, "System and method for simplified data transfer," filed Sep. 30, 2008
- Some examples herein relate generally to wireless data communication. For instance, some implementations may relate to wireless sharing of content between nearby devices. Further, some examples relate to presenting content stored by one or more server devices at a client device, and interacting with the content at the client device.
- In some implementations, a plurality of wireless computing devices are connected as an ad-hoc, pop-up wireless network using direct peer-to-peer wireless connections amongst the devices, without using a wireless access point as in conventional technologies. Each device may store a plurality of data in the form of files which collectively amount to the content of the respective device. Each device may take the role of a client or a server or both, as described in the implementations disclosed, unless otherwise noted. As a client, the device requests access to the content of each server. As a server, the device manages client access to its content and further prepares a lightweight representation of the content for the client. At the client, the lightweight representation of content is received from one or more of the servers, and is further modified to be presented to a user of the client. According to the various implementations that will be described in greater detail herein, from the presentation of server content at the client, a user can preview and interact with the remote content.
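The "lightweight representation" a server prepares could be sketched, purely as an illustration, as a metadata-only listing of its files. The function name and JSON field names below are hypothetical, not taken from the disclosure; the point is that only names, sizes and timestamps travel to the client, never the file bytes themselves:

```python
import json
import os

def build_content_listing(root):
    """Walk a local content directory and emit a lightweight,
    metadata-only JSON listing: names, relative paths, sizes and
    modification times, but never the file contents themselves."""
    items = []
    for dirpath, _dirs, files in os.walk(root):
        for name in sorted(files):
            path = os.path.join(dirpath, name)
            st = os.stat(path)
            items.append({
                "name": name,
                "path": os.path.relpath(path, root).replace(os.sep, "/"),
                "size": st.st_size,
                "modified": int(st.st_mtime),
            })
    return json.dumps({"items": items})
```

A client receiving such a listing can present the server's content for browsing before any bulk transfer has occurred.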
- Any of the devices can concurrently act as a client, or a server, or both. Once connected, the client device can retrieve, present, interact with and operate on the contents of the servers. According to the particular implementation, the content(s) may be presented in the form of an interactive document, a filesystem volume, and/or an API, different from the original form in which the content(s) are stored at each server. The client directly interacts with and operates on the content(s) of the server(s) according to the presentation thereof at the client. The types of interactions the client may perform can vary by presentation but generally include viewing, browsing, downloading, uploading, editing, deleting, tagging, commenting and the like.
-
FIG. 1A shows an implementation of retrieval and presentation of remote content among computing devices in proximity using peer-to-peer wireless networking. -
FIG. 1B shows an implementation of presenting aggregated remote content on a mobile device or computer from various types of computing devices, such as a wearable device (for example, a smart watch), a digital camera and an embedded computing device. -
FIG. 1C shows an implementation of presenting aggregated remote content on a vehicle infotainment system or on an airline in-flight entertainment console from various types of computing devices, such as a smartphone, computer, wearable device, digital camera, and an embedded computing device. -
FIG. 1D shows an implementation of presenting aggregated remote content on a screen of a television, monitor, or projector, which may or may not be via a set-top-box, from various types of computing devices, such as a smartphone, computer, wearable device, digital camera, and an embedded computing device. -
FIG. 1E shows an implementation of aggregated remote content presented in various layout styles of interactive documents, such as a web page, list, timeline, newsfeed, grid and multimedia. -
FIG. 1F shows details of the content aggregation implementations in cases of (a) a mobile device as a client with another mobile device as a server, (b) a mobile device as a client with a computer as a server, (c) a computer as a client and a mobile device as a server, and (d) a computer as a client with another computer as a server. -
FIG. 1G shows implementation of content aggregation using file system abstraction on a computer from a mobile device and/or another computer as server(s). -
FIG. 1H shows an implementation of content propagation from a user's computer to another's computer via their mobile devices. -
FIG. 2A shows a high level system architecture schematic of an exemplary implementation for browsing and interacting with remote content nearby. -
FIG. 2B shows a typical file system architecture of an operating system in accordance with implementations described herein. -
FIG. 2C shows an exemplary intermediate data structure of JSON format to translate a content list and content metadata into a file system tree and file system attributes. -
FIG. 2D shows examples of identification types that may be used by either the server or client device to identify itself. -
FIG. 3A shows the internal software components of the client application 203. -
FIG. 3B shows the internal software components of the server application 205. -
FIG. 3C shows an implementation of aggregated remote contents presented as an interactive document on the client device 202, constructed from multiple different content types and content structures stored on multiple different server devices. -
FIG. 3D is a modification of FIG. 3C and shows the approach of using the virtual file system adapter 208 in the client application to present the remote contents to the user of the client device 202. -
FIG. 3E is a modification of FIG. 3C and shows a presentation of the remote content performed by the custom application 207 through API 209. -
FIG. 4A shows a set of photos taken by a group of four persons on the Machame route 400. -
FIG. 4B shows an aggregated presentation of photos taken by the other 3 users implemented on the device 420. -
FIG. 4C shows an aggregated presentation of photos implemented on a 3rd party application 206 on computer device 428. -
FIG. 4D shows an example of remote content mapping when a client application 203 implements presentation using the virtual file system adapter 208. -
FIG. 5A shows a flowchart of a remote access request initiated by client device 202. -
FIG. 5B shows a flowchart of a remote access request initiated by server device 201. -
FIG. 5C (1) shows a flowchart of the processing of the remote access request. -
FIG. 5C (2) shows a continuation of the flowchart of the processing of the remote access request in FIG. 5C (1). -
FIG. 6A shows a sequence diagram of the initial process of accessing the server device's contents. -
FIG. 6B shows a sequence diagram of the process of retrieving additional content metadata. -
FIG. 6C shows a sequence diagram of the read operation of the remote content. -
FIG. 6D shows a sequence diagram of the create operation of the remote content. -
FIG. 6E shows a sequence diagram of the delete operation of the remote content. -
FIG. 6F shows a sequence diagram of the modify operation of the remote content. -
FIG. 6G shows a sequence diagram of priority handling for remote content operations of different categories. -
FIG. 6H shows a sequence diagram of priority handling for remote content operations of the same category. -
FIG. 7A shows a screenshot of one implementation of the client application in the menu bar of an electronic device. -
FIG. 7B shows a screenshot of one implementation of the client application in the menu bar of an electronic device showing another nearby electronic device. -
FIG. 7C shows a screenshot of one implementation of the server application receiving a permission request to access its contents from a client application running on an electronic device. -
FIG. 7D shows a screenshot of one implementation of the server application showing that the client application running on the electronic device is currently permitted to browse contents thereof. -
FIG. 7E shows a screenshot of one implementation of the client application on the electronic device with its Finder displaying the photo and video contents of the server device, with photo albums organized into corresponding folders. -
FIG. 7F shows a screenshot of one implementation of the client application on the electronic device with its Finder displaying a list of photos and thumbnails contained within an album on the server device. -
FIG. 7G shows a screenshot of one implementation of the server application showing a request from a client application to modify photo content on the server device. -
FIG. 7H shows a screenshot of one implementation of the server application showing a request from a client application to delete photo content on the server device. -
FIG. 7I shows a screenshot of one implementation of the client application showing a connected status of the server device. -
FIG. 7J shows a screenshot of one implementation of the client application showing available user storage space on a connected server device.
- A peer-to-peer (P2P) wireless connection, generally referred to as "Wi-Fi Direct," offers the advantages of (a) a higher data transfer rate than current Bluetooth technology, comparable to the speed of infrastructure Wi-Fi (i.e., connecting to a Wi-Fi access point), and (b) zero configuration for setting up ad-hoc connections. The present inventors have found that Wi-Fi Direct is therefore more suitable than current Bluetooth technologies for transferring rich media files such as photos and videos. In the coming years, it is expected that the next version of Bluetooth (i.e., Bluetooth 5) will be widely adopted and become a viable alternative to Wi-Fi Direct for high-speed short-distance data transfers. In the following description, these and other such high-speed, short-distance, zero-configuration wireless peer-to-peer connections are generally referred to as peer-to-peer wireless connections. Such connections and ad-hoc networks readily lend themselves to wireless sharing of content, as will become evident in the several scenarios and various implementations described below.
- In today's connected age, the internet or cloud serves as the source of all information, with users connecting to it to retrieve information, even to guide their locality-based decisions. In many cases, however, a need exists to efficiently, conveniently and directly discover, browse and interact with the information around you without relying on an intermediary such as the world wide web or the cloud. These peer-to-peer wireless connections offer a unique opportunity to build a set of applications for browsing and interacting with nearby devices, such as browsing nearby files, interacting with people nearby over an ad-hoc local pop-up social network, making audio or video calls to people in proximity and engaging in commerce in our vicinity, all without ever needing to connect to the internet or to a Wi-Fi access point. However, architecting these applications requires more innovation than simply porting the existing web architecture to work over a peer-to-peer wireless connection.
- Similarly, wireless mobile devices are serving as the new digital cameras, communication devices and personal computers. People take more photos using mobile phones than with dedicated digital cameras, and we live in a world of rich media, with billions of photos and videos taken and uploaded daily for sharing and backup to social networks and messaging apps using services like Facebook, Twitter, Instagram, Flickr, and cloud backup services like Dropbox, Apple iCloud, Google Drive, Microsoft OneDrive, Box, etc. Consequently, transferring content in the form of files, such as photos, videos, documents and the like, between devices and people is a daily necessity for the purpose of sharing, editing, organizing, storing or workflow.
- However, the conventional approaches present challenges when storing and sharing content like digital photos and videos with each other. While billions of photos and videos are captured, shared and uploaded daily using smartphones, typical sharing is generally considered a "push" mechanism (i.e., the sender chooses the content and a target person to send it to). The push approach creates multiple redundant copies of the photos, for example, on each recipient's device and, in certain cases, on the cloud and each of the devices connected to the cloud. In contrast, the beauty of the world wide web is that while the amount of information on the WWW is almost infinite, users can choose to browse, interact with and download only what they need as a "pull" mechanism (i.e., the user chooses the content and when to receive it). It should be appreciated that it is impossible to "push" the entire contents of the Internet to a user device. Similarly, if a user wishes to share a large number of photos with a large number of users nearby, it would be more efficient to let each user browse the aggregated contents and download what that user is interested in. Accordingly, the present inventors recognized that a need exists for a similar innovative breakthrough when sharing photos, videos, documents and files with nearby devices by using the metaphor of "pull" instead of the conventional "push". For example, if a user is interacting with several other users in a social or business situation, it would be quite useful and advantageous to aggregate and create a shared feed of contents from the other nearby users that those users can browse. A specific user can then choose to download only the contents he is interested in, or just browse other users' contents without downloading. A typical "pull" methodology requires a cloud proxy to serve as an intermediary; however, some examples herein may include a direct peer-to-peer mesh connection between the nearby devices using a client-server architecture.
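The "pull" metaphor can be sketched as follows. The class and method names are hypothetical stand-ins; a real implementation would communicate over a direct peer-to-peer wireless connection rather than in-memory calls. The client browses a server's lightweight listing for free and transfers only the items it selects:

```python
class InMemoryServer:
    """Stand-in for a nearby peer; the real transport would be a
    direct peer-to-peer wireless connection such as Wi-Fi Direct."""

    def __init__(self, files):
        self._files = files  # name -> bytes

    def list_content(self):
        # Lightweight representation: names and sizes only, no payloads.
        return [{"name": n, "size": len(b)} for n, b in self._files.items()]

    def fetch(self, name):
        # Actual bytes move only on explicit client request ("pull").
        return self._files[name]

def pull(server, wanted_names):
    """Browse the full listing, but transfer only the chosen items."""
    listing = server.list_content()
    chosen = [m["name"] for m in listing if m["name"] in wanted_names]
    return {n: server.fetch(n) for n in chosen}
```

Contrast this with "push," where the sender would transmit every file to every recipient regardless of interest, creating redundant copies.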
- The present disclosure relates to environments of client device(s) and server device(s) where content that is stored on one or more server devices is presented at the client device so that the presented content from the servers can be interacted with at the client device. Further, the communications between the client devices and server devices are facilitated by direct wireless connections without relying upon a wireless access point to provide a wireless local area network to the client device(s) and server device(s).
- In some examples herein, an electronic device is described as a “client device” or “client” and/or as a “server device” or “server”. While particular devices may be referred to as a client or server, in the various implementations described herein, each particular device is generally considered to be capable of acting as a client and server contemporaneously, unless specifically noted otherwise. Each device is a wireless computing device which performs wireless communications with other devices.
- Examples of such devices include mobile telephones or smartphones which are provided with a processor and storage media to execute a mobile operating system (OS) such as iOS, Android, BB10 and the like, as well as hardware for wireless communications. Other examples of devices include laptops, tablets and other general purpose computers or computing systems or devices, which operate by executing an OS as is generally known in the art, such as OS X, Windows, Unix, and the like, and include a storage area, processor and hardware for wireless communications. Still further examples of devices include smartwatches, digital cameras, smart TVs and set-top boxes, car infotainment systems, in-flight entertainment systems, embedded computing devices as in the "internet of things", and the like. Each electronic device herein is provided with one or more processors and one or more storage media that are configured or programmed to perform the operations, acts, sequences and methods which will be described in further detail below.
- As described herein, direct peer-to-peer wireless connections can include Wi-Fi Direct connections, Apple Wireless Data Link (AWDL) connections, IEEE 802.11 ad hoc mode connections, Bluetooth 5 or higher, and the like. Unless otherwise noted, wireless connections between devices according to the implementations described herein may refer to any of the foregoing methods of connecting devices directly.
- Direct peer-to-peer connections are implemented between devices to facilitate wireless file transfer to greater effect than is available with conventional techniques. Using direct peer-to-peer connections is beneficial because many users are not comfortable with uploading content and other information to the cloud, since storing content on the internet is fraught with privacy and security issues. In addition, the present inventors recognized that content transfer could be accomplished in a more advantageous manner without needing to send content to a nearby computer across the internet and back. The need to provide an alternative way to share content locally is further compounded when considering that storage space and bandwidth generally cost users money and may be limited. Accordingly, some examples may provide content sharing and content transferring which not only avoid incurring storage and bandwidth costs but also remove the necessity of cloud storage, internet access, Wi-Fi access points, and wired connections as in the conventional techniques, while at the same time preserving the speedy, simple and secure user experience.
- Conventional direct file transfer technologies may not offer a means to browse or manipulate remote content without being required to transfer the content first. Further, conventional techniques may not provide a manner to aggregate content from multiple sources, nor do they present contextual relationships among multiple content items, such as a hierarchy (for example, a directory tree), chronology (for example, a timeline or newsfeed), association (for example, a smart album) or the like.
- The technical benefit and technological improvements of the implementations disclosed herein can be explained with reference to some exemplary scenarios as follows, which in no way are intended to limit the present disclosure. In one scenario, you are walking around a neighborhood and come across a restaurant on a busy street that appears interesting. You might wish to look at its menu before deciding whether to enter the establishment, or you might want to know what other patrons thought of the restaurant. To get this information in a conventional manner, you would be required to connect to the internet on your wireless mobile device (i.e., smartphone) and search for the restaurant by entering its name and location into a search engine, or access its website or mobile app, to look at the menu. Moreover, you could find a user review site or app such as Yelp, and look at user reviews for the restaurant. Alternatively, a much more natural way of gathering information about the restaurant before crossing the street would be to pull out your mobile device and automatically see the restaurant's name pop up in the "nearby feed" section of the application on your device. Accessing the nearby feed, you see a variety of information and contents there, including the restaurant's menu, popular items, coupons and an interactive living document that shows what other users thought of the restaurant, while also giving you the option/ability to leave a review or like the restaurant. A further advantage would be if you did not have to rely on internet access to be able to access this data: your device is simply picking up on information being made available by other devices within its range. In this scenario, the restaurant's device, as a server, is sharing digital contents for potential patrons, each with a smartphone as nearby clients, to peruse, without needing to create a web site or an app.
- In another scenario, suppose that you run into your friend John as you are boarding the plane on your way back from vacationing in Hawaii. John, who has also vacationed in Hawaii, has a phone full of photos, and both of you are eager to share your experiences. However, John has been upgraded to first class, and you have to make your way to the back of the plane. If the airplane provides no internet access, you have no way of interacting with John's photos during the flight using conventional techniques. However, both your phones are actually capable of communicating directly with each other using high-speed peer-to-peer wireless communication technologies such as Wi-Fi Direct. The present inventors recognized that it is needed and desirable for each phone to be able to discover and browse the content available on all devices within its range, albeit subject to privacy and access control restrictions. So, in this scenario, John could make his photos available for browsing to nearby consumers in accordance with the implementations described herein, subject to certain restrictions of his choosing, such as only allowing access to people on his phone's contact list or social network, only allowing read access, or only allowing access to certain photos or albums. All devices within range of John's device, including yours, would then be able to browse (pull) and interact with the data they have been granted permission to access, without requiring an intermediate external network to provide connectivity, or needing John to send (push) the data to you. From the perspective of any passenger on the plane, their smartphone can act as a client for browsing the aggregated content made available by their fellow passengers as servers and vice versa, in the form of an ad-hoc, pop-up wireless network.
- In the scenarios described above, using Wi-Fi Direct to form an ad-hoc, pop-up wireless network over direct peer-to-peer wireless connections alone does not address the problem of needing an efficient strategy or mechanism to transfer large amounts of content. By way of example, each phone may hold tens of gigabytes of photos, and transferring all of them would take a prohibitively long time. Therefore, the inventors recognized that beyond creating such a wireless network, it is further desirable to provide a more efficient alternative to needing to “push” content to every individual client device that wants access. The “push” approach creates multiple unnecessary copies of the content and does not give the browsing user the opportunity to choose which specific content he wishes to download (save or store) to his device. Thus, some examples provide an efficient, speedy and simple means for browsing and interacting with the contents of John's smartphone from nearby wireless devices. According to various implementations, the contents from John's smartphone can be presented either as (a) an interactive document akin to a webpage, for example as a nearby feed or timeline, at a client device, or (b) in the file system of the client device, or (c) within a third party application at the client device via an API. In these various implementations, the remote content of the servers should be a lightweight representation of the actual content stored at the servers, and the actual content should only be transferred upon request from the client.
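The lightweight-representation approach described above can be illustrated with a minimal sketch. The names and data below (`CONTENT_STORE`, `build_manifest`, `fetch_content`) are hypothetical illustrations, not part of the disclosed implementation: only metadata travels in the manifest, and actual bytes are transferred on explicit request.

```python
# Hypothetical server-side content store; "bytes" stands in for the
# full file data that remains on the server device.
CONTENT_STORE = {
    "IMG_0001": {"name": "beach.jpg", "size": 4_200_000,
                 "type": "image/jpeg", "bytes": b"...full image data..."},
    "VID_0001": {"name": "surf.mp4", "size": 380_000_000,
                 "type": "video/mp4", "bytes": b"...full video data..."},
}

def build_manifest(store):
    """Return a metadata-only listing suitable for sending to clients."""
    return [
        {"id": item_id, "name": item["name"],
         "size": item["size"], "type": item["type"]}
        for item_id, item in store.items()
    ]

def fetch_content(store, item_id):
    """Transfer the actual bytes only when a client requests an item."""
    return store[item_id]["bytes"]

manifest = build_manifest(CONTENT_STORE)
```

A client receiving `manifest` can render a full browsable listing while the gigabytes of underlying data stay on the server.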
- In some implementations, presenting the nearby content via an interactive document is like creating a web page or a feed of nearby accessible content. A user of the client device can browse the contents and choose which items he wants to open or save. However, the user of the server device sharing the contents does not need to create the interactive document since it can be created on the fly at the client device, from the lightweight representation received from the server, by using a web page template or the like. By way of example only, presenting content as an interactive document or feed can be particularly advantageous in social situations such as a group of people at a birthday party or on a hike. In such circumstances, by providing the client and server software architecture at each person's device via an application or built-in function of the device's operating system, each person can browse the photos taken by their friends' devices directly from their own individual device without having to rely on a wireless access point. Further, presenting content as an interactive document or feed can be particularly advantageous in a classroom where students can browse and download reading materials on their devices as shared from the professor's device, without requiring the professor to upload them to a website. Similarly, in a meeting or at a conference, parties can exchange business cards and documents without needing internet access or waiting for the content owner to send them by email.
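On-the-fly generation of such an interactive document from a template can be sketched as follows. The template string and item fields here are hypothetical; the point is that the server supplies only a lightweight item list and the client expands it locally into a browsable feed.

```python
# Hypothetical client-side template; the server never authors any
# document, it only publishes item metadata.
FEED_ITEM_TEMPLATE = (
    "<article><h3>{name}</h3>"
    "<p>shared by {owner}</p>"
    "<button>Open</button><button>Like</button></article>"
)

def render_feed(items):
    """Build 'nearby feed' markup from server-provided metadata only."""
    return "\n".join(FEED_ITEM_TEMPLATE.format(**item) for item in items)

feed = render_feed([
    {"name": "menu.pdf", "owner": "Corner Bistro"},
    {"name": "specials.jpg", "owner": "Corner Bistro"},
])
```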
- In some implementations, presenting the nearby content via the filesystem of a client device offers particular advantages. Users already know how to use the file manager interface of their device, for example the Finder in the case of OS X or File Explorer in the case of Windows. The file manager allows users to browse, open, rename, move, copy, tag and organize photos in a folder. For example, it is very convenient and handy if all the photos, videos and documents, i.e., the server content, are made accessible directly via the Finder or other OS interface of the client, simply by placing a first electronic device having the content stored thereon in close enough physical proximity to be able to directly and wirelessly communicate with a second electronic device that acts as a client. For example, Wi-Fi Direct typically has a range of approximately 30 meters or 100 feet. Users already know what to do with the files, as they know the typical gestures of drag and drop, select, double-click to open, right click, etc. of the operating system of their device. However, it would be a time-consuming file transfer exercise to copy the entire contents of the server, such as the entire photo and video library of the first electronic device, to the second electronic device, and those of skill in the art would recognize the time and processing constraints such a transfer poses. Typically, users have large volumes of content on their smartphones, on the order of thousands of photos/videos and tens of gigabytes of data, so even over a high speed peer-to-peer wireless connection, transferring the entire contents of a server device would take a long time.
Accordingly, the present inventors have proposed to provide a lightweight representation of server content which is presented at the client in a manner that, according to the particulars of the implementation, makes the server content appear as if it already exists at the client, without actually needing to transfer the content beforehand. In this manner, the user of the client device is able to browse the entire contents of the servers and choose to download only selected content, thus providing the user the ability to browse the entire aggregated content of the servers while also selecting desired content on demand wirelessly.
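One way to sketch this filesystem-style in-place browsing is with lazy stub objects. The `RemoteFileStub` class and fetch callback below are hypothetical illustrations: the client mirrors only names and sizes, and bytes cross the wireless link the first time a stub is opened.

```python
class RemoteFileStub:
    """Client-side stand-in for a file that still resides on the server."""

    def __init__(self, path, size, fetch_fn):
        self.path = path          # path as shown in the client's file manager
        self.size = size          # size reported without transferring bytes
        self._fetch = fetch_fn    # callback that pulls bytes from the server
        self._data = None         # populated lazily on first open

    def open(self):
        if self._data is None:                  # transfer only on demand
            self._data = self._fetch(self.path)
        return self._data

def fake_server_fetch(path):
    # Stands in for a request over the peer-to-peer wireless link.
    return b"contents of " + path.encode()

stub = RemoteFileStub("/Photos/hike.jpg", 2_000_000, fake_server_fetch)
```

Until `open()` is called, the stub consumes essentially no client storage, which is the behavior the paragraph above describes.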
- In some implementations, presenting the nearby content via an API hook would allow any third-party application to advantageously browse and interact with the contents of servers wirelessly. The third-party application may be customized to access the API hook, or use plug-ins that interact with the API hook, as will be appreciated and understood by those of skill in the art. For example, a user can edit a photo on the first electronic device directly from within a photo editing application executing on the second electronic device without needing to explicitly send the file to the second electronic device and/or send back the edited photo to the first electronic device.
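A minimal sketch of such an API surface follows. The class and method names (`NearbyContentAPI`, `list_nearby`, `read_item`, `write_item`) are hypothetical, not a published interface; the key property is that writes land on the server copy, so no round-trip file transfer is needed.

```python
class NearbyContentAPI:
    """Hypothetical API hook a third-party application might call."""

    def __init__(self):
        self._servers = {}   # server_id -> {item_id: bytes}

    def register_server(self, server_id, items):
        self._servers[server_id] = dict(items)

    def list_nearby(self):
        """Enumerate items on all in-range servers without copying them."""
        return [(sid, iid) for sid, items in self._servers.items()
                for iid in items]

    def read_item(self, server_id, item_id):
        return self._servers[server_id][item_id]

    def write_item(self, server_id, item_id, data):
        # Edits are propagated back to the server copy in place.
        self._servers[server_id][item_id] = data

api = NearbyContentAPI()
api.register_server("johns-phone", {"photo1": b"raw"})
api.write_item("johns-phone", "photo1", b"edited")
```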
- By connecting devices directly to each other as an ad-hoc, pop-up wireless network, the client and server device architecture is designed to support presentation of content according to the implementations described herein, and may include one or more of the following mechanisms and technological advantages:
- a. Efficient discovery of nearby devices, identification of the fastest way of connecting to a nearby device, and setup of the connection, thus creating an ad-hoc peer-to-peer network of devices using point-to-point wireless connections, without any manual configuration by a user;
- b. An efficient, light-weight content discovery, aggregation and browsing client and server architecture that allows a client device to browse content stored on one or more servers in-place, without having to create a local copy of the content at the client. This is advantageous because a client can have multiple neighboring server devices that are making content available, as well as multiple other client devices, so creating local copies would be prohibitively slow due to bandwidth and cost constraints, in addition to rapidly exhausting the client device's storage space. The goal is to provide the look-and-feel and high performance of local data access to the client without incurring the expense of creating a local copy of the data on the client;
- c. Creation of a mechanism by which a device can choose to function as a client, or as a server, or both simultaneously. In case a device chooses to be a server, a mechanism by which the user/owner of the device can specify the scope of the data on the device that should be made available for discovery and access by other neighboring devices, and to whom;
- d. Creating access control mechanisms that allow the owner or producer of data to maintain the desired levels of privacy of the data hosted on a server device when it is accessed by multiple client devices. This access control mechanism determines which client device can access a given piece of data, and the type of access privilege granted (read only, read and write, allow copy, etc.);
- e. Creating multiple intuitive user interfaces by which a client can interact with the data hosted on other server devices in its vicinity. Such user interfaces might range from (i) a file manager to (ii) web pages to (iii) integrating over an API hook with existing custom applications for each file type, such as the ability to browse all photos on neighboring devices using a photos software application;
- f. Creating mechanisms by which data can be transferred directly from device to device over the wireless communication medium, without requiring an intermediary or proxy or connectivity to an external network like the internet;
- g. Creating and implementing protocols by which a server advertises itself on the short range wireless network and handles connectivity and file operations with nearby client devices;
- h. Maintaining the integrity of the data hosted by a server device, which could be potentially accessed and modified by multiple clients concurrently; and/or
- i. Discovering when devices move out of range, updating the content available over the short range wireless network accordingly, and notifying consumers of the updated data as needed.
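Item (i) in the list above can be sketched with a last-seen table. The timeout value and function names below are hypothetical: each server advertisement refreshes a timestamp, and servers not heard from within the timeout are dropped from the client's aggregated view.

```python
TIMEOUT = 10.0  # hypothetical: seconds of silence before a server is dropped

def update_view(last_seen, server_id, now):
    """Record an advertisement from server_id heard at time `now`."""
    last_seen[server_id] = now

def prune_view(last_seen, now):
    """Drop servers that went out of range; return the dropped set."""
    stale = {sid for sid, t in last_seen.items() if now - t > TIMEOUT}
    for sid in stale:
        del last_seen[sid]
    return stale

view = {}
update_view(view, "camera-133", now=0.0)
update_view(view, "watch-132", now=8.0)
dropped = prune_view(view, now=12.0)   # camera-133 has been silent > TIMEOUT
```

A real implementation would hook `prune_view` to the wireless stack's presence events and notify consumers of the removed content, as item (i) describes.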
- In some implementations, a single user owns multiple devices, such as a mobile phone, a tablet, and a laptop, which each execute client and server software as will be described in greater detail below. In other implementations, multiple devices are operated by different users rather than a single user. For example, a user may take a photograph on a first electronic device, such as a phone, then place his phone next to a second electronic device, such as a laptop computing device, creating an instant short range wireless network. A federated view of content across all devices can be presented to the user from any one of the devices. Further, the user can access, view or modify the same content from any of his devices. The user may use a photo editing application on the second electronic device to edit the photo which remains in-place on the first electronic device. If the first electronic device is running low on storage, the user can simply drag and drop the file directly from the first electronic device to the second electronic device using an OS interface or the like, and delete the copy of the photo on the first electronic device, releasing the associated storage space thereon. In contrast, with implementations where multiple users are present, each of the phone, laptop, tablet, etc. may be operated by a different user rather than a single user as described above.
- In some implementations, a mirror reflection of the photo and video content file and/or directory structure is presented on the client. As the user interacts with the presented content by choosing folders and selecting photos, the sub-directory tree and file content may be downloaded in real-time on demand in the background. If the user changes folders, the file list and file metadata of the currently selected folder begins to download. If the user selects a file such as a video, the video file begins to stream from the server electronic device. At all times, the content resides on the server electronic device, while from the second electronic device it appears that a local copy exists on the second electronic device. Any edits or changes made to the content from the second electronic device may be propagated to and reflected on the photo album on the first electronic device. Similarly, photos can be deleted from the first electronic device by the second electronic device, such as by dragging the photos to the trash icon on the desktop of the second electronic device.
- In some implementations, the foregoing features are realized by running server software on the first electronic device and client software on the second electronic device, which are responsible for managing access privileges, managing connections and providing the interactive presentation of the contents. The first electronic device stores photo data in storage containers and each piece of content needs to be mapped to the filesystem interface of the second electronic device as an alias. File operations made on this alias copy of the content may be propagated to the actual file on the first electronic device. Alternatively, in other implementations, the content can be mapped to an interactive document or an API accessible by a third party application.
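The alias mapping with write-back propagation described above can be sketched as follows. The `Alias` class and the dict standing in for the server's storage are hypothetical; the essential behavior is that an edit made through the alias lands on the server copy rather than creating a local one.

```python
# Hypothetical storage on the first electronic device (the server).
server_storage = {"album/photo1.jpg": b"original"}

class Alias:
    """Client-side alias whose file operations propagate to the server."""

    def __init__(self, remote_path, storage):
        self.remote_path = remote_path
        self._storage = storage   # stands in for the wireless link

    def read(self):
        return self._storage[self.remote_path]

    def write(self, data):
        # The edit lands on the server copy; no local copy is kept.
        self._storage[self.remote_path] = data

alias = Alias("album/photo1.jpg", server_storage)
alias.write(b"edited on laptop")
```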
- The short range wireless network approach described in the implementations described herein has several advantages over the state of the art. Today, if a user wants to access content across devices, they have two options: (1) store the content on a cloud, which can be accessed by both devices by attaching to an external network connection, or (2) physically transfer a copy of the content from one device to another. The first approach suffers from the drawback of exposing the data to security breaches and potential loss of privacy. Also, the data is only accessible when access to the internet is available, unless another redundant copy is saved on each device. This approach results in wastage of storage space, which is often a limiting factor in mobile devices, and is inefficient when there are multiple servers and multiple clients. The short range wireless network approach described in the implementations herein avoids all these drawbacks. Data can be accessed in-place in real-time by a remote device, which is something no current approach does. In case the remote device requests a copy of the data, it is transferred directly from server device to client device using P2P wireless technologies. At no point during the creation, operation or reconfiguration of a short range wireless network is a connection to an external network required. All operations are performed using device-to-device wireless communication.
- In addition, the short range wireless network allows for discovery of content in proximity to a device, something that no existing approach provides today. That is, it enables a user to browse the contents published by all neighboring devices, using a variety of supported user interfaces including a traditional file manager interface, an interactive document similar to a webpage or the familiar newsfeed used in social networks, or through a custom application that uses an API surfaced by the short range wireless network implementation.
- The short range wireless network approach provides an intuitive, natural way for people to interact with their surroundings and exchange information with those around them, restoring a more local, social flavor to societal interactions. It provides the means to directly, efficiently and securely present remote content of nearby devices connected over a peer-to-peer wireless mesh network that preserves the contextual relationship, or even constructs a new one, between the content items and optionally aggregates them from multiple source devices. A lightweight representation of the remote content is provided and/or displayed at the client in order to visualize its context while minimizing actual file transfer until it is actually requested at the client. Such an innovative mechanism has the potential to create a local popup social network, for example, a newsfeed aggregated from the shared photos of friends sitting in proximity, showing the latest 25 photos taken by the group.
- According to various implementations, a computer system and methods for creating a proximity-based ad-hoc network of devices inter-communicating using wireless communication media create an impromptu digital library of data aggregated from one or more of the devices participating in the network, which can be accessed by any of the devices participating in the network. This cooperating network comprised of devices in vicinity of each other may be referred to as a short range wireless network in some examples herein.
- The devices offering up data for discovery in the short range wireless network are called “servers”. The devices accessing and interacting with the data in the short range wireless network are called “clients”. The same device can function as client, or server, or both. A short range wireless network could be comprised of any device that is capable of wireless communication. This includes laptops, phones, desktops, digital cameras, embedded devices, wearable devices such as smart watches and fitness trackers, IoT sensors, smart TVs and set-top boxes, car infotainment systems, in-flight entertainment systems and more. These devices could be carried by a person or animal, or be integrated into vehicles such as automobiles, planes and trains, or be a part of the environment such as traffic cameras, parking meters, home and industrial appliances etc.
- Each client device has its own view of the short range wireless network, based on which server devices are within range of this client. The short range wireless network forms automatically, based on the access privileges the client has been granted by various servers within its wireless range. Clients have the ability to request access authorization to any server(s) of their choosing, or to ask for higher levels of privilege for any data that a server within their short range wireless network is hosting. As the client's authorization level changes, its short range wireless network configuration and presented aggregated data changes correspondingly.
- The client has the ability to discover, view and interact with the aggregate data presented by all the servers in its short range wireless network, within its access rights and permissions, without actually moving the data to its local storage. The client can present this data library to the user through different user interfaces. These user interfaces include, but are not limited to, integration with their device's file manager such as the MacOS Finder, so that the contents of the short range wireless network appear as folders within the file manager, which the user can browse as a directory structure and interact with the presented data using familiar gestures such as double-click to open, drag to move, right click etc. Another user interface could be through integration with existing specialized applications for dealing with specific data types, such as a photo browser or editor application like Mac Photos, a contents browser like iTunes, Adobe Photoshop etc. A third user interface could be in the form of an interactive document, similar to a web page or the “news feed” or “timeline” in social networks. In this format, the client can interact with the data through actions like adding comments to a file, “liking” content etc. Whenever new content is made available, or existing shared content has been modified in some way, or any user has interacted with existing shared content such as commenting on it, the “news feed” is updated to reflect the new activity, and the clients could be optionally notified of such new activity.
- The client interacts with the digital library created within its short range wireless network without transferring the hosted content to its local device. The content remains on the server, with only the necessary information required to satisfy the client's current request being transferred directly over the wireless communication link established between the client and server devices. For example, if the client is merely browsing all the files in the library, only the metadata corresponding to the current directory structure being viewed by the client is transferred from the server to the client. If a client desires to open a video file using a video player, the video is streamed on demand to the client in small chunks according to which portion is being displayed in the video player. If the client navigates away from the video while it is playing mid-stream, the transfer of the rest of the video stream is paused until the data for the user's latest request has been transferred. This approach has several advantages. First, the server always maintains its “single source of truth”, namely, the most up-to-date copy of the file. Second, the server maintains control of its digital content, satisfying important privacy and security requirements for the owner of the data. The data can be optionally encrypted when transmitting it across the wireless link between the client and server, to increase security. Third, the client gets the look-and-feel and high performance of all this data being available locally, but the data is not consuming the storage space on the client side, because it is being streamed from the server on-demand. There are several other optimizations, described further in this disclosure, aimed at improving the real-time performance with which a client can interact with the digital library in its short range wireless network, such as prioritizing which data is retrieved at what point in time to provide the most optimal user experience.
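The chunked on-demand streaming described above can be sketched with a generator. The chunk size and names here are hypothetical illustrations: the client pulls the video in small chunks as playback advances, so navigating away simply means no further chunks are requested over the wireless link.

```python
CHUNK = 4  # hypothetical, tiny chunk size for illustration

def stream_chunks(remote_bytes, start=0):
    """Yield successive chunks of a server-side file beginning at `start`."""
    for off in range(start, len(remote_bytes), CHUNK):
        yield remote_bytes[off:off + CHUNK]

video = b"abcdefghij"           # stands in for a file on the server
player = stream_chunks(video)
first = next(player)             # only what the player displays is pulled
second = next(player)
# Navigating away from the video: the client simply stops calling
# next(player); no further bytes cross the wireless link.
```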
- A device acting as a server provides a mechanism to choose data from various data storage repositories it hosts or has access to, to make part of the digital library of any short range wireless network that it participates in. The server also provides mechanisms to convert the data it is making available to the short range wireless network into an intermediate data format that can be transferred over to clients and interpreted by the client. The client can then make this data available to its user through any of the different user interface mechanisms described in the previous paragraphs.
- A server has the ability to specify what access privileges to provide for a specific piece of data. Examples of such access privileges include, but are not limited to, read-only, read and write, make copies, execute, etc. The same piece of data can have different access privileges for different users. That is, for a given piece of data, the server has the ability to determine and set which user or set of users have access to which data, and what access privileges each of these users have for that piece of data. These access privileges can be set manually by the server or server user in advance or upon request from the client, or through the application of user-defined rules.
- Servers have the ability to enforce access control to the data they are serving up. Such access control may be enforced through explicit user input, or by automatically enforcing access control based on preset criteria. Examples of such preset criteria include making a certain set of data available only to clients who are in a whitelist maintained by the server device. This whitelist could be created manually, or using certain user-defined rules such as including all mobile devices whose corresponding phone number or email address is in the address book of the server device, or are in social graph of the user or in the company directory. The server can also choose to deny access manually or through preset criteria such as denying access to any device in a blacklist maintained by the server. This blacklist could also be created manually or through user-defined rules. The whitelists and blacklists can also be set based on criteria such as location and duration. For example, a server may grant access to a set of data to all users within wireless range of its device from 2 p.m. to 3 p.m. on Jan. 1, 2017.
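The rule-based access control above, including the time-window example, can be sketched as a single predicate. The function name and rule shape below are hypothetical: a client is granted access if it is whitelisted, not blacklisted, and the request falls inside an optional time window.

```python
def is_access_granted(client_id, whitelist, blacklist, now_hour,
                      window=None):
    """Hypothetical rule check; `window` is an optional (start, end) pair
    of hours mirroring the 2 p.m.-3 p.m. example above."""
    if client_id in blacklist:       # blacklist always wins
        return False
    if client_id not in whitelist:   # must be explicitly whitelisted
        return False
    if window is not None:
        start, end = window
        if not (start <= now_hour < end):
            return False
    return True

wl = {"alice-phone", "bob-laptop"}   # e.g. built from the address book
bl = {"bob-laptop"}                  # manually or rule-based denials
```

A fuller implementation would derive the whitelist from the address book, social graph, or company directory rules the paragraph describes.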
- The following are some additional scenarios in which various implementations are advantageous over conventional file transfer technologies.
- Imagine a family reunion in the great outdoors. The family flies in from various locations across the planet to unite together for a special occasion. Over the next few days, they indulge in camping, go on hikes, have special moments of unity and togetherness, adventure and daring. They capture these in photographs and videos they take of each other and their activities, to preserve these memories for a lifetime. When the vacation ends, each member of the family has photos on his/her device. Everyone in the family has a different set of photos they enjoy. Parents want all photos of their children, captured by anyone in the group. Children want photos of their favorite cousins and activities, but really aren't interested in the photos of the adults. On the evening before departure, the family gathers together and forms an instant short range wireless network with their devices, even though the resort is in a remote location with no access to the internet or cellular coverage. Each person browses all the photos in the short range wireless network, likes and comments on others' photos, and chooses the ones he wants to keep, downloading them to his own device to create local copies. When the family departs to their different lives the next day, they each carry with them the memories they cherish the most, to share with their friends when they get back.
- Some examples herein may create a high speed content sharing network between nearby devices. It is not practical or desirable to create a local copy of the remote content on every nearby device because doing that would require copious amounts of data transfer, which would exceed the available time, network bandwidth and storage capacity of the client device. Nevertheless, implementations herein create an illusion that the remote content of the nearby device is actually available to the accessing client device for viewing and interacting with it. To achieve this outcome, some implementations may employ one or more of the techniques outlined below.
- The application running on the client and server discovers nearby devices and establishes the fastest available direct connection between them. When displaying the remote content of nearby devices, the client initially fetches only the content metadata and content list from the server. Then it fetches the content file icons. By doing so, the client is able to present a lightweight representation of the content, i.e., the list of available content, to its user without needing to fetch the actual content files. When the user selects a content item, the client application requests it from the server, on demand, in real-time. This way, the network bandwidth can be optimized for the content last requested by the user. If the user switches to a different view, fetching the content list of that view is prioritized. If a file item has been previously downloaded and is available in the cache, the cached copy is used as long as the content file has not been modified since it was last cached. By doing so, the application is able to create the illusion of a local copy of the content and deliver a near real-time user experience.
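The fetch-and-cache behavior just described can be sketched as follows. The cache keying and function names are hypothetical illustrations: content is downloaded on demand, and a cached copy is reused only if the server-side file has not been modified since it was cached.

```python
cache = {}   # hypothetical cache: item_id -> (modified_time, bytes)

def get_content(item_id, server_mtime, download_fn):
    """Return (content, source), using the cached copy when still fresh."""
    if item_id in cache:
        cached_mtime, data = cache[item_id]
        if cached_mtime == server_mtime:   # unmodified since last cached
            return data, "cache"
    data = download_fn(item_id)            # on-demand transfer
    cache[item_id] = (server_mtime, data)
    return data, "network"

downloads = []   # record of transfers over the (simulated) wireless link
def download(item_id):
    downloads.append(item_id)
    return b"bytes:" + item_id.encode()

d1, src1 = get_content("IMG_7", server_mtime=100, download_fn=download)
d2, src2 = get_content("IMG_7", server_mtime=100, download_fn=download)
d3, src3 = get_content("IMG_7", server_mtime=101, download_fn=download)
```

The second request is served from cache, while the modification at the server (the changed `server_mtime`) forces a fresh transfer, matching the behavior described above.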
- Imagine walking into a museum. In each exhibit room, the museum hosts a short range wireless network server that makes detailed content available about the exhibits in that room. You could watch a video of the artist describing the significance of the piece, or leave a comment about the exhibits in the room in the interactive document hosted by the museum's short range wireless network server. As you walk into the next exhibit room on a different floor, the first short range wireless network server drops out of range, and the new one for this exhibit room comes into range, presenting a different set of content corresponding to the exhibits in this room. As you walk out of the museum, you have had a much richer experience, but you carry no printed material to discard, your phone has not used up any extra storage, and the museum did not have to set up a website or distribute an app for you to download. The same scenario applies when you visit the zoo or attend a Broadway show. The short range wireless network serves as a digital content distribution platform by enabling all viewers to browse and interact with content available at that event location. The digital content can be distributed easily and quickly without too many steps.
- Imagine you are in an industry conference, having paid a hefty attendee fee for the exclusive privilege of being able to attend this conference in-person, and have access to the thought leaders in your industry. You attend a speaker session on a topic of interest to you by the leading expert, from 2-3 p.m. The speaker turns his mobile phone into a short range wireless network server, hosting the presentation content and additional reading materials, making them available to anyone in the room from 2-3 p.m. This allows you to browse through the presentation at your own pace, download a copy to your device, and make notes on it during the session. This is a privilege and convenience not available to people who could not attend the session in person.
- Imagine you are a photographer on a field trip taking hundreds of photos and videos. While on the field trip, you can use your tablet, phone or computer to browse, edit or delete photos on your digital camera without needing to transfer them. Once you return to your home or office, you place the digital camera on your table next to your computer with the large display and storage space. The short range wireless network technology described above enables the photographer to browse, edit and save his camera photos and videos from his computer.
- Imagine you have some photos on your phone or computer that you wish to carry on your tablet computing device. You can simply mount both the phone and the tablet on your computer, and drag and drop to copy the desired photos from the phone or computer to your tablet.
- Imagine you are in a transatlantic flight and you have time to kill, perhaps even make a few new friends or share some stories. Using a short range wireless network, anyone can share his photos from his smartphone to nearby passengers through a “nearby feed” of digital photos/videos and allow others to participate in liking, tagging, commenting or copying them.
- Imagine you are next to or walk into a shop and you can browse the product list, specials, detailed product information and coupons or commercial offerings, without requiring the shop staff to upload anything to the internet. Just take out your phone and check the nearby feed for contents and commercial offerings that the shop may be sharing with potential customers nearby.
- Imagine you are in a classroom in a remote part of India and the students are able to collaboratively edit a document together using their devices without the need to connect to a Wi-Fi access point.
- Imagine you can video call your friend/colleague sitting in another cabin of the passenger airplane without needing to connect to the airplane Wi-Fi access point or internet.
- Imagine you are a mother driving your young twins. You simply place your smartphone inside the car and the twins riding in the back seat of the car are able to browse, select and watch two different animation movies from your smartphone on the displays mounted on the backside of the front seats, without you needing to choose and stream a specific video.
- Imagine you are sitting in a meeting with colleagues or clients and each of them can browse and markup the presentation stored in your smartphone or computer using their devices without you needing to send over the presentation document to them. They can also download the presentation to their device to review and peruse after the meeting.
- These are merely a few examples associated with utilizing the implementations herein in real-life scenarios. However, the implementations disclosed herein are in no way intended to be limited to these scenarios.
FIG. 1A shows an implementation of retrieval and presentation of remote content among computing devices in proximity using peer-to-peer wireless networking. Eachcomputer mobile device device 101 is bi-directionally connected todevices wireless connections device 102 is bi-directionally connected todevices wireless connections devices devices - In
FIG. 1A, a client application executed by mobile device 102 can individually or collectively interact with the remote content 101 a stored on the server device 101, the remote content 103 a stored on the server device 103, and the remote content 104 a stored on the server device 104. At the same time, a client application executed by the computer 103, which also executes application 105 b, individually or collectively interacts with the remote content from the other server devices. The application 105 b may be the client application itself or a 3rd party application connected to the client application. Mobile device 104, which also executes a client application, individually or collectively interacts with the remote content from the other server devices. The computer 101, also executing application 105 a and acting as client and part of the wireless mesh network, individually or collectively interacts with the remote content from the other server devices. -
FIG. 1B shows an implementation of presenting aggregated remote contents on a mobile device or computer from various types of computing devices, such as a wearable device like a smart watch, a digital camera, and an embedded computing device. In FIG. 1B, a wearable device 132, a digital camera 133, and an embedded computing device 134, each acting as servers, are connected with a mobile device 130 and a computer 131 as clients. The computer 131 is connected to the server devices via direct wireless connections, and the client device 130 is likewise able to interact with the respective contents, such as files, of each of the server devices. -
FIG. 1C shows an implementation of presenting aggregated remote content on a client device 191, which may be part of the wireless mesh network of either of FIG. 1A and FIG. 1B, interacting with the remote contents of multiple nearby server devices, such as wearable device 132, digital camera 133, embedded computing device 134, the computer 101, and the mobile device 102. Client device 191 is connected wirelessly to each of the devices via direct wireless connections and may be, for example, an infotainment console of a vehicle 190 or an in-flight entertainment console of an airplane 197. -
FIG. 1D shows an implementation of presenting aggregated remote contents on a screen of a client display device 121 a displaying the remote contents of multiple devices. The devices are connected via direct wireless connections to a set-top-box unit 121 b, which presents the remote content by a connection 123 a on the display device 121 a, such as a television, monitor, projector, or any device capable of displaying digital content that also has its own controller such as a remote control, front panel buttons, or the like. The connection 123 a may be a wired or wireless connection that connects the set-top-box unit 121 b to the display device 121 a. A user 124 can interact with the remote content from the devices via its remote representation through the set-top-box unit 121 b. -
FIG. 1E shows various implementations of presenting aggregated remote content. In a first presentation 120 a, the remote contents 101 a, 103 a, and 104 a are displayed as icons 101 e, 103 e, and 104 e along with content metadata or the like. In another presentation 120 b, the remote contents are displayed in a grid layout with the content items of each server presented as separate contents. In another presentation 120 d, the contents received by the client device 102 may be combined and displayed inside a rendered page based on a predefined design template, such as a web page, a page formatted by a markup language, slide, document, multimedia document, applet, album, folder, newsfeed, timeline, map, mobile or desktop application layout, or any other kind of custom multimedia presentation layout or user interface, or any combination thereof. Further, the combined content of 101 a, 103 a, and 104 a may be presented in a multimedia form such as a collage presentation 120 c or a video presentation 120 e with or without metadata, subtitles, or audio. In a case of the content 101 a, 103 a, and 104 a being aggregated audio content, the presentation may be an audio output or playlist 120 f. The aggregated content may be grouped by time, places, people, activities, or its subject, and is also searchable based on keywords, tags, time, place, people, activities, other content, or metadata as criteria for grouping content. -
FIG. 1F shows details of the content aggregation implementations in cases of (i) a mobile device as a client with another mobile device as a server, (ii) a mobile device as a client with a computer as a server, (iii) a computer as a client and a mobile device as a server, and (iv) a computer as a client with another computer as a server. As shown in FIG. 1F, client devices 160 and 170 access content stored on server devices 162 and 172, where each server device stores its content in a storage container accessible via a storage container interface 210. A storage container can be a database and the like which is accessible via an API as the storage container interface 210, or it can be a photo library of a mobile device which is accessible via a framework API as the storage container interface, or even a file system volume which is accessible via a file system API as the storage container interface. For example, one way to access photo content of an electronic device implemented as a server may use an API, such as a photos framework API. - Returning to
FIG. 1F, the client device 160 is connected to the server device 162 via a wireless connection 164 and also to the server device 172 via a wireless connection 177. The contents of the server device 162 may be aggregated together with the contents of the server 172 and presented on the client application running on device 160 as a presentation of aggregated contents. Similarly, the contents of the server device 162 may be aggregated together with the contents of the server 172 and presented differently in an application 171 running on the client device 170 as a presentation of aggregated contents. The application 171 may be the client application 203, a 3rd party application 206, or a custom application 207 as shown in FIG. 2A and explained later. -
FIG. 1G shows a client device 180 accessing contents stored remotely by a content storage container on server devices 162 and 172 via a 3rd party application 206 shown in FIG. 2A. In implementations where the presentation of remote contents is a file system volume, the client application, working together with the server application, will map each separate piece of content in the server devices to a file representation on the client device 180. At the client device 180, a photo 165 a is mapped to a photo file 165 d, a video 166 a is mapped to a video file 166 d, audio content 167 a is mapped to an audio file 167 d, and files 173 a and 174 a are mapped to a file 173 d and a file 174 d according to file types thereof. The client application together with the server application will also map the file system operations applicable to each of the separate mapped contents. For example, a file delete operation on the photo file 165 d by the client application 181 is performed on the server device 162 as a remove operation on the photo 165 a. One example of a 3rd party application 181 is a built-in file manager application provided by the OS of the client device 180. Examples of built-in file manager applications are Finder on the OS X operating system, Windows Explorer in the Microsoft Windows operating system, and the like. Further, in the implementation shown in FIG. 1G, a user 182 of the client device 180, a user 163 of server device 162, and a user 178 of server device 172 may or may not be the same person. In further implementations, the system may be applied in a fully automated manner in which the client device 180 and the server devices 162 and/or 172 operate without user input or involvement. -
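The file-to-content operation mapping described above can be sketched as follows. This is an illustrative model only; the `ContentMapper` class, the operation names, and the identifiers are assumptions, not part of the disclosed implementation:

```python
# Illustrative sketch of the FIG. 1G mapping; names are hypothetical.

# file system operation on the client -> content operation on the server
FS_TO_CONTENT_OP = {
    "delete": "remove",
    "read": "fetch",
    "write": "update",
}

class ContentMapper:
    """Tracks which local file representation maps to which remote content item."""

    def __init__(self):
        self.path_to_content = {}  # local path -> (server id, content id)

    def map_content(self, server_id, content_id, local_path):
        self.path_to_content[local_path] = (server_id, content_id)

    def translate(self, fs_op, local_path):
        """Translate a client file system operation into a server request."""
        server_id, content_id = self.path_to_content[local_path]
        return {"server": server_id,
                "operation": FS_TO_CONTENT_OP[fs_op],
                "content": content_id}

mapper = ContentMapper()
# e.g. photo 165 a on server device 162 is presented as a photo file on the client
mapper.map_content("device-162", "photo-165a", "/remote/photo_165d.jpg")
request = mapper.translate("delete", "/remote/photo_165d.jpg")
# request identifies the server, the "remove" operation, and the content item
```

A lookup table of this kind keeps the client-side file semantics decoupled from the server-side content semantics, which is what allows a built-in file manager to drive content operations it knows nothing about.
-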
FIG. 1H shows yet another implementation where a user 154 copies content 156 a from his computer 150 to a mobile device 151 in step 157, creating a copy 156 b. The copy 156 b is then transferred in step 158 to a mobile device 152 of another user 155, creating a second copy 156 c of the content. The user 155 can remotely access the second copy of the content 156 c via its remote representation 156 d on a computer 153 in step 159. The copying step 157 is executed by having the server application running on the mobile device 151 and the client application running on the computer 150, where the interaction with the client application may be directly within the client application, via the file system volume representation of the client application, or via an API exposed by the client application. The remote access in step 159 is performed by having the server application executed on the mobile device 152 and the client application executed on the computer 153, where the interaction with the client application may be directly within the client application, via the file system volume representation of the client application, or via an API exposed by the client application. -
FIG. 2A shows a high level overview system architecture of an exemplary implementation of a computer system for browsing and interacting with remote content. The system consists of a client application 203 running on a client device 202 and a server application 205 running on a server device 201. While two devices 201 and 202 are shown in FIG. 2A, it should be understood that a plurality of devices may be connected as a short range wireless network with one or more devices each executing a client application 203 and a server application 205. - The
server application 205 is responsible for extracting content 216 stored on the content storage container 211 via its storage interface 210. The server application is also responsible for converting the content 216 in the content storage container 211 into an intermediate data structure 215 to be transmitted to the client application 203 in the form of network data packets over a peer-to-peer wireless link 204. The intermediate data structure 215 is converted back to an appropriate format at the client device 202 by the client application 203 as an intermediate data structure 212. Content 216 may be in the form of, but not limited to, a list, metadata, or raw binary data resembling a specific content type, for example raw binary data of a JPG image or the like. Server application 205 interacts with the client application 203 using a communication protocol over the peer-to-peer wireless link 204. The server application 205 is also responsible for performing operations on the content 216 based on the instructions received from the client application 203 via the peer-to-peer wireless link 204. The server application 205 may or may not have a user interface depending on the implementation. - Further, in
FIG. 2A, the client application 203 is responsible for converting the intermediate data structure 212 into multiple representations to be presented on the client device 202. In one implementation, client application 203 may convert the intermediate data structure 212 into an appropriate presentation, for example an interactive presentation 218 generated by the client application 203 for display on a user interface of client application 203 to user 220. In some implementations, the client application 203 may convert the intermediate data structure 212 into a virtual file system adapter 208 to be accessible by a 3rd party application 206 as a file system structure 217 via the virtual file system adapter 208. In some implementations, the client application 203 converts the intermediate data structure 212 into a set of data structures accessible by an API 209 so that a custom application 207 can present it, for example, as an interactive presentation 219 to a user 220. The client application 203 is also responsible for receiving and processing interaction requests from either its own user interface, the virtual file system adapter 208, or API 209. The requests will then be converted into a communication protocol message to be delivered to the server application 205 over the peer-to-peer wireless link 204. -
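As a minimal sketch of this server-side encoding and client-side decoding of the intermediate data structure, assuming JSON (one of the encodings contemplated in this description) as the wire format; the message shape and field names are assumptions:

```python
import json

# Hedged sketch: the wire format is assumed to be JSON framed as a
# protocol message; the "content_list" message type is hypothetical.

def encode_content_list(items):
    """Server side: wrap extracted content metadata into an intermediate
    data structure and serialize it for the peer-to-peer link."""
    return json.dumps({"type": "content_list", "items": items}).encode("utf-8")

def decode_content_list(packet):
    """Client side: turn received bytes back into the intermediate data
    structure used to build a presentation."""
    return json.loads(packet.decode("utf-8"))

packet = encode_content_list([
    {"name": "IMG_001.jpg", "kind": "photo", "size": 123456},
    {"name": "clip.mov", "kind": "video", "size": 9876543},
])
restored = decode_content_list(packet)  # same structure the server encoded
```
-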
FIG. 2B describes an exemplary file system architecture of a UNIX-style OS. A file system volume that needs to be mounted on the operating system is connected to a virtual file system (VFS) layer 230 in the operating system kernel. Different types of file system formats may be connected at the same time to the virtual file system layer 230, for example HFS, EXT4, FUSE, native or custom kernel extensions, and the like. Applications that need to access the file system volume may use the standard file system APIs available in the standard C library (libc) 231. One implementation of the file system API 222 is a standard C library 231 in the case that the system is implemented in a UNIX-style OS environment. In order to access the client application 203 via a file system API 222, the virtual file system adapter 208 has to be connected to the virtual file system layer 230. In some implementations, the virtual file system adapter 208 may be directly connected to the virtual file system layer 230 at the kernel level via a kernel extension or kernel module approach as in option 234B or option 234C. In some implementations, like option 234A, the virtual file system adapter 208 may be connected indirectly to the virtual file system layer 230 via a user space file system such as FUSE, which bridges the connection using the kernel component FUSE 232A and the user space component libFUSE 232B. Depending on the implementation, the client application 203 may reside in the kernel as in option 234B, in the user space as in option 234A, or be split into two parts as in option 234C where the client application resides partly in the kernel 203A and partly in the user space 203B. - The
intermediate data structures 212 and 215 are used by the server application 205 and client application 203 to exchange the data related to the content being accessed. The type of content data may be one of, but not limited to, a content list, content metadata, or content binary data. In the case where the content data being exchanged is a content list, the intermediate data structure may be structured as arrays, dictionaries, and/or trees and encoded in a particular text format such as JSON, XML, HTML, RSYNC, a binary format following ASN.1 notation, and the like. FIG. 2C shows an implementation of a content list exchange between a photo library 253 as a storage container 211 of the server device 201 with the virtual file system adapter 208 to present a content list in a file system structure tree 250 at the client device 202. The client application 203 requests the content of the photo library 253 from the server application 205. A content type of photo library 253 may be an object of type PHAssetCollection 251A, which is a photo album, or an object of type PHAsset 252A, which is an image or video content. Some example properties of the PHAssetCollection class are localizedTitle, startDate, and endDate. Some example properties of the PHAsset class are filename, creationDate, modificationDate, and size. Requesting the content of photo library 253 will make the server application 205 extract the information from the properties of PHAssetCollection 251A and convert it into an intermediate data structure of type JSON 251C in step 251B before transmitting it to the client application 203. When the client application 203 receives the JSON 251C, in step 251D, the client application 203 will convert the intermediate data structure into file system node attributes of type folder 251E. When the client application 203 requests the content of the folder 251, it will send a request to the server application 205 to extract the content of PHAssetCollection 251A, which in this example is PHAsset 252A. 
The server application 205 will extract the information from the properties of PHAsset 252A and convert it into an intermediate data structure of type JSON 252C in step 252B before transmitting it to the client application 203. When the client application 203 receives the JSON 252C, in step 252D, the client application 203 will convert the intermediate data structure into file system node attributes of type file 252E and present it as file 252 under the folder 251. -
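The two conversion steps above (251B on the server, 251D on the client) can be modeled in a short sketch. PHAssetCollection is a real Photos framework class, but the JSON field names and the folder-node attributes below are assumptions for illustration:

```python
import json

# Illustrative model of steps 251B (server side) and 251D (client side).

def collection_to_json(collection):
    """Step 251B: serialize PHAssetCollection-like properties to JSON."""
    return json.dumps({"localizedTitle": collection["localizedTitle"],
                       "startDate": collection["startDate"],
                       "endDate": collection["endDate"]})

def json_to_folder_node(payload):
    """Step 251D: convert the received JSON into folder node attributes."""
    props = json.loads(payload)
    return {"node_type": "folder",
            "name": props["localizedTitle"],
            "created": props["startDate"],
            "modified": props["endDate"]}

album = {"localizedTitle": "Vacation",
         "startDate": "2023-07-01", "endDate": "2023-07-14"}
node = json_to_folder_node(collection_to_json(album))
# node now carries the attributes the client needs to present a folder
```

The same pattern applies to PHAsset properties (filename, creationDate, modificationDate, size) being converted into file node attributes in steps 252B and 252D.
-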
FIG. 3A shows the software components of the client application 203. The network manager 304 handles the network communication over the wireless interface 213 of the client device 202. It is responsible for discovering server application(s) 205 running on nearby devices by using a service discovery module 308, establishing the peer-to-peer wireless connection to the nearby device, and handling the communication with the connected server application 205 using the protocol handler 309. The peer-to-peer wireless connection may use one of, but not limited to, Wi-Fi Direct, Bluetooth, or the like, whichever is available on both client and server devices. In discovering nearby server application(s) 205, the service discovery module 308 may use a unique identifier to identify the server device(s) 201. The unique identifier of the server device 201 may be in the form of, but not limited to, a Device Unique ID, Device Name, User ID/Login, Contact Info, or any other unique identifier of a user or machine as partially described in FIG. 2D. In some implementations, the service discovery module 308 may also function to advertise the client application 203 to nearby server application(s) 205. -
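A service advertisement carrying such a unique identifier might look like the following sketch; the record fields are assumptions modeled on the identifiers listed above (Device Unique ID, Device Name, User ID/Login), and the service name is hypothetical:

```python
import uuid

# Sketch of a service advertisement record exchanged during discovery.

def build_advertisement(device_name, user_id, service="p2p-content"):
    """Record a server advertises and a client uses to identify it."""
    return {
        "service": service,
        "device_uid": str(uuid.uuid4()),  # Device Unique ID
        "device_name": device_name,       # Device Name
        "user_id": user_id,               # User ID/Login
    }

ad = build_advertisement("Server Phone", "user@example.com")
# a client's service discovery module would match on these fields
```
-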
Interaction controller 303 is the main component of the client application 203 that controls the presentation of the remote content, interacts with the user interface 301 or API hooks 302, and handles the business logic for exchanging contents and operation instructions with the server application 205. Interaction controller 303 together with transfer controller 307 are responsible for handling the content transfer mechanism between client application 203 and server application 205. Content aggregation controller 306 is responsible for reconstructing or creating groups of the remote contents delivered from the server application 205. Content caching controller 305 is responsible for caching remote contents received from the server application 205 for the purpose of quick retrieval and increasing the responsiveness of the client application 203. The type of operations that can be performed by client application 203 on the remote content is defined by the presentation form of the content on the client application 203. For example, if the content is presented as an interactive document of type newsfeed, a user of client application 203 may tag the content, add a comment on the content, or mark the content as favorite or liked. In another example, if the content is presented as a file system volume, the interaction will typically be a file system operation such as opening and reading the content, edit, delete, copy, move, etc. - In some implementations, a user interface 301 of
client application 203 is provided for presenting the remote contents directly to the user as well as accepting user input. In some implementations, API hooks 302 of client application 203 provide access to other applications in several different ways. In some implementations, API hooks 302 may be connected to public API 209 so any custom application 207 may use the service of client application 203 for accessing the remote content over the peer-to-peer wireless connection. In some implementations, API hooks 302 may be connected to a virtual file system adapter 208 so any 3rd party application 206 may access the remote contents transparently using the file system API 222 of the operating system of the client device. -
FIG. 3B shows the software components of the server application 205. A network manager 310 handles the network communications over the wireless interface 214 of the server device 201. It is responsible for advertising server application 205 using service discovery 311 to be discoverable by nearby client device(s), accepting peer-to-peer wireless connections established by client device(s), and handling the communication with the connected client application(s) 203 using the protocol handler 312. The peer-to-peer wireless connection may use one of, but not limited to, Wi-Fi Direct, Bluetooth, or the like, whichever is available on both client and server devices. In advertising to nearby client application(s) 203, service discovery 311 may use a unique identifier to identify the server device 201. The unique identifier of the server device 201 may be in the form of, but not limited to, a UUID (Universally Unique Identifier), user login, email address, Device Unique ID, Device Name, User ID/Login, Contact Info, or any other unique identifier of a user or machine as in FIG. 2D. In some implementations, service discovery 311 may also function to discover nearby client application(s) 203. - As shown in
FIG. 3B, the server application 205 includes a content encoder-decoder 315 which is responsible for extracting contents from different types of storage containers 211, such as an app container 211A, a database 211B, and a file system volume 211C, via their individual storage interfaces on the server device 201. The content encoder-decoder 315 is also responsible for mapping the contents, their structure, and context into an intermediate data structure before transmitting to the client application 203 using the network manager 310 over the peer-to-peer wireless network connection on wireless interface 214. Moreover, the content encoder-decoder 315 is also responsible for decoding the protocol message request coming from the network manager 310 into a content operation. - In some implementations, the
app container 211A is a user's photos library in a first electronic device. The user's photos library may be a private container managed by the photos app and accessible directly via a photos framework API. As one example, a photos framework may allow any app on the first electronic device to retrieve photos or videos for display and playback, edit their contents, or work with its albums or collections. More generally, an app container may be a storage container which has a limited method and scope of access, may include access control and security mechanisms, and for which it is not possible to access the raw content directly without a designated interface, such as the photos framework APIs in the case of a user's photos library. In the case of a user's photos library, the photos framework APIs may provide an app storage interface 210A. In some implementations, the database container 211B is an SQLite database. The method to access the database content is using database interface 210B, which is the SQLite library in the case of the SQLite database storage format. In some implementations, the file system volume 211C is an HFS file system used by OS X or an EXT4 file system typically used on Linux, accessible via a standard file system API. - The
server application 205 also contains an access control layer 314 that adds security and privacy handling of the content to be accessed by client application 203. The privacy and security aspect of the access control layer 314 may include setting the permissions of the content accessible by one or more client application(s) 203. For example, content can be marked as hidden, read-only, modifiable, etc. This will limit the interaction types and level thereof on the content by the client application 203. Another privacy and security aspect of access control layer 314 is to control authorization of connection requests from client applications 203 running on client devices 202. For example, the server application 205 may prompt a user via the user interface 313 to authorize a connection request from a given client application 203. In another example, the server device 201 may prompt a user via the user interface 313 to authorize a request from a given client application 203 to access a particular content, a content group, or a content type stored in one or more storage containers 211 of the server device 201. Authorization of a connection request or access request on the server application 205 may be performed automatically based on certain criteria without involvement of the user of server application 205. For example, the server application 205 may automatically authorize a connection from a given client application 203 based on a current or last system state, as in the case of an auto-reconnection after a sudden network breakdown. In another example, the server application 205 may incorporate additional authorization policies to screen requests from the client application 203. -
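The permission marking and connection authorization behaviors of the access control layer 314 can be sketched as follows; the permission names follow the description above, while the class structure and whitelist policy are illustrative assumptions:

```python
# Sketch of an access control layer: per-content permissions plus an
# authorization policy for connection requests. Names are hypothetical.

PERMISSIONS = {"hidden", "read-only", "modifiable"}

class AccessControl:
    def __init__(self):
        self.content_permission = {}  # content id -> permission
        self.whitelist = set()        # client ids authorized automatically

    def set_permission(self, content_id, permission):
        if permission not in PERMISSIONS:
            raise ValueError(permission)
        self.content_permission[content_id] = permission

    def can_modify(self, content_id):
        # unmarked content is treated as hidden here (an assumption)
        return self.content_permission.get(content_id, "hidden") == "modifiable"

    def authorize_connection(self, client_id, last_connected=None):
        # auto-authorize whitelisted clients or an auto-reconnection after
        # a sudden network breakdown; otherwise prompt the user
        if client_id in self.whitelist or client_id == last_connected:
            return "authorized"
        return "prompt-user"

acl = AccessControl()
acl.set_permission("photo-1", "read-only")
```
-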
FIGS. 3C, 3D and 3E show different implementations for aggregating and presenting the remote content from multiple server devices. FIG. 3C shows aggregated remote contents presented as an interactive document on the client device 202 constructed from multiple different content types and content structures stored on multiple different server devices 201A, 201B, and 201C. The content encoder-decoder 315A of server device 201A performs mapping of content and operations from the app container 211A to be accessed by the interaction controller 303. The content encoder-decoder 315B of server device 201B performs mapping of content and operations from the database 211B to be accessed by the interaction controller 303. The content encoder-decoder 315C of server device 201C performs mapping of content and operations from the file system volume 211C to be accessed by the interaction controller 303. The interaction controller 303 will then aggregate the remote contents from the multiple content encoder-decoders, which are presented as the interactive document 331. User interaction with the interactive document 331 is handled and processed by the interaction controller 303, which when necessary sends the interaction request to the respective content encoder-decoder of the server device. For example, when the user of client device 202 performs a delete operation on a remote content that belongs to the app container 211A, such as a video, the interaction controller 303 will send a delete request to content encoder-decoder 315A to delete the respective video in the app container 211A. The content encoder-decoder 315A may reply with an acknowledgment of the operation back to the interaction controller 303 so it updates the presentation on the interactive document 331 accordingly. -
FIG. 3D is a modification of the implementation shown in FIG. 3C and shows the virtual file system adapter 208 in the client application which presents the remote contents to the user of the client device 202. The aggregated remote contents from multiple server devices 201A, 201B, and 201C are presented as a file system volume by connecting the virtual file system adapter 208 to the virtual file system layer of the operating system. This implementation allows a 3rd party application 206 to access the remote content using file system APIs. The file system operation is mapped accordingly to an equivalent operation of the content. Each remote content is presented as a file of the file system volume. The same remote content may be presented at more than one location in the file system depending on the group created when aggregating the remote contents. For example, a photo stored inside app container 211A under an album titled “Vacation” may be presented in the file system volume inside a folder “Vacation” and may also be presented inside a different folder titled “Latest Photos” when such a photo appears in both albums on the server. In the case of a “Latest Photos” folder, the client application 203 using its content aggregation controller 306 will construct a new group of the remote content based on the metadata thereof, such as the date when the photo(s) is taken. Another example of grouping that may be constructed is to group multiple photos accessed from multiple server devices based on the location where the photos were taken. -
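The “Vacation”/“Latest Photos” grouping behavior of the content aggregation controller can be sketched as follows; the date cutoff and field names are assumptions for illustration:

```python
from datetime import date

# Sketch of aggregation grouping: the same photo may appear under its album
# folder and under a synthesized "Latest Photos" folder built from its
# date metadata.

def build_folders(photos, latest_after):
    folders = {}
    for photo in photos:
        # every photo appears under its album folder
        folders.setdefault(photo["album"], []).append(photo["name"])
        # recent photos additionally appear under "Latest Photos"
        if photo["taken"] >= latest_after:
            folders.setdefault("Latest Photos", []).append(photo["name"])
    return folders

photos = [
    {"name": "beach.jpg", "album": "Vacation", "taken": date(2023, 7, 10)},
    {"name": "old.jpg", "album": "Archive", "taken": date(2019, 1, 1)},
]
folders = build_folders(photos, latest_after=date(2023, 1, 1))
# "beach.jpg" is now reachable at two locations in the presented volume
```

The same function shape applies to grouping by location instead of date: only the metadata key and the synthesized folder name change.
-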
FIG. 3E is a modification of the implementation shown in FIG. 3C and shows the presentation of remote content performed by a custom application 207 through an API 209. A custom application 207 interacts with the interaction controller 303 using the API 209. The API 209 is made accessible to custom application 207 by means of, but not limited to, a shared library, messaging over a socket, a system call API, a web API, and the like. The custom application 207 presents the aggregated remote contents as an interactive document 336 or any other presentation format described herein depending on the particular custom application requirements. - One example of grouping the content from plural devices is explained with respect to
FIG. 4A. FIG. 4A shows a diagram of a group of four users climbing Mount Kilimanjaro via the Machame route 400. The Machame route 400 starts at a first location named Machame Gate 401 and ends at a peak location, which is Uhuru Peak 406, with multiple rest locations in between, which are Machame Camp 402, Shira Camp 403, Barranco Camp 404, and Barafu Camp 405. - Each user takes photos along the way to the
Uhuru Peak 406. At resting location Barranco Camp 404, the user 424 would like to view and download the photos taken so far by each of the other mobile devices. User 424, using his mobile device 420 and executing the client application 203, will request a peer-to-peer wireless connection and authorization to access photos on each of the mobile devices 421, 422, and 427 of the other users, each executing the server application 205. The server device users may authorize the request, after which the user of mobile device 420 browses and/or downloads the authorized photos presented as an interactive document such as photo albums, with the albums created as groups of different photos by contexts such as location and/or date. As shown in FIG. 4B, user 424 is presented an interactive document and accesses the album “Barranco Camp” 404 a which contains groupings of photo sets 421 d 1, 422 d 1, and 423 d 1, album “Shira Camp” 403 a which contains groupings of photo sets 421 c 1, 422 c 1, and 423 c 1, and album “Machame Camp” 402 a which contains groupings of photo sets 421 b 1, 422 b 1, and 423 b 1. Additionally, user 425 using his mobile device 421 may set automatic authorizations for device 420, for example via a whitelist as partially described in FIG. 2D, which may be pre-assigned manually by the server device user, or automatically based on a social network relationship between server device users and the client device users. The authorization given to a client device user via a social graph may be assigned permanently by adding the respective user to the whitelist of the server device, or temporarily only during some period of time or at particular location(s). - Continuing from the exemplary implementation shown in
FIG. 4A, after the group of users completes the climb and returns home, a user 429 using client application 203 on his computer 428 requests access to the photos taken during the tour by the users, as shown in FIG. 4C. User 429 runs client application 203 on the computer 428 and requests a peer-to-peer wireless connection and access authorization to the nearby devices 420, 421, 422, and 423. Once the client application 203 is authorized by the users of the respective devices, the client application 203 accesses the photos. FIG. 4C shows an implementation in which the client application 203 uses a virtual file system adapter 208 to present the remote contents so a 3rd party application 206 may present the photos in the form of a folder tree. In some implementations, the photos may be grouped in directories named after the locations where the photos were taken as shown in FIG. 4C. Photos 420 a, 421 a, 422 a, and 423 a are shown inside folder “Machame Gate” 401 b as files 420 a 2, 421 a 2, 422 a 2, and 423 a 2. Photos 420 b, 421 b, 422 b, and 423 b are shown inside folder “Machame Camp” 402 b as files 420 b 2, 421 b 2, 422 b 2, and 423 b 2. Photos 420 c, 421 c, 422 c, and 423 c are shown inside folder “Shira Camp” 403 b as files 420 c 2, 421 c 2, 422 c 2, and 423 c 2. Photos 420 d, 421 d, 422 d, and 423 d are shown inside folder “Barranco Camp” 404 b as files 420 d 2, 421 d 2, 422 d 2, and 423 d 2. Photos 420 e, 421 e, 422 e, and 423 e are shown inside folder “Barafu Camp” 405 b as files 420 e 2, 421 e 2, 422 e 2, and 423 e 2. Photos 420 f, 421 f, 422 f, and 423 f are shown inside folder “Uhuru Peak” 406 b as files 420 f 2, 421 f 2, 422 f 2, and 423 f 2. In some implementations, the photos may be grouped in folders named after the event date, such as “Kilimanjaro Day 1”, “Kilimanjaro Day 2”, and so on. As should be understood by those of skill in the art, the method of grouping is also applicable for content types other than photos, such as videos, notes, documents, audio, and the like. Another exemplary implementation of a folder structure is shown in FIG. 7F where contents stored on an electronic device are shown in different folders such as “Albums”, “Camera Roll”, “Documents”, “Favorites”, “Latest”, “Screenshots”, “Smart Albums” and “Videos”. - A more detailed implementation of remote content mapping when
client application 203 is presenting using a virtual file system adapter 208 is shown in FIG. 4D. A photo and video storage container 430, equivalent to content storage container 211 on a server device 201, contains albums 431 and 432, where the album 431 contains a photo 433, video 434, etc., while the album 432 contains a photo 435, video 436, etc. A contact database 450, equivalent to content storage container 211 on a server device 201, contains contact info entries. A file system volume 460, equivalent to content storage container 211 on a server device 201, contains files stored in folder tree 461 with file 465 at the root, and files 463 and 464 inside subfolder 462. An audio or music storage 440, equivalent to content storage container 211 on a server device 201, contains audio content; the containers 430, 440, 450, and 460 may reside on the same or different server devices 201. Client application 203, accessing the content of the containers via the virtual file system adapter 208, will present the remote contents as a folder tree structure inside file system volume 470, with folder 471 as the root. Client application 203 using the interaction controller 303 together with the content aggregation controller 306 maps the structure of the aggregated content as follows: (i) photo and video container 430 is mapped as remote subfolder 430 a, album 431 is mapped as remote subfolder 431 a, album 432 is mapped as remote subfolder 432 a, photo 433 is mapped as file 433 a under remote subfolder 431 a, video 434 is mapped as remote file 434 a under remote subfolder 431 a, photo 435 is mapped as remote file 435 a under remote subfolder 432 a, and video 436 is mapped as remote file 436 a under remote subfolder 432 a, (ii) contact database 450 is mapped as remote subfolder 450 a with the contact info entries mapped as remote files, (iii) audio storage 440 is mapped as subfolder 440 a with its audio content mapped as remote files, and (iv) file storage 460 is mapped as remote subfolder 460 a, its subfolder 462 is mapped as remote subfolder 462 a, file 463 is mapped as remote file 463 a, file 464 is mapped as remote file 464 a, and file 465 is mapped as remote file 465 a. - Establishing remote content access on the
server device 201 from a client device 202 first includes "content access privileges assignment", which occurs on the server device 201 and involves selecting and assigning the access privileges to the contents to be shared with client device 202. The assignment of access privileges may or may not involve user 221. In the case user 221 is not involved with the access privileges assignment, the server application 205 may incorporate a special algorithm based on predefined rules to assign the access privileges on the contents. For example, server application 205 may automatically assign read-only privileges for photos taken at a current location to all nearby client devices. Secondly, "device access authorization" occurs when the client device 202 requests access to the content stored on server device 201, to further prevent random access from just any nearby device. Depending on the implementation, either one of content access privileges assignment and device access authorization may be provided separately without the other. - To perform the assignment of content access privileges, the user of a server device has to set the access privileges of the content to be accessible by nearby client devices. The access privilege types may include, but are not limited to: allow view, allow copy, allow download, allow modification, allow delete, allow adding child content, allow adding comment, allow tagging, allow marking as favorite/like, etc. The access privileges of the content may be applied to different scopes, such as to anyone nearby (public), a specific group of users, a specific group of devices, a specific user, or a specific device. Any content not assigned to a scope shall be private by default. A single content item may be assigned to multiple scopes at the same time.
For example, in a conference a user may choose to share his business card with anyone nearby, while in a company a team member may share certain contents or groups of contents only with devices of team members, or certain content may be shared only with one's own devices (private).
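The privilege types and scopes described above can be sketched as a small data model. The following is a minimal illustration only; the names (`ScopeRule`, `ContentItem`, `privileges_for`) and the string encoding of scopes are assumptions of this sketch, not part of the disclosure.

```python
from dataclasses import dataclass, field

@dataclass
class ScopeRule:
    """One scope assignment: who may do what with a content item."""
    scope: str                               # "public", "group:<id>", "user:<id>", "device:<id>"
    privileges: set = field(default_factory=set)   # e.g. {"view", "copy", "modify"}

@dataclass
class ContentItem:
    content_id: str
    rules: list = field(default_factory=list)      # a single item may carry multiple scopes

    def privileges_for(self, user_id, device_id, groups=()):
        """Union of the privileges granted by every matching scope.

        Content with no rules assigned is private by default (empty set).
        """
        granted = set()
        for rule in self.rules:
            if (rule.scope == "public"
                    or rule.scope == f"user:{user_id}"
                    or rule.scope == f"device:{device_id}"
                    or any(rule.scope == f"group:{g}" for g in groups)):
                granted |= rule.privileges
        return granted
```

With this model, the conference business card above would carry a `public` rule, a team document a `group:` rule, and an item with no rules remains private by default.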
- The method of setting the access privileges and/or the scope may be performed manually or automatically for each piece of content or group of contents. Manual setting of access privileges and/or scope may be performed by the server's user by selecting and assigning them to each content item or group of contents, either in advance or upon request. Automatic assignment of content access privileges and/or scope may be achieved by evaluating the content metadata together with other conditions. For example, access privilege may be granted automatically to a person in the contact list or social network of the server's user who was at the same time and location at which the content (photo) was taken. This may be further refined by face detection of the person in the case the content is a photo. When a client device accesses remote content via the virtual file system, the remote content access privileges are mapped into file system permissions; for example, allow read is mapped as a file read permission, allow modification is mapped as a file write permission, and so on for other types of access privileges. Once the access privileges are assigned on the
server device 201 is able to share its contents with any nearby client device 202. - Accessing contents stored in
content storage container 211 of server device 201 from client device 202 depends on a client application 203 running on the client device 202 and a server application 205 running on server device 201. Before client application 203 can access the remote content on server device 201, it has to follow the "device access authorization" process described in the flowcharts shown in FIGS. 5A, 5B, and 5C. - A
client device 202 may initiate a remote access request following the flowchart of FIG. 5A, starting from step 501. To access the contents of server device 201, the client application 203 has to scan for and discover any available server device 201 in the vicinity as in step 502. A discovered electronic device, as a server, on a client application user interface running on an OS is shown in the implementation of FIG. 7B, where discovered devices are listed under "Nearby devices". Client application 203 has to select the discovered server device 201 from the list before accessing the content as in step 503. The process of selecting the server device 201 in step 503 may or may not involve input from the user 220. In case user 220 is not involved in the selection of the server device 201, the client application 203 may automatically select the server device 201 based on certain criteria. In one implementation, the client application 203 may make a decision based on the current or last system state, for example, in the case of auto-reconnection after a sudden network breakdown. In another implementation, the client application 203 may incorporate a specific algorithm according to the application of the system to select the server device 201 to access, for example, selecting a server device 201 that is registered in the whitelist. After selecting the server device 201, the client application 203 will proceed to perform the process of remote access request as in step 510. The unique identifier of the server device 201 may be in the form of, but not limited to, Device Unique ID, Device Name, User ID/Login, Contact Info, or any other unique identifier of a user or machine as exemplified in FIG. 2D. - In some implementations, the
server device 201 may also trigger the initiation of the remote access request by client device 202. This process follows the flowchart shown in FIG. 5B, starting from step 504, where the server application 205 is started and running on the server device 201. The server application 205 scans for and discovers any available client devices 202 in the vicinity thereof as in step 505. Server application 205 will select the discovered client device 202 from the list. The process of selecting the client device 202 in step 506 may or may not involve input from the user 221. In case user 221 is not involved in the selection of the client device 202, in one implementation the server application 205 may automatically select the client device 202 based on certain criteria. In one implementation, the server application 205 may make a decision based on the current or last system state, for example, in the case of auto-reconnection after a sudden network breakdown. In another implementation, the server application 205 may incorporate a specific algorithm according to the application of the system to select the client device 202, for example, if the client device 202 was registered in the whitelist. After client device 202 is selected by server application 205, the server application 205 will notify the client application 203 running on client device 202 to send a remote access request to itself (the server application 205) as in step 507, followed by the process of remote access request as in step 510. The unique identifier of the client device 202 may be in the form of, but not limited to, Device Unique ID, Device Name, User ID/Login, Contact Info, or any other unique identifier of a user or machine as exemplified in FIG. 2D. -
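The automatic peer-selection criteria described above (auto-reconnection based on last system state, then whitelist membership) might be sketched as follows. All names here (`NearbyDevice`, `select_peer`) are illustrative assumptions; the disclosure only requires that selection can proceed without user input based on such criteria.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class NearbyDevice:
    identifier: str   # Device Unique ID, Device Name, User ID/Login, etc.
    name: str

def select_peer(discovered, whitelist, last_connected_id=None) -> Optional[NearbyDevice]:
    """Auto-select a discovered peer without user input.

    1. Auto-reconnect: if the peer of the previous session reappears
       (e.g., after a sudden network breakdown), select it.
    2. Whitelist: otherwise select the first discovered peer whose
       identifier is registered in the whitelist.
    3. Fall back to None, meaning the user must choose manually.
    """
    for device in discovered:
        if last_connected_id and device.identifier == last_connected_id:
            return device
    for device in discovered:
        if device.identifier in whitelist:
            return device
    return None
```

The same sketch applies symmetrically whether the client selects a server (FIG. 5A) or the server selects a client (FIG. 5B).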
FIG. 5C shows a flowchart of the processing of a remote access request. The process starts from step 511, where client application 203 sends a remote access request to the server application 205 running on the server device 201. In step 512, the access control component 314 of the server application 205 will check if the identifier of client device 202 is registered in its blacklist. If the identifier of client device 202 is registered in the blacklist, the server application 205 is disconnected from client application 203. If the identifier of client device 202 is not registered in the blacklist, server application 205 will proceed to check the identifier against the whitelist in step 513. The unique identifier of the client device 202 may be in the form of, but not limited to, Device Unique ID, Device Name, User ID/Login, Contact Info, or any other unique identifier of a user or machine as exemplified in FIG. 2D. - In step 513, the
access control component 314 of the server application 205 will check if the client device 202 is in its whitelist. If the identifier of client device 202 is found in the whitelist, the server application 205 is connected to client application 203 on client device 202, as in step 520. The unique identifier of the client device 202 may be in the form of, but not limited to, Device Unique ID, Device Name, User ID/Login, Contact Info, or any other unique identifier of a user or machine as exemplified in FIG. 2D. Next, in step 522, the interaction controller 303 of the client application 203 presents the remote contents of server device 201 to user 220. The presentation to the user 220 may be in the form of the user interface of client application 203, a custom application 207 via API 209, or a 3rd party application 206 via the virtual file system adapter 208. Upon completion of step 522, user 220 is able to interact with the content of server device 201 remotely. - If the identifier of
client application 203 is not found in the whitelist in step 513, the server application 205 will ask the user 221 to authorize the remote access request in step 515. An exemplary implementation of step 515 is shown in FIG. 7C, where an electronic device, as a server, receives an authorization request from another electronic device as a client (e.g., "Neeraj's MacBook Pro") to access photos stored thereon. The user 221 will then respond to the remote access authorization request in step 516. In one implementation, the steps inside 514 may be performed automatically based on certain criteria without involvement of the user 221. For example, the server application 205 may make a decision based on the current or last system state, such as in the case of auto-reconnection after a sudden network breakdown. In another implementation, the server application 205 may incorporate a specific algorithm according to the application of the system to authorize the remote access request. In step 516, there are four possible authorization responses that can be given by the user 221, or automatically by the system in case user input is not involved: "Authorize remote access for current session only" 516A, "Authorize remote access for current and future sessions" 516B, "Do not authorize remote access for current session" 516C, or "Do not authorize remote access for current or future sessions" 516D. An exemplary implementation after authorization is shown in FIG. 7D. - At
steps 516A and 516B, in some implementations the user may also set the access privileges of the contents or groups of contents to be shared with the client application 203. The process of content access privileges assignment may be performed at the same time as the process of device access authorization. - In case the
user 221 gives the authorization type 516A, the server application 205 is connected to client application 203 as in step 520, followed by presentation of the remote content by interaction controller 303 of client application 203 to the user 220 in step 522. The presentation to the user 220 may be in the form of the user interface of client application 203, a custom application 207 via API 209, or a 3rd party application 206 via the virtual file system adapter 208. Upon completion of step 522, user 220 is able to interact with the content of server device 201 remotely. - In case the
user 221 gives the authorization type 516B, in step 517 the server application 205 will register the identifier of client device 202 in the whitelist of the access control component 314 so the client device 202 is automatically authorized the next time it requests to access the content of the server device 201, followed by steps 520 and 522. - In case the
user 221 gives the authorization type 516C, the server application 205 will notify the client application 203 that its remote access request is denied in step 519. In step 519, the client application 203 may or may not notify the user 220. As a result, the server application 205 is disconnected from client application 203 of the client device 202 in step 521. - In case the
user 221 gives the authorization type 516D, the server application 205 will register the identifier of client device 202 in the blacklist of the access control component 314 so the client device 202 is automatically rejected the next time it requests to remotely access the content of server device 201, followed by steps 519 and 521. -
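The device access authorization flow of FIG. 5C, including the four response types 516A-516D, can be condensed into a short sketch. The function and variable names are illustrative assumptions; `ask_user` is a callback standing in for the prompt of steps 515/516.

```python
# Response codes mirroring the four authorization responses 516A-516D.
SESSION_ONLY, ALWAYS, DENY_ONCE, DENY_ALWAYS = "516A", "516B", "516C", "516D"

def authorize_request(client_id, whitelist, blacklist, ask_user):
    """Return True if the client is connected, False if disconnected.

    whitelist and blacklist are mutated as in the 516B and 516D
    branches, so future requests are decided without asking the user.
    """
    if client_id in blacklist:          # step 512: blacklisted -> disconnect
        return False
    if client_id in whitelist:          # step 513: whitelisted -> connect (step 520)
        return True
    response = ask_user(client_id)      # steps 515/516: ask the server's user
    if response == ALWAYS:              # 516B: remember for future sessions
        whitelist.add(client_id)
        return True
    if response == SESSION_ONLY:        # 516A: connect for this session only
        return True
    if response == DENY_ALWAYS:         # 516D: reject now and in the future
        blacklist.add(client_id)
    return False                        # 516C/516D: deny (steps 519, 521)
```

Note how a 516B response makes the next request from the same client succeed without reaching the `ask_user` callback at all, matching the whitelist shortcut of step 513.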
FIGS. 6A to 6F show sequence diagrams of exemplary communications in the system architecture of FIG. 2A, where a client application 203 presents access to content using different implementations including: (i) access using the user interface of client application 203, (ii) access using the file system interface 222, and/or (iii) access using the API 209. Whenever it is stated that user 220 is accessing or interacting with remote content via the client application 203, it is assumed that the user may perform the action with any of the three implementations mentioned, unless explicitly stated otherwise. Whenever it is stated that the server application 205 is accessing or performing an operation on the storage container 211, it is implied that the communications or the actions are carried out via the storage container interface 210. Whenever it is stated that client application 203 is communicating with the server application 205, and vice versa, it is implied that the communication involves an exchange of protocol messages over the peer-to-peer wireless network. - The initial process of accessing the server device's contents consists of three operations: (i) an
access authorization operation 600A, (ii) a content list and metadata retrieval operation 600B, and (iii) an additional content metadata operation 600C. In operation 600A, user 220 using the client application 203 selects a discovered server device 201 at step 601. FIG. 7A shows a screenshot of one implementation of the client application (e.g., "AirMount") in the menu bar of a Mac OS. -
Client application 203 sends a protocol message to server application 205 running on server device 201 to access the server's storage container 211. Server application 205 may reply with the authorization status to the client application 203 according to the implementation shown in FIG. 5C at step 603. FIG. 7B shows a screenshot of one implementation of the client application in the menu bar showing a nearby server electronic device (e.g., "Neeraj's iPhone 6s"). - Next, in
operation 600B, the client application 203 will request the remote content list together with its metadata from the server application 205. Starting with step 604, the client application 203 sends a protocol message requesting the remote content list and metadata to server application 205. Server application 205 will translate the protocol message into an instruction to fetch the authorized content list and its metadata from the storage container(s) 211 at step 605. Storage container 211 will then reply with the content list together with its metadata at step 606. After receiving the content list and its metadata, at step 607 the server application 205 will encode it into an intermediate data structure, such as JSON or any other data encoding type as in the implementation shown in FIG. 2C, to be sent back to client application 203 at step 608. Upon receiving the encoded remote content list and metadata, at step 609 the client application 203 will decode it and present the remote content list to the user 220 at step 611. Before presenting the remote content list to the user 220, in some implementations at step 610 the client application 203 may cache, or store into memory of the client device, the remote content list and its metadata. At this time, the client application 203 has most of the information of the remote content to which it may have access, generally consisting of a list of content items identified by unique identifiers and metadata associated with each content item, such as name, creation date, modification date, content size, etc. The unique identifier of a content item may be in the form of, but not limited to, a content resource path or a unique identifier returned by the storage interface 210. The information received by the client application 203 at this point is sufficient to present the list of remote contents to the user 220 as a lightweight representation that is representative of the remote contents thereof. FIG.
7E shows a screenshot of one implementation of the Mac OS with the Finder displaying the photo and video contents of the server device (e.g., "Neeraj's iPhone 6s"), with photo albums organized into corresponding folders. In FIG. 7E, the lightweight representation is understood to indicate that while the remote content of the server appears, from the Finder, to be located at the client, the representation of the remote content is generated from decoding the intermediate data structure, which includes a content list and metadata of the listed content rather than the actual data of the content. In this sense, the encoded remote content list and metadata 608 is lightweight in that it does not include the actual data of the content and requires less bandwidth to be transmitted than does the actual data of each content item of the content list as a whole. - Nevertheless, some additional information of the content items may not be provided by the
server application 205 at this time, such as icons, location info, or additional metadata like EXIF. In order to provide a richer content presentation at the client device, in some implementations a second request for additional metadata is sent in operation 600C. Namely, the client application 203 sends a protocol message to server application 205 to request additional metadata of the remote content at step 612. Server application 205 will then translate the protocol message into additional metadata fetching operation(s) on the storage container 211 at step 613. After storage container 211 returns the additional content metadata at step 614, the server application 205 will again encode it into an intermediate data structure at step 615 and send it via protocol message to client application 203 at step 616. Client application 203 will decode the intermediate data structure from the protocol message at step 617 and may cache the decoded additional metadata at step 618. The decoded additional metadata will then be combined with the previous metadata of the content received in operation 600B, and the remote content list presentation is refreshed with the newly updated metadata for the user 220 at step 619. Upon completion of operation 600C, the client application 203 will present the content list in a rich representation; for example, a photo may be displayed as a file with its associated thumbnail instead of a generic file icon, as in the implementation where user 220 accessing the client application 203 uses the 3rd party application 206 via the virtual file system method. In the implementation where user 220 is presented the aggregated content as an interactive document format, a photo may be displayed as a low resolution version during operation 600B, which is then updated to a higher resolution in operation 600C.
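The two-phase retrieval of operations 600B and 600C might be sketched as follows. The message shapes and field names are assumptions of this sketch; the disclosure only requires an intermediate encoding such as JSON carrying first a lightweight listing and later the richer metadata.

```python
import json

def encode_content_list(container):
    """Server side (steps 605-608): basic listing, no content data."""
    items = [{"id": item_id,
              "name": meta["name"],
              "size": meta["size"],
              "modified": meta["modified"]}
             for item_id, meta in container.items()]
    return json.dumps({"op": "600B", "items": items})

def encode_additional_metadata(container, ids):
    """Server side (steps 613-616): EXIF, location info, icons, etc."""
    extra = {item_id: container[item_id].get("exif", {}) for item_id in ids}
    return json.dumps({"op": "600C", "extra": extra})

def merge_metadata(cache, listing_msg, extra_msg=None):
    """Client side (steps 609-610, 617-619): decode, cache, then refine."""
    for item in json.loads(listing_msg)["items"]:
        cache[item["id"]] = dict(item)
    if extra_msg:
        for item_id, exif in json.loads(extra_msg)["extra"].items():
            cache[item_id]["exif"] = exif
    return cache
```

The 600B message alone is enough to draw a browsable list; the 600C message arrives later and only refreshes the presentation, which is what keeps the first paint fast.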
Splitting content metadata fetching into two or more operations, such as operations 600B and 600C, benefits the user experience. In operation 600B, the data transfer is controlled to optimize network bandwidth, so the user sees all the permitted contents, can recognize each piece of content, and can navigate within the content list. While the user is browsing the content list, operation 600C is started in order to furnish additional metadata so that the user 220 is provided a better representation of the aggregated remote content. Up to operation 600C, the user 220 is able to remotely browse all the authorized contents of the storage container 211 without any of the content itself being transferred to the client application 203. -
FIG. 6C shows a sequence diagram of user 220 reading or opening remote content stored in the storage container 211 on the server device 201. Beginning with operation 620A, when the user 220 inputs a request to read remote content via client application 203 for the first time at step 621, the client application 203 will send a protocol message to server application 205 requesting the remote content data at step 622A. Server application 205 will then convert the protocol message into an operation to fetch the content data from storage container 211 at step 623. After storage container 211 returns the content data at step 624, server application 205 will encode the content data in an intermediate data structure and transmit it to the client application 203 at step 626. Client application 203 will decode the received remote content data at step 627. Client application 203 may or may not cache the received remote content data, depending on the implementation, at step 628. Client application 203 will then present the decoded remote content to the user at step 629A. A subsequent open operation on the same remote content by the user 220 will follow operation 620B. Upon receiving the open request from the user 220 at step 621B, the client application will try to load the cached remote content first at step 622B. If the cached content is found, it will immediately return and present the remote content to the user 220 at step 629B. If the cached content is not found, it will follow the same sequence as operation 620A. The implementation of remote content caching significantly increases the responsiveness of the client application 203 to the user 220. -
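The cache-first read path of operations 620A/620B reduces to a few lines. In this sketch, `fetch_remote` is a hypothetical callback standing in for the protocol round trip of steps 622A-627; the names are not part of the disclosure.

```python
def open_remote_content(content_id, cache, fetch_remote):
    """Return content data, consulting the client-side cache first.

    First open (620A): fetch over the protocol, then cache (step 628).
    Subsequent opens (620B): served from the cache (step 622B) with no
    network round trip, which is what makes the client feel responsive.
    """
    if content_id in cache:             # step 622B: cached copy found
        return cache[content_id]
    data = fetch_remote(content_id)     # steps 622A-627: request and decode
    cache[content_id] = data            # step 628: cache for next time
    return data
```

A cache miss simply falls through to the 620A sequence, exactly as the flowchart prescribes.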
FIG. 6D shows a sequence diagram of creating content via the client application 203. In operation 630A in FIG. 6D, the user 220 may create new remote content via client application 203 in step 631. When user 220 creates new content, client application 203 may first cache the content data at step 632 in a memory area of the client device, or directly send a protocol message with the request to create remote content to server application 205 at step 633. The protocol message sent to the server application 205 in step 633 will include the data and the metadata of the remote content to furnish the information needed when creating the actual content in the storage container 211. Upon receiving the protocol message, server application 205 will decode it and extract the content data together with its metadata at step 634. Server application 205 will then create the content in the storage container 211 at step 635. After the storage container 211 creates the content, it will return the result to the server application 205 at step 636A in case of success, and step 636B in case of failure. In some implementations, the return status from storage container 211 may be more than just success or failure, depending on the type of the storage container 211 of the server device 201. The result will then be propagated to the client application 203 and presented to the user 220 at step 638A or 638B, respectively. In the case of success in step 638A, the client application may notify the user by updating the presentation of the content, such as setting the creation progress to 100%, or showing a completion message, etc. In the case of failure in step 638B, client application 203 may notify the user by showing an error message or some other message indicating a failed operation. -
FIG. 6E shows a sequence diagram of deleting content via the client application 203. In operation 640A in FIG. 6E, the user 220 performs a delete operation on remote content via the client application 203 at step 641. The client application 203 will then send a protocol message requesting the server application 205 to delete the remote content at step 642. The server application 205 will decode the protocol message and attempt a delete operation on the content in the storage container 211. Some storage containers 211, such as a photo library, may require the server device's user 221 to give confirmation before a delete action can actually be performed. At step 644, the storage container 211 may ask the user 221 to give confirmation of the content deletion. One example of step 644 is shown in FIG. 7H, which shows a screenshot of one implementation of the server app showing the request from a client application to delete photo content on the server device. If the user 221 confirms the deletion at step 645A, the sequence proceeds to operation 640B. The storage container 211 will return to the server application 205 a status that the content deletion succeeded at step 646A. Server application 205 will propagate the success status to the client application 203 via protocol message at step 647A. Upon receiving the success status in step 647A, client application 203 first deletes the cached remote content, if any, at step 648A, followed by a notification to the user 220 that the delete operation is successful at step 649A. The availability of the remote content in the cache depends on whether or not the user 220 has previously opened the remote content in operation 620A. When the user 221 denies the content deletion at step 645B, the storage container 211 will return a fail status to server application 205 at step 646B in operation 640C.
In some implementations, the return status from storage container 211 may be more than just success or failure, depending on the type(s) of the storage container(s) 211 of the server device 201. Server application 205 will then propagate the failure status to the client application 203 via a protocol message at step 647B. Upon receiving and decoding the protocol message, the client application 203 will notify the user 220 that the delete request has failed at step 649B. -
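The delete flow of FIG. 6E, with its server-side confirmation step, might be sketched as below. Here `confirm` is a hypothetical callback standing in for the prompt of step 644 (as pictured in FIG. 7H), and the container is modeled as a plain dictionary; none of these names come from the disclosure.

```python
def delete_remote_content(content_id, storage, client_cache, confirm):
    """Return True on confirmed deletion (640B), False on denial (640C)."""
    if not confirm(content_id):          # steps 644/645B: server user denies
        return False                     # steps 646B-649B: failure propagated
    del storage[content_id]              # steps 645A/646A: deletion applied
    client_cache.pop(content_id, None)   # step 648A: drop stale cached copy
    return True                          # step 649A: notify user of success
```

Dropping the stale cached copy on success matters because a later open would otherwise be served the deleted content from the cache (operation 620B).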
FIG. 6F shows a sequence diagram of the modify operation on remote content. In particular, edits can be made in place, so there is no need for additional steps to send the file back and forth between the server and client device. In operation 650A of FIG. 6F, user 220 performs a modification of remote content via the client application 203 at step 651. The client application 203 will send a protocol message to server application 205 requesting to modify remote content at step 652. Server application 205 will decode the protocol message and perform the modification operation on the content in the storage container 211 at step 653. The protocol message sent at step 652 may include the modified content data and/or content metadata. The modified content data may be the entire data or differential data from the previous version. Some storage container(s) 211, such as a photo library, may require the server device's user 221 to give confirmation before a modification can be applied to content. At step 654, the storage container will ask the user 221 to give confirmation of the content modification request. One implementation of step 654 is shown in FIG. 7G, which shows a screenshot of one implementation of the server app showing a request from a client application to modify photo content on the server device. If the user 221 confirms the modification request, the sequence follows operation 650B; otherwise it follows operation 650C. After user 221 confirms the modification request at step 655A, the storage container 211 will return the status as success at step 656A, which also means the modification is applied to the content at the server device. For example, a modification request on a photo may be a cropping operation. Upon confirmation of the modification by the user, the cropped photo is applied to the storage container 211. Server application 205 will propagate the success status to client application 203 in a protocol message at step 657A.
Upon decoding the protocol message with the success status, the client application 203 may update the cached remote content, if any, with the modified version at step 658A, so a subsequent request to open the remote content will already have the modified version of the content. Client application 203 will then notify the user 220 that the content modification operation is successful at step 659A. The notification in step 659A may be presented to the user with the modified form of the content, for example a cropped photo in the case the modification operation is cropping. In the case user 221 denies the content modification request at 655B, the storage container 211 will not apply the modification to the stored content, and will return the modification status as failed to the server application 205 at 656B. Server application 205 will propagate the failure status to the client application 203 via a protocol message at 657B. Client application 203 will decode the protocol message with the failure status and notify the user 220 at 659B, so the user 220 will still access the remote content unmodified. - In one implementation, the
client application 203 may further manage operation priority handling to prioritize plural operations performed by the user 220. In some implementations, there are three operation categories that may be performed remotely by the client application 203 on the storage container 211: (i) Category A operations, which are content list and metadata retrieval operations 600B, (ii) Category B operations, which are additional content metadata operations 600C, and (iii) Category C operations, which consist of read operations 620A & 620B, create operations 630A, delete operations 640A, or modify operations 650A. To increase the responsiveness of the client application 203 to the user 220, operations of Category C may take the highest priority, followed by Category A and then Category B. -
FIG. 6G shows a sequence diagram of priority handling for remote content operations of different categories. An example of operation priority handling between different operation categories is shown in FIG. 6G. The server application 205 manages an operation stack to suspend one or more operations when a higher priority operation is to be performed in advance. In FIG. 6G, an operation 600C, which belongs to Category B, is requested by client application 203 to be performed on the server application 205 at step 661. Server application 205 is processing the operation 600C at step 662. Before completing the processing of operation 600C, server application 205 receives a request from client application 203 to process an operation 620A, which is a Category C operation, at step 663. As soon as the server application 205 receives the request for operation 620A, it suspends the currently running operation 600C at step 664. The suspended operation 600C will then be pushed onto the stack at step 665, and the stack now holds the suspended operation 600C as shown in 671B. Server application 205 will then continue to process operation 620A at step 666 and send the response of operation 620A to the client application 203 at 667. Upon finishing the processing of step 666, the server application 205 will pop the operation 600C back from the stack at step 668, which returns the stack to the state before any of the operations were performed, as shown at 671C. Once the operation 600C is popped from the stack, it is resumed from its last operational state at step 669 and a response is sent to the client application 203 at step 670. The foregoing sequence ensures that the current intention of the user 220 is fulfilled first before completing other less urgent tasks. Similar management is applied for other operations. -
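The suspend/push/pop scheme of FIGS. 6G and 6H might be sketched as a small scheduler. The `OperationScheduler` class, its method names, and the numeric priority table are assumptions of this sketch (with Category C highest, as suggested above); the disclosure only requires an operation stack with preemption.

```python
# Category C (read/create/delete/modify) > Category A (listing) > Category B (extra metadata).
PRIORITY = {"C": 3, "A": 2, "B": 1}

class OperationScheduler:
    def __init__(self):
        self.stack = []        # suspended operations, most recent on top
        self.current = None    # (name, category) now being processed
        self.log = []          # completion order, for illustration

    def submit(self, name, category):
        """New request: preempt the running operation if this one has
        equal or higher priority (equal covers the FIG. 6H same-category
        case, where the last-submitted operation is served first)."""
        if self.current and PRIORITY[category] >= PRIORITY[self.current[1]]:
            self.stack.append(self.current)          # suspend + push (steps 664-665)
            self.current = (name, category)
        elif self.current is None:
            self.current = (name, category)
        else:
            self.stack.insert(0, (name, category))   # lower priority waits its turn

    def finish_current(self):
        """Complete the running operation, then pop and resume (step 668)."""
        self.log.append(self.current[0])
        self.current = self.stack.pop() if self.stack else None
```

Replaying the FIG. 6G scenario, a 620A read submitted during a 600C metadata fetch completes first, after which the 600C operation is popped and resumed from its last state.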
FIG. 6H shows a sequence diagram of priority handling for remote content operations of the same category. Within the same operation category, the operation that is performed last will always be served first. For example, in FIG. 6H, client application 203 performs an operation 620A on remote content A to server application 205 at step 672. While server application 205 processes the request of operation 620A on content A at step 673, client application 203 performs another operation 620A on another remote content B at step 674. Upon receiving the request of operation 620A on content B, server application 205 will immediately suspend operation 620A on content A at step 675, and push the suspended operation 620A on content A onto the stack at step 676. The stack, initially in state 682A, will now change to state 682B with operation 620A on content A sitting on the top of the stack. Server application 205 will then process the operation 620A on content B at step 677, and return the response to client application 203 at step 678. Upon completing the processing of operation 620A on content B, the server application 205 will pop the operation 620A on content A back from the stack at step 679, resume it at step 680, and send the response to client application 203 at step 681. After this, the stack is in state 682C, back to the same state as before any of the operations were performed. - In some implementations, operation Category A has a higher priority than Category B since the Category A operation has a more significant impact on the browsing experience of the user compared to Category B. It is assumed that the user's browsing and interaction experience with the content list should not be compromised in exchange for a richer content presentation. This assumption is more prominent in the case that the
client application 203 is accessed viafile system interface 222 where the 3rdparty application 206 may navigate on the directory tree in a random and quick manner, for example navigating folder trees using a file manager such as Finder in Mac OS. Depending on the application and system requirements, the operation categories may be defined as more than just three types and set as different priority levels for each in some implementations. The assignment of the operations into a category may also depend on the application and system requirements of the implementations. An operation may belong to one or more category depending on the application or system conditions or may even change categories at runtime. - As described above with reference to the drawings, content(s) of nearby server(s) are presented to be interfaced with at client(s) over a peer-to-peer direct wireless network. The clients and servers may be concurrently provided in one or more devices. Among the advantages of the peer-to-peer direct wireless network, conventional network infrastructure and wired connections can be foregone. Moreover, once connected, the clients can retrieve, present, interact and operate on the aggregated contents of the servers via a lightweight representation of the content of the servers. Aggregated content(s) may be presented in the form of an interactive document, a filesystem volume, and/or an API, different from the original form the content(s) are stored at each server. Further, authorizations to access content can be provided at the servers to limit the clients directly interactions and operations on the content(s) of the server(s). The types of interactions the client may perform can vary by presentation but generally include viewing, browsing, editing, deleting as well as liking tagging, and commenting.
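The preemption rule implied by FIGS. 6G and 6H can be condensed into a single predicate: an incoming request preempts the running operation when its category has strictly higher priority, or when both belong to the same category (last request served first). The numeric levels below are assumptions for the sketch; the patent only establishes that the Category C operation in FIG. 6G preempts a Category A operation, and that Category A outranks Category B:

```python
# Illustrative preemption rule combining FIG. 6G (cross-category
# priority) and FIG. 6H (within one category, the latest request is
# served first). The numeric priority levels are assumed: C above A,
# per the FIG. 6G example, and A above B, per the description.

PRIORITY = {"C": 3, "A": 2, "B": 1}

def should_preempt(running_category, incoming_category):
    """Return True if the incoming operation should suspend the
    running one and be served first."""
    if incoming_category == running_category:
        return True   # FIG. 6H: same category, last request first
    return PRIORITY[incoming_category] > PRIORITY[running_category]

# FIG. 6G: operation 620A (Category C) suspends 600C (Category A).
assert should_preempt("A", "C")
# FIG. 6H: a second same-category operation suspends the first.
assert should_preempt("A", "A")
```

Because the description allows categories to be redefined, re-prioritized, or even changed at runtime, a real implementation would presumably treat `PRIORITY` as mutable configuration rather than a constant.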
- Although specific details of implementations are described with regard to the architectures and sequence diagrams presented in the figures, certain acts shown in the figures need not be performed in the order described, may be modified, and/or may be omitted entirely, depending on the circumstances. As described in this application, the aforementioned features may be implemented using software, hardware, firmware, or a combination thereof. Moreover, the acts and methods described may be implemented by a computer, smartphone, or other type of computing device based on instructions stored in memory, the memory comprising one or more computer-readable storage media.
- Such media may be any available physical media accessible by the one or more devices to implement the instructions stored thereon. Such media may include, but are not limited to, random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other solid-state memory technology, compact disk read-only memory (CD-ROM), other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store non-transitory computer-readable information and which can be accessed by a processor for execution.
- Furthermore, it should be emphasized that conditional language, such as, among others, "can," "could," "might," or "may," unless specifically stated otherwise or otherwise understood within the context as used, is generally intended to convey that certain implementations include, while other implementations do not include, certain features, elements, and/or steps. Thus, such conditional language is not generally intended to imply that features, elements, and/or acts are in any way required for one or more implementations, or that one or more implementations necessarily include logic for deciding, with or without user input or prompting, whether these features, elements, and/or steps are included or are to be performed in any particular implementation.
- It should be emphasized that the implementations described herein may be realized in any of various forms. For example, some implementations may be realized as a computer-implemented method, a computer-readable medium, or a computer system. In some implementations, a non-transitory computer-readable memory medium may be configured to store instructions and/or data, where the instructions cause processors of the computer system to perform the foregoing acts described herein. Although the implementations above have been described in considerable detail, numerous variations, modifications, and combinations of the disclosed implementations will become apparent to those skilled in the art having considered the disclosure in its entirety.
Claims (26)
1. A system, comprising:
a server electronic device configured to activate, from within a content-editor application running on the server electronic device, a component of a client electronic device; and the client electronic device, wherein the client electronic device is configured to:
control the component of the client electronic device to generate data for the content-editor application; and
modify a content-editing view of the content-editor application of the server electronic device using the data.
2. The system of claim 1, wherein the client electronic device is a smartphone or a tablet device, wherein the component of the client electronic device is a camera, and wherein the client electronic device is configured to control the component by operating the camera to capture an image, responsive to selection of an image capture option at the client electronic device.
3. The system of claim 2, wherein the client electronic device is configured to modify the content-editing view using the data by transmitting the captured image from the client electronic device to the content-editor application of the server electronic device, responsive to the selection of the image capture option at the client electronic device, and without further input to the client electronic device or the server electronic device.
4. The system of claim 3, wherein the content-editor application comprises a word processing application with a displayed document for editing in the content-editing view, and wherein the client electronic device is configured to transmit the captured image for display in the displayed document, without storing the captured image at the client electronic device.
5. The system of claim 4, wherein the server electronic device is further configured to, prior to the selection of the image capture option:
provide, for display at the client electronic device, a preview image stream from the camera; and
provide, for display within the displayed document at the server electronic device, the preview image stream from the camera.
6. The system of claim 2, further comprising a third electronic device, wherein the server electronic device is further configured to:
prior to activating the component of the client electronic device:
obtain, at the content-editor application, a list of communicatively coupled devices and an indication of one or more data-generating features for each of the communicatively coupled devices;
provide, for display with the content-editor application, a list of remotely obtainable content types based on the one or more data-generating features;
receive, with the content-editor application, a selection of one of the remotely obtainable content types; and
identify, with the content-editor application, the client electronic device and the third electronic device as available devices for providing the one of the remotely obtainable content types; and
activate, concurrently with activating the component of the client electronic device, a component of the third electronic device, responsive to the selection of the one of the remotely obtainable content types.
7. The system of claim 6, wherein the server electronic device is further configured to:
receive, with the content-editor application from the client electronic device, an indication that the client electronic device has been selected for obtaining the one of the remotely obtainable content types; and
deactivate, responsive to the indication, the component of the third electronic device.
8. The system of claim 7, wherein the indication comprises an indication of motion of the client electronic device.
9. The system of claim 7, wherein the indication comprises an indication of an image capture operation on the client electronic device.
10. The system of claim 7, wherein the indication comprises an indication of motion of a stylus associated with the client electronic device.
11. The system of claim 7, wherein the indication comprises touch input to a touchscreen of the client electronic device.
12. The system of claim 1, wherein the client electronic device is configured to:
receive an image from the server electronic device;
display the image;
receive image markup input via a touchscreen of the client electronic device; and
provide image markup metadata based on the image markup input to the server electronic device, without storing the image or the image markup metadata, and without sending the image to the server electronic device, wherein the data comprises the image markup metadata.
13. A method comprising:
controlling, by a client device, a component of the client device to generate data for a content-editor application running on a server device, the component having been activated from within the content-editor application running on the server device; and
modifying, by the client device, a content-editing view of the content-editor application of the server device using the generated data.
14. The method of claim 13, wherein the client device is a smartphone or a tablet device, wherein the component of the client device is a camera, and wherein the controlling comprises operating the camera to capture an image, responsive to selection of an image capture option at the client device.
15. The method of claim 14, wherein the modifying comprises transmitting the captured image from the client device to the content-editor application of the server device, responsive to the selection of the image capture option at the client device, and without further input to the client device or the server device.
16. A server device comprising:
a memory; and
at least one processor configured to:
provide, for display using a theme, a user-interface view of an application, the user-interface view including one or more selectable options for modifying the user-interface view, wherein the one or more selectable options include at least one option to obtain data from a client device that is communicatively coupled to the server device;
receive, via the application, a selection of the at least one option to obtain the data from the client device;
activate, via the application, a component of the client device to generate the data;
receive the data from the client device; and
modify the theme of the user-interface view using the data.
17. The server device of claim 16, wherein:
the at least one option to obtain the data from the client device comprises an option to obtain mood data using a camera or a sensor of the client device; and
the data comprises the mood data, the mood data indicative of a mood of a user of the client device.
18. The server device of claim 16, wherein the at least one processor is further configured to:
discover the client device;
identify one or more available features of the client device; and
identify one or more types of input that are obtainable using the one or more available features, at least one of the one or more types of input corresponding to the data.
19. The server device of claim 18, wherein the at least one processor is further configured to generate the one or more selectable options based on the one or more types of input.
20. The server device of claim 16, wherein the component of the client device comprises a camera, a touch screen, a stylus, a light sensor, a motion sensor, an activity sensor, or a location sensor.
21. The server device of claim 16, wherein the data from the client device comprises an image, a video, augmented reality content, image markup metadata, handwriting recognition data, freehand sketch data, steps data, heart rate data, electro-cardio data, calorie data, blood pressure data, or mood data.
22. The server device of claim 16, wherein the component of the client device is activated and the data is received without execution of the application on the client device.
23. The server device of claim 16, wherein the component of the client device is unrelated to the server device or to the application.
24. The server device of claim 16, wherein the component of the client device comprises at least one of a camera of the client device, a stylus of the client device, a sensor of the client device, or a health monitoring application associated with the sensor of the client device.
25. The server device of claim 16, wherein the application comprises at least one of a word processor, a presentation editor, an email editor, a spreadsheet, an image editing application, or a messaging application.
26. The server device of claim 16, wherein receiving the data comprises:
generating commands for operating the component of the client device; and
providing the generated commands to the client device with communications circuitry of the server device.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US18/222,704 US20230362242A1 (en) | 2016-07-22 | 2023-07-17 | Direct input from a nearby device |
Applications Claiming Priority (8)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201662365784P | 2016-07-22 | 2016-07-22 | |
US15/655,934 US10791172B2 (en) | 2016-07-22 | 2017-07-21 | Systems and methods for interacting with nearby people and devices |
US16/885,565 US10951698B2 (en) | 2016-07-22 | 2020-05-28 | Systems and methods to discover and notify devices that come in close proximity with each other |
US17/098,747 US11019141B2 (en) | 2016-07-22 | 2020-11-16 | Systems and methods to discover and notify devices that come in close proximity with each other |
US17/226,260 US11115467B2 (en) | 2016-07-22 | 2021-04-09 | Systems and methods to discover and notify devices that come in close proximity with each other |
US17/391,219 US11265373B2 (en) | 2016-07-22 | 2021-08-02 | Systems and methods to discover and notify devices that come in close proximity with each other |
US17/575,698 US20220141285A1 (en) | 2016-07-22 | 2022-01-14 | Systems and methods to discover and notify devices that come in close proximity with each other |
US18/222,704 US20230362242A1 (en) | 2016-07-22 | 2023-07-17 | Direct input from a nearby device |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US17/575,698 Continuation US20220141285A1 (en) | 2016-07-22 | 2022-01-14 | Systems and methods to discover and notify devices that come in close proximity with each other |
Publications (1)
Publication Number | Publication Date |
---|---|
US20230362242A1 true US20230362242A1 (en) | 2023-11-09 |
Family
ID=60990194
Family Applications (10)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US15/655,934 Active 2037-10-02 US10791172B2 (en) | 2016-07-22 | 2017-07-21 | Systems and methods for interacting with nearby people and devices |
US16/275,615 Active 2037-09-23 US10742729B2 (en) | 2016-07-22 | 2019-02-14 | Proximity network for interacting with nearby devices |
US16/885,565 Active US10951698B2 (en) | 2016-07-22 | 2020-05-28 | Systems and methods to discover and notify devices that come in close proximity with each other |
US17/098,747 Active US11019141B2 (en) | 2016-07-22 | 2020-11-16 | Systems and methods to discover and notify devices that come in close proximity with each other |
US17/226,260 Active US11115467B2 (en) | 2016-07-22 | 2021-04-09 | Systems and methods to discover and notify devices that come in close proximity with each other |
US17/391,219 Active US11265373B2 (en) | 2016-07-22 | 2021-08-02 | Systems and methods to discover and notify devices that come in close proximity with each other |
US17/575,698 Pending US20220141285A1 (en) | 2016-07-22 | 2022-01-14 | Systems and methods to discover and notify devices that come in close proximity with each other |
US18/222,704 Pending US20230362242A1 (en) | 2016-07-22 | 2023-07-17 | Direct input from a nearby device |
US18/222,699 Pending US20230412677A1 (en) | 2016-07-22 | 2023-07-17 | Direct input from a nearby device |
US18/222,706 Pending US20230362243A1 (en) | 2016-07-22 | 2023-07-17 | Direct input from a nearby device |
Family Applications Before (7)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US15/655,934 Active 2037-10-02 US10791172B2 (en) | 2016-07-22 | 2017-07-21 | Systems and methods for interacting with nearby people and devices |
US16/275,615 Active 2037-09-23 US10742729B2 (en) | 2016-07-22 | 2019-02-14 | Proximity network for interacting with nearby devices |
US16/885,565 Active US10951698B2 (en) | 2016-07-22 | 2020-05-28 | Systems and methods to discover and notify devices that come in close proximity with each other |
US17/098,747 Active US11019141B2 (en) | 2016-07-22 | 2020-11-16 | Systems and methods to discover and notify devices that come in close proximity with each other |
US17/226,260 Active US11115467B2 (en) | 2016-07-22 | 2021-04-09 | Systems and methods to discover and notify devices that come in close proximity with each other |
US17/391,219 Active US11265373B2 (en) | 2016-07-22 | 2021-08-02 | Systems and methods to discover and notify devices that come in close proximity with each other |
US17/575,698 Pending US20220141285A1 (en) | 2016-07-22 | 2022-01-14 | Systems and methods to discover and notify devices that come in close proximity with each other |
Family Applications After (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US18/222,699 Pending US20230412677A1 (en) | 2016-07-22 | 2023-07-17 | Direct input from a nearby device |
US18/222,706 Pending US20230362243A1 (en) | 2016-07-22 | 2023-07-17 | Direct input from a nearby device |
Country Status (1)
Country | Link |
---|---|
US (10) | US10791172B2 (en) |
Families Citing this family (32)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10791172B2 (en) * | 2016-07-22 | 2020-09-29 | Tinker Pte. Ltd. | Systems and methods for interacting with nearby people and devices |
US11334252B2 (en) * | 2016-08-08 | 2022-05-17 | Dynalink Technologies, Llc | Dynamic data communication in an encapsulated area |
US10169056B2 (en) * | 2016-08-31 | 2019-01-01 | International Business Machines Corporation | Effective management of virtual containers in a desktop environment |
JP2018085069A (en) * | 2016-11-25 | 2018-05-31 | 富士通株式会社 | Information reception terminal, information distribution system, display method and display program |
US10353663B2 (en) * | 2017-04-04 | 2019-07-16 | Village Experts, Inc. | Multimedia conferencing |
US11055800B2 (en) * | 2017-12-04 | 2021-07-06 | Telcom Ventures, Llc | Methods of verifying the onboard presence of a passenger, and related wireless electronic devices |
GB2569651A (en) * | 2017-12-22 | 2019-06-26 | Veea Systems Ltd | Edge computing system |
US11126393B1 (en) * | 2018-04-20 | 2021-09-21 | Quizzit, Inc. | Card products utilizing thin screen displays |
US11290530B2 (en) * | 2018-06-01 | 2022-03-29 | Apple Inc. | Customizable, pull-based asset transfer requests using object models |
US10728587B2 (en) | 2018-06-08 | 2020-07-28 | Panasonic Avionics Corporation | Vehicle entertainment system |
US11136123B2 (en) | 2018-06-08 | 2021-10-05 | Panasonic Avionics Corporation | Methods and systems for storing content for a vehicle entertainment system |
JP6516060B1 (en) * | 2018-06-12 | 2019-05-22 | トヨタ自動車株式会社 | INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND INFORMATION PROCESSING PROGRAM |
WO2020013579A1 (en) * | 2018-07-09 | 2020-01-16 | Samsung Electronics Co., Ltd. | Method and device for retrieving content |
US11294976B1 (en) * | 2018-07-22 | 2022-04-05 | Tuple Software LLC | Ad-hoc venue engagement system |
US11038554B2 (en) * | 2018-08-10 | 2021-06-15 | Stmicroelectronics (Grenoble 2) Sas | Distributed microcontroller |
US10608936B1 (en) * | 2018-10-10 | 2020-03-31 | Intuit Inc. | Implementing data sharing features in legacy software applications |
JP2020129204A (en) * | 2019-02-07 | 2020-08-27 | キヤノン株式会社 | Communication device, its control method, and program |
US10956140B2 (en) | 2019-04-05 | 2021-03-23 | Sap Se | Software installation through an overlay file system |
US10942723B2 (en) | 2019-04-05 | 2021-03-09 | Sap Se | Format for multi-artefact software packages |
US11232078B2 (en) | 2019-04-05 | 2022-01-25 | Sap Se | Multitenancy using an overlay file system |
US10809994B1 (en) * | 2019-04-05 | 2020-10-20 | Sap Se | Declarative multi-artefact software installation |
US11113249B2 (en) | 2019-04-05 | 2021-09-07 | Sap Se | Multitenant application server using a union file system |
US10693956B1 (en) * | 2019-04-19 | 2020-06-23 | Greenfly, Inc. | Methods and systems for secure information storage and delivery |
US11917520B2 (en) | 2019-09-11 | 2024-02-27 | Carrier Corporation | Bluetooth mesh routing with subnets |
US11172240B2 (en) * | 2019-11-04 | 2021-11-09 | Panasonic Avionics Corporation | Content loading through ad-hoc wireless networks between aircraft on the ground |
US20210174354A1 (en) * | 2019-11-14 | 2021-06-10 | Horus Foster, Inc. | Anonymous peer-to-peer payment system |
KR20210101496A (en) * | 2020-02-10 | 2021-08-19 | 삼성전자주식회사 | Method for communication based on state of external electronic apparatus and electronic appratus thereof |
US11706826B2 (en) * | 2020-09-30 | 2023-07-18 | Panasonic Avionics Corporation | Methods and systems for deploying a portable computing device on a transportation vehicle |
US11171964B1 (en) * | 2020-12-23 | 2021-11-09 | Citrix Systems, Inc. | Authentication using device and user identity |
TWI750973B (en) * | 2020-12-25 | 2021-12-21 | 扉睿科技股份有限公司 | Internet of things system based on security orientation and group sharing |
US11200306B1 (en) | 2021-02-25 | 2021-12-14 | Telcom Ventures, Llc | Methods, devices, and systems for authenticating user identity for location-based deliveries |
US20230140819A1 (en) * | 2021-10-28 | 2023-05-04 | The Boeing Company | Peer-to-peer data sharing for reduced network demand |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20130238702A1 (en) * | 2012-01-06 | 2013-09-12 | Qualcomm Incorporated | Wireless display with multiscreen service |
US20150172584A1 (en) * | 2012-09-25 | 2015-06-18 | Samsung Electronics Co., Ltd. | Method for transmitting image and electronic device thereof |
US20150180916A1 (en) * | 2013-12-20 | 2015-06-25 | Samsung Electronics Co., Ltd. | Portable apparatus and method for sharing content thereof |
US9224364B2 (en) * | 2010-01-12 | 2015-12-29 | Apple Inc. | Apparatus and method for interacting with handheld carrier hosting media content |
Family Cites Families (41)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6539393B1 (en) * | 1999-09-30 | 2003-03-25 | Hill-Rom Services, Inc. | Portable locator system |
US7092965B2 (en) * | 2002-07-09 | 2006-08-15 | Lightsurf Technologies, Inc. | System and method for improved compression of DCT compressed images |
US7386276B2 (en) * | 2002-08-27 | 2008-06-10 | Sama Robert J | Wireless information retrieval and content dissemination system and method |
WO2005107389A2 (en) * | 2004-05-03 | 2005-11-17 | Mac Ventures Group, Inc. | Processing of trade show information |
US20060200570A1 (en) | 2005-03-02 | 2006-09-07 | Nokia Corporation | Discovering and mounting network file systems via ad hoc, peer-to-peer networks |
US10003762B2 (en) * | 2005-04-26 | 2018-06-19 | Invention Science Fund I, Llc | Shared image devices |
US20060291412A1 (en) * | 2005-06-24 | 2006-12-28 | Naqvi Shamim A | Associated device discovery in IMS networks |
KR100813972B1 (en) * | 2006-03-08 | 2008-03-14 | 삼성전자주식회사 | Client apparatus and method for streaming contents and computer readable recording medium storing program for performing the method |
US8086535B2 (en) | 2006-04-04 | 2011-12-27 | Apple Inc. | Decoupling rights in a digital content unit from download |
US20080051033A1 (en) * | 2006-08-28 | 2008-02-28 | Charles Martin Hymes | Wireless communications with visually- identified targets |
US8458363B2 (en) | 2008-06-08 | 2013-06-04 | Apple Inc. | System and method for simplified data transfer |
US8526885B2 (en) | 2008-09-30 | 2013-09-03 | Apple Inc | Peer-to-peer host station |
JP5999645B2 (en) * | 2009-09-08 | 2016-10-05 | ロンギチュード エンタープライズ フラッシュ エスエイアールエル | Apparatus, system, and method for caching data on a solid state storage device |
US8417777B2 (en) * | 2009-12-11 | 2013-04-09 | James W. Hutchison | Apparatus for signaling circle of friends |
US20110163944A1 (en) | 2010-01-05 | 2011-07-07 | Apple Inc. | Intuitive, gesture-based communications with physics metaphors |
US11182455B2 (en) * | 2011-01-29 | 2021-11-23 | Sdl Netherlands B.V. | Taxonomy driven multi-system networking and content delivery |
US10387836B2 (en) * | 2015-11-24 | 2019-08-20 | David Howard Sitrick | Systems and methods providing collaborating among a plurality of users |
US10402485B2 (en) * | 2011-05-06 | 2019-09-03 | David H. Sitrick | Systems and methodologies providing controlled collaboration among a plurality of users |
US11165963B2 (en) * | 2011-06-05 | 2021-11-02 | Apple Inc. | Device, method, and graphical user interface for accessing an application in a locked device |
KR101814810B1 (en) | 2011-08-08 | 2018-01-04 | 삼성전자주식회사 | Method and apparatus for wi-fi p2p group formation using wi-fi direct |
KR101814120B1 (en) * | 2011-08-26 | 2018-01-03 | 에스프린팅솔루션 주식회사 | Method and apparatus for inserting image to electrical document |
US9185248B2 (en) * | 2012-02-29 | 2015-11-10 | Blackberry Limited | Method and device for sharing a camera feature |
US8838697B2 (en) | 2012-03-08 | 2014-09-16 | Apple Inc. | Peer-to-peer file transfer between computer systems and storage devices |
US9195473B2 (en) | 2012-04-05 | 2015-11-24 | Blackberry Limited | Method for sharing an internal storage of a portable electronic device on a host electronic device and an electronic device configured for same |
US9445267B2 (en) | 2012-08-31 | 2016-09-13 | Apple Inc. | Bump or close proximity triggered wireless technology |
CN103795747A (en) | 2012-10-30 | 2014-05-14 | 中兴通讯股份有限公司 | File transfer method and device through Wi-Fi Direct |
KR20140080726A (en) * | 2012-12-14 | 2014-07-01 | 한국전자통신연구원 | Apparatus and Method for Remote Control using Dynamic Script |
US9195689B2 (en) * | 2013-02-19 | 2015-11-24 | Business Objects Software, Ltd. | Converting structured data into database entries |
US10243786B2 (en) | 2013-05-20 | 2019-03-26 | Citrix Systems, Inc. | Proximity and context aware mobile workspaces in enterprise systems |
US9853719B2 (en) | 2013-06-09 | 2017-12-26 | Apple Inc. | Discovery of nearby devices for file transfer and other communications |
US9762562B2 (en) | 2013-09-13 | 2017-09-12 | Facebook, Inc. | Techniques for multi-standard peer-to-peer connection |
EP3069546A1 (en) | 2013-12-18 | 2016-09-21 | Apple Inc. | Gesture-based information exchange between devices in proximity |
US20150201443A1 (en) * | 2014-01-10 | 2015-07-16 | Qualcomm Incorporated | Point and share using ir triggered p2p |
US20150230078A1 (en) | 2014-02-10 | 2015-08-13 | Apple Inc. | Secure Ad Hoc Data Backup to Nearby Friend Devices |
US10387004B2 (en) * | 2014-04-17 | 2019-08-20 | Jimmy Albert | Real time monitoring of users within a predetermined range and selective receipt of virtual cards |
WO2016064106A1 (en) * | 2014-10-22 | 2016-04-28 | 삼성전자 주식회사 | Mobile device comprising stylus pen and operation method therefor |
US9769564B2 (en) * | 2015-02-11 | 2017-09-19 | Google Inc. | Methods, systems, and media for ambient background noise modification based on mood and/or behavior information |
EP3259929A4 (en) * | 2015-02-16 | 2018-10-17 | Nokia Technologies Oy | Service discovery |
US9923941B2 (en) * | 2015-11-05 | 2018-03-20 | International Business Machines Corporation | Method and system for dynamic proximity-based media sharing |
US10917767B2 (en) | 2016-03-31 | 2021-02-09 | Intel Corporation | IOT device selection |
US10791172B2 (en) * | 2016-07-22 | 2020-09-29 | Tinker Pte. Ltd. | Systems and methods for interacting with nearby people and devices |
- 2017
- 2017-07-21 US US15/655,934 patent/US10791172B2/en active Active
- 2019
- 2019-02-14 US US16/275,615 patent/US10742729B2/en active Active
- 2020
- 2020-05-28 US US16/885,565 patent/US10951698B2/en active Active
- 2020-11-16 US US17/098,747 patent/US11019141B2/en active Active
- 2021
- 2021-04-09 US US17/226,260 patent/US11115467B2/en active Active
- 2021-08-02 US US17/391,219 patent/US11265373B2/en active Active
- 2022
- 2022-01-14 US US17/575,698 patent/US20220141285A1/en active Pending
- 2023
- 2023-07-17 US US18/222,704 patent/US20230362242A1/en active Pending
- 2023-07-17 US US18/222,699 patent/US20230412677A1/en active Pending
- 2023-07-17 US US18/222,706 patent/US20230362243A1/en active Pending
Also Published As
Publication number | Publication date |
---|---|
US10951698B2 (en) | 2021-03-16 |
US20210360063A1 (en) | 2021-11-18 |
US20210067586A1 (en) | 2021-03-04 |
US20190182319A1 (en) | 2019-06-13 |
US20230362243A1 (en) | 2023-11-09 |
US20200296157A1 (en) | 2020-09-17 |
US11115467B2 (en) | 2021-09-07 |
US20180027070A1 (en) | 2018-01-25 |
US10791172B2 (en) | 2020-09-29 |
US11019141B2 (en) | 2021-05-25 |
US11265373B2 (en) | 2022-03-01 |
US20220141285A1 (en) | 2022-05-05 |
US20210227025A1 (en) | 2021-07-22 |
US20230412677A1 (en) | 2023-12-21 |
US10742729B2 (en) | 2020-08-11 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20230362242A1 (en) | Direct input from a nearby device | |
US11863537B2 (en) | Systems, methods, and media for a cloud based social media network | |
US9280545B2 (en) | Generating and updating event-based playback experiences | |
US9143601B2 (en) | Event-based media grouping, playback, and sharing | |
US8473429B2 (en) | Managing personal digital assets over multiple devices | |
US20150033153A1 (en) | Group interaction around common online content | |
US20160078582A1 (en) | Sharing Media | |
US20140270571A1 (en) | Shuffle algorithm and navigation | |
JP2017502436A (en) | System and method for generating a shared virtual space | |
WO2016106088A1 (en) | Ubiquitous content access and management | |
KR102108849B1 (en) | Systems and methods for multiple photo feed stories | |
JP6215359B2 (en) | Providing access to information across multiple computing devices | |
KR101519421B1 (en) | System for sharing picture image and video contents | |
US20190286827A1 (en) | Unified storage management | |
US20140013193A1 (en) | Methods and systems for capturing information-enhanced images | |
US20160080439A1 (en) | Media Sharing Device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
STPP | Information on status: patent application and granting procedure in general |
Free format text: NON FINAL ACTION MAILED |
STPP | Information on status: patent application and granting procedure in general |
Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER |