US20170300513A1 - Content Clustering System and Method - Google Patents
Content Clustering System and Method
- Publication number
- US20170300513A1 (Application US15/475,867)
- Authority
- US
- United States
- Prior art keywords
- content
- user
- app
- data
- metadata
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
- G06F17/30268—
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/40—Information retrieval; Database structures therefor; File system structures therefor of multimedia data, e.g. slideshows comprising image and additional audio data
- G06F16/43—Querying
- G06F16/435—Filtering based on additional data, e.g. user or group profiles
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/50—Information retrieval; Database structures therefor; File system structures therefor of still image data
- G06F16/58—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
- G06F16/5866—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using information manually generated, e.g. tags, keywords, comments, manually generated location and time information
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/23—Clustering techniques
- G06K9/6218—
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/18—Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
- H04N7/183—Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a single remote source
Definitions
- the present application relates to the field of computerized systems that analyze content on mobile devices for the purpose of clustering content together.
- An embodiment of the present invention creates implicit content on a mobile device by monitoring and recording input from sensors on the device. This embodiment also analyzes metadata from the implicit content and metadata from explicit content created by a user for the purpose of creating content clusters, which are confirmed by the user as actual events. Events can then be grouped according to metadata and event information into a presentation grouping.
- FIG. 1 is a schematic diagram showing a mobile device and a plurality of servers communicating over a network.
- FIG. 2 is a schematic diagram showing an application accepting input to form a content cluster.
- FIG. 3 is a schematic diagram showing content being clustered by a media organization app.
- FIG. 4 is a schematic diagram showing content clusters being confirmed as events through a user interface.
- FIG. 5 is a schematic diagram showing events being clustered into a presentation grouping by the media organization app.
- FIG. 6 is a flow chart showing a method for generating implicit content.
- FIG. 7 is a flow chart showing a method for content clustering.
- FIG. 8 is a flow chart showing a method for the grouping of events into presentation groupings.
- FIG. 1 shows a mobile device 100 utilizing one embodiment of the present invention.
- the mobile device 100 can communicate over a wide area network 170 with a plurality of computing devices.
- the mobile device 100 communicates with a media organization server 180 , a global event database server 190 , one or more cloud content servers 192 , and a third-party information provider server 194 .
- the mobile device 100 can take the form of a smart phone or tablet computer.
- the device 100 will include a display 110 for displaying information to a user, a processor 120 for processing instructions and data for the device 100 , a memory 130 for storing processing instructions and data, and one or more user input interfaces 142 to allow the user to provide instructions and data to the mobile device 100 .
- the display 110 can use LCD, OLED, or similar technology to provide a color display for the user. In some embodiments, the display 110 incorporates touchscreen capabilities so as to function as a user input interface 142.
- the processor 120 can be a general purpose CPU, such as those provided by Intel Corporation (Mountain View, Calif.) or Advanced Micro Devices, Inc. (Sunnyvale, Calif.), or a mobile-specific processor, such as those designed by ARM Holdings (Cambridge, UK).
- Mobile devices such as device 100 generally use specific operating systems designed for such devices, such as iOS from Apple Inc. (Cupertino, Calif.) or ANDROID OS from Google Inc. (Menlo Park, Calif.).
- the operating systems are stored on the memory 130 and are used by the processor 120 to provide a user interface for the display 110 and user input devices 142 , handle communications for the device 100 , and to manage applications (or apps) that are stored in the memory 130 .
- the memory 130 is shown in FIG. 1 with two different types of apps, namely content creation apps 132 and a media organization app 134 .
- the content creation apps 132 are apps that create explicit media content 136 in the memory 130 , and include video creation apps, still image creation apps, and audio recording apps.
- the media organization app 134 creates implicit content 138 .
- the media organization app 134 is responsible for gathering the different types of explicit media content 136 and the implicit content 138 (referred to together as content 140 ), analyzing the content 140 , and then organizing the content 140 into clusters, events, and presentation groupings that are stored in media organization data 139 as described below.
- the mobile device 100 communicates over the network 170 through one of two network interfaces, namely a Wi-Fi network interface 144 and a cellular network interface 146 .
- the Wi-Fi network interface 144 connects the device 100 to a local wireless network that provides connection to the wide area network 170 .
- the Wi-Fi network interface 144 preferably connects via one of the Institute of Electrical and Electronics Engineers' (IEEE) 802.11 standards.
- the local network is based on TCP/IP, and the Wi-Fi network interface includes TCP/IP protocol stacks.
- the cellular network interface 146 communicates over a cellular data network. The provider of the cellular data network then provides an interface to the wide area network 170 .
- the wide area network 170 is the Internet.
- the mobile device 100 uses sensors 150 for a variety of purposes on the device 100 .
- the sensors 150 provide the means to create media content 136 .
- the content creation apps 132 respond to signals from the user input 142 to capture media content 136 using the camera sensor 152 and the microphone 154 .
- These types of media content 136 are known as “explicit media content” because the user has explicitly requested that the mobile device 100 capture and store this media content 136 .
- a user might instruct a photo taking app 132 to take a still photograph using the camera 152 , or to stitch together a stream of input from the camera sensor 152 into a panorama image that is stored as explicit media content 136 .
- a movie app 132 might record input from the camera 152 and microphone 154 sensors as a video file 136 .
- a voice memo app 132 might record input from the microphone sensor 154 to create an audio media content file 136 .
- these content creation apps 132 respond to an explicit request from a user to create the media content 136 .
- the explicit media content 136 is stored as a file or a data record in the memory 130 of the mobile device 100 .
- This file or data record includes both the actual content recorded by the sensors 150 and metadata associated with that recording.
- the metadata will include the date and time at which the media content 136 was recorded, as determined by the clock 156 . Frequently, the metadata also includes a geographic location where the media content 136 was created.
- the geographic location can be determined from the GPS sensor 158 , or by using other location identifying techniques such as identifying nearby Wi-Fi networks using the Wi-Fi Network Interface 144 , or through nearby cell tower identification using the cellular network interface 146 .
- Some content creation apps 132 will include facial recognition capabilities in order to tag the identity of individuals within a photo or video file 136 .
- Other apps 132 will allow a user to manually tag their files 136 so as to identify the individuals (or “participants”) portrayed in those media files 136. These identity tags can then be added to the metadata stored with the media content file 136 in memory 130.
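Taken together, the file-plus-metadata record described above can be pictured as a small data structure. The following is a minimal sketch, not the patent's implementation; the type and field names (ExplicitContent, MediaMetadata, and so on) are illustrative assumptions:

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import Optional

@dataclass
class MediaMetadata:
    """Metadata stored with explicit media content 136 (field names are assumptions)."""
    created_at: datetime                     # from the device clock 156
    latitude: Optional[float] = None         # from GPS 158, Wi-Fi, or cell-tower lookup
    longitude: Optional[float] = None
    participants: list[str] = field(default_factory=list)  # facial-recognition or manual tags

@dataclass
class ExplicitContent:
    """A file or data record holding both the recorded content and its metadata."""
    payload_path: str      # location of the photo, video, or audio data in memory 130
    media_type: str        # "photo", "video", or "audio"
    metadata: MediaMetadata
```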
- the explicit media content 136 will be stored remotely on a cloud content server 192 .
- all photographs taken by the camera 152 may be stored in memory 130 as explicit media content 136 and may also be transmitted over one of the network interfaces 144 , 146 to the cloud content server 192 .
- the locally stored explicit media content 136 may be temporary in nature, with permanent storage provided on the cloud content server 192 .
- the cloud content server 192 will be provided by a third party, such as the FLICKR service provided by Yahoo! Inc. of Sunnyvale, Calif.
- the media organization app 134 creates implicit content 138 by monitoring the sensors 150 on the mobile device 100 and storing related data as implicit content 138 when it monitors an interesting change in the sensors 150 .
- the media organization app 134 might be monitoring the GPS sensor 158 and accelerometer 160 during a family driving vacation from Chicago, Ill. to Yellowstone National Park in Wyoming.
- the accelerometer 160 can indicate when the family car stops, and then determine the location of the mobile device 100 using the GPS sensor 158 .
- the media organization app 134 can determine that the car was stopped during this family vacation for 3 hours, 15 minutes in Wall, S. Dak. This data could be stored as implicit content 138 in the memory 130 .
- the app 134 may also use one of the network interfaces 144, 146 to obtain additional information about this implicit content 138.
- the app 134 may contact a global event database server 190 that contains information about a great number of events (or “occurrences”).
- This type of database server 190, which is provided by several third parties over the Internet 170, allows users to specify a geographic location and a time, and the server 190 will respond with information about occurrences happening near that location around that time.
- the information returned from the global event database server will generally include a title for the occurrence, a description for that occurrence, a time period during which that occurrence takes place, and an exact physical location for that occurrence.
- the app 134 may inquire whether there are any events happening in Wall at the time the vehicle was stopped.
- the event database server 190 may indicate that at this time, a parade was happening in downtown Wall.
- the app 134 may also make inquiries from different information provider servers 194 , such as a server 194 that provides weather information for a particular geographic location.
- the media organization app 134 would be able to create implicit content 138 indicating that from 12:15 to 3:30 pm on Jul. 4, 2013, the user of the mobile device 100 stopped in Wall, S. Dak. and witnessed a parade in sunny, 92 degree weather.
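The Wall, S. Dak. example can be summarized in code. Below is a hedged sketch of how a detected stop might be enriched into implicit content 138; the event_db and weather_svc objects stand in for network calls to servers 190 and 194 and are assumptions, as are all field names:

```python
from datetime import datetime

def build_implicit_content(stop_start: datetime, stop_end: datetime,
                           lat: float, lon: float, event_db, weather_svc) -> dict:
    """Enrich a detected stop with external data (illustrative sketch only)."""
    record = {"type": "implicit", "start": stop_start, "end": stop_end,
              "location": (lat, lon)}
    # Ask the global event database 190 for occurrences near this place and time.
    occurrence = event_db.lookup(lat=lat, lon=lon, when=stop_start)
    if occurrence:
        record["event_title"] = occurrence["title"]            # e.g., a parade
        record["event_description"] = occurrence["description"]
    # Ask an information provider 194 for the weather during the stop.
    record["weather"] = weather_svc.report(lat=lat, lon=lon, when=stop_start)
    return record
```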
- the media organization app 134 can take advantage of any of the sensors 150 on the mobile device 100 , including the camera 152 , microphone 154 , clock 156 , GPS sensor 158 , accelerometer 160 , gyroscope 162 , ambient light sensor 164 , and proximity sensor 166 .
- the app 134 can define monitoring modes that determine the extent to which it monitors the various sensors 150. For instance, in one monitoring mode the app 134 could provide reverse geocoding by periodically (or continually) recording a location for the user from the GPS sensor 158. In another mode, the app 134 could monitor the accelerometer to indicate when the user is moving or has stopped moving. In a third mode, the app 134 could periodically monitor the microphone 154.
- if no interesting noises are detected, the app 134 would wait for the next interval before it again monitored the microphone 154. If interesting noises were detected (e.g., noises that were characteristic of human voices), the app 134 could record a small amount of the conversation and record it as implicit content 138 in memory 130, along with the time and location at which the conversation was recorded.
- the use of another app, such as one of the content creation apps 132, triggers the creation of an implicit content file 138.
- the use of a photo or movie app 132 may cause the media organization app 134 to record the GPS location, the current weather, and the current event, if any, noted by the global event database server 190 .
- the app 132 in this fourth mode may record sounds from the microphone 154 to capture conversations between the user of the mobile device 100 and her photography subjects. These conversations would be stored as implicit content 138 in memory 130 .
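The monitoring modes described above amount to a table of sensors, sampling intervals, and trigger predicates. A minimal sketch of such a mode definition follows; the class name, sample-dictionary keys, and intervals are all assumptions:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class MonitoringMode:
    """One monitoring mode: which sensors to poll, how often, and what is 'interesting'."""
    name: str
    sensors: list[str]               # e.g., ["gps"], ["accelerometer"], ["microphone"]
    poll_interval_s: int             # sampling period in seconds
    trigger: Callable[[dict], bool]  # decides whether a sample warrants recording

MODES = [
    MonitoringMode("reverse_geocoding", ["gps"], 60, lambda s: True),
    MonitoringMode("motion_change", ["accelerometer"], 5,
                   lambda s: s["moving"] != s["was_moving"]),
    MonitoringMode("conversation", ["microphone"], 300,
                   lambda s: s["voice_detected"]),
    # The fourth mode is not polled at all: it fires when a content
    # creation app 132 is used, so it would hook app events instead.
]
```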
- the media organization app 134 collects the content 140 from the memory 130 (and from cloud content servers 192 ) and organizes the content 140 into content clusters.
- Content clusters are groups of content 140 that are grouped together as belonging to a particular occurrence or event. As described below, content clusters are presented to the user for modification and verification, after which the content groupings are referred to as user-verified events. Events may involve numerous elements of content 140 , or may involve only a single element of content 140 .
- the content clusters and events are stored in media organization data 139 .
- the content clusters and events could be stored on a media organization server 180 accessible by the mobile device 100 over the network 170 .
- the media organization server 180 contains a programmable digital processor 182 , such as a general purpose CPU manufactured by Intel Corporation (Mountain View, Calif.) or Advanced Micro Devices, Inc. (Sunnyvale, Calif.).
- the server 180 further contains a wireless or wired network interface 184 to communicate with remote computing devices, such as mobile device 100 , over the network 170 .
- the processor 182 is programmed using a set of software instructions stored on a non-volatile, non-transitory, computer readable medium 186 , such as a hard drive or flash memory device.
- the software typically includes operating system software, such as LINUX (available from multiple companies under open source licensing terms) or WINDOWS (available from Microsoft Corporation of Redmond, Wash.).
- the processor 182 performs the media organization functions of server 180 under the direction of application programming 187 .
- Each user of the server 180 is separately defined and identified in the user data 188 .
- the media organization app 134 can assist the user in creating an account on the media organization server 180 .
- the account can require a username and password to access user content 189 that is stored on the server 180 on behalf of the users identified in data 188 .
- the media organization server 180 can operate behind the media organization app 134 , meaning that the user of the mobile device 100 need only access the server 180 through the user interface provided by the app 134 .
- the media organization server 180 can provide a web-based interface to the user content 189 , allowing a user to access and manipulate the user content 189 on any computing device with web access to the Internet 170 . This allows users to organize their user content 189 and format presentations of that data 189 via any web browser.
- the media organization server 180 contains information about content clusters and events created by a number of users, this server 180 can easily create its own database of past occurrences and events that could be useful to the media organization app 134 when clustering media. For instance, a first user could cluster media about a parade that they witnessed between 12:30 and 1:30 pm in Wall, S. Dak. on Jul. 4, 2013. The user could verify this cluster as a user-verified event, and could add a title and description to the event. This data would then be uploaded to the user data 188 on server 180 . At a later time, a mobile device 100 of a second user could make an inquiry to the media organization server 180 about events that occurred in downtown Wall, S. Dak. at 1 pm on Jul. 4, 2013.
- the server 180 could identify this time and location using the event created by the previous user, and return the title and description of the event to the mobile device 100 of the second user.
- the media organization server 180 could become a crowd-sourced event database server providing information similar to that provided by server 190 (except likely limited to past and not future events).
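The crowd-sourced lookup just described reduces to matching a queried time and place against stored user-verified events. A self-contained sketch follows, with an assumed event-record layout and an ordinary haversine distance:

```python
import math
from datetime import datetime

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometers between two (lat, lon) points."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * 6371.0 * math.asin(math.sqrt(a))

def find_past_events(events: list[dict], lat: float, lon: float,
                     when: datetime, radius_km: float = 1.0) -> list[dict]:
    """Return titles/descriptions of stored user-verified events near a time and place."""
    return [{"title": ev["title"], "description": ev["description"]}
            for ev in events
            if ev["start"] <= when <= ev["end"]
            and haversine_km(lat, lon, ev["lat"], ev["lon"]) <= radius_km]
```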
- FIG. 2 schematically illustrates the interaction of the media organization app 134 with content 140 and the other inputs that allow the media organization app 134 to create content clusters.
- the content 140 is found in the physical memory 130 of the mobile device 100 .
- this data 140 is found on “the cloud” 200 , meaning that the data is stored on remote servers 180 , 192 accessible by the mobile device 100 over network 170 .
- the dual possible locations for this content 140 are shown in FIG. 2 by locating the data 140 both within memory box 130 and the dotted cloud storage box 200.
- the explicit media content 136 shown in FIG. 2 includes video content 222 , photo content 232 , and audio content 242 .
- the video content 222 is created by a video app 220 running on the processor 120 of the mobile device 100 .
- when the video content 222 is created, it is stored along with metadata 224 that describes the video content 222, including such information as when and where the video was created.
- a photo app 230 creates the photo content 232 and its related metadata 234.
- a voice recording app 240 creates audio content 242 and metadata 244 .
- These three apps 220 , 230 , 240 may be standard apps provided along with the mobile operating system when the user purchased the mobile device 100 .
- the data 222 , 232 , 242 from these apps 220 , 230 , 240 are stored in known locations in the local memory 130 or on the cloud data system 200 .
- Third party or specialty apps 250 , 260 can also create explicit content 136 that is accessed by the media organization app 134 .
- the first specialty app 250 creates both photo content 232 and audio content 242 , and stores this data 232 , 242 and related metadata 234 , 244 in the same locations in memory 130 where the standard apps 230 , 240 provided with the device 100 store similar data.
- the second specialty app 260 also creates explicit media content 262 and related metadata 264 , but this content 262 is not stored in the standard locations in memory 130 . However, as long as the media organization app 134 is informed of the location of this specialty app content 262 on memory 130 , such content 262 can also be organized by the app 134 .
- in addition to the explicit content 222-262, the media organization app 134 also organizes implicit content 138 and its metadata 274. In one embodiment, this implicit content 138 is created by the same app 134 that organizes the content 140 into content clusters. In other embodiments, the media organization app 134 is split into two separate apps, with one app monitoring the sensors 150 and creating implicit content 138, and the other app 134 being responsible for organizing content 140.
- FIG. 2 also shows a calendar app 210 creating calendar data 212 on the mobile device 100 .
- this data can be used by the media organization app 134 as it arranges content 140 into content clusters.
- the calendar data 212 may have explicit descriptions describing where the user was scheduled to be at a particular time.
- the media organization app 134 can use this data to develop a better understanding about how to organize content 140 that was acquired at that same time.
- the app 134 also receives additional information about occurrences and events from the global event database server 190 and the crowd-sourced event data from the media organization server 180 .
- the data from these sources 180 , 190 is also very useful to the app 134 as it organizes the content 140 .
- the app 134 accesses all this content 140 from the same locations in which the data was originally stored by the creating apps 210-260, and organizes it into content clusters using additional data from servers 180 and 190.
- the content 140 is organized based primarily on the metadata 224 , 234 , 244 , 254 , 264 , and 274 that was attached to the content 140 by the app that created the content 140 .
- the media organization app 134 can augment the metadata. For instance, the app 134 could use facial recognition (or voice recognition) data 280 available on the mobile device 100 or over the network 170 to identify participants in the content 140 .
- Such recognition can occur using the processor 120 of the mobile device, but in most cases it is more efficient to use the processing power of a cloud content server 192 or the media organization server 180 to perform this recognition. Regardless of where it occurs, any matches to known participants will be used by the app 134 to organize the content 140 .
- FIG. 3 shows an example of one embodiment of a media organization app 300 organizing a plurality of items 310 - 370 into two content clusters 380 , 390 .
- there are three items of explicit content, namely content one 310, content two 320, and content three 330.
- Content one 310 is associated with three items of metadata 312 - 316 , which indicate that content one 310 was acquired at time “Time 1” ( 312 ), at location “Loc. 1” ( 314 ), and that participants A and B participate in this content (metadata 316 ).
- Content one 310 could be, for example, a photograph of A & B, taken at Time 1 and Loc. 1.
- the metadata 322 - 326 for content two 320 indicates that it was acquired at time “Time 1.2” (slightly later than time “Time 1”), location “Loc. 1.1” (close to but not the same as “Loc. 1”), and included participants A & C.
- the metadata for content three 330 indicates only that it occurred at time “Time 2.1”.
- the media organization app 300 is also organizing one implicit content item 340 , which has metadata indicating that it was taken at time “Time 2” and location “Loc. 1”.
- the media organization app 300 has also obtained data 350 from one of the event database servers 180 , 190 .
- This data 350 indicates (through metadata 352 - 356 ) that an event with a description of “Descr. 1” occurred at location “Loc. 1” for the duration of “Time 1-1.2”.
- the app 300 pulled relevant information from the calendar data 212 and discovered two relevant calendar events.
- the first calendar item 360 indicates that the user was to be at an event with a title of “Title 1” at time “Time 1”, while the second calendar item 370 describes an event with a title of “Title 1” at time “Time 2”.
- the media organization app 300 gathers all of this information 310 - 370 together and attempts to organize the information 310 - 370 into content clusters.
- the app 300 identified a first cluster 380 consisting of explicit content one 310 , explicit content two 320 , event database information 350 , and calendar item one 360 .
- the media organization app 300 grouped these items of data 310 , 320 , 350 , 360 primarily using time and location information.
- the app 300 recognized that each of these items occurred at a similar time between “Time 1” and “Time 1.2”.
- the location was either “Loc. 1” or close by location “Loc. 1.1”.
- calendar data 212 or data from event databases 180 , 190 will identify not just a single time but an actual time duration.
- the calendar data 212 may indicate that a party was scheduled from 6 pm to 2 am. Based on this duration information, the media organization app 300 will be more likely to cluster content from 6 pm and content at 1 am as part of the same event.
- the calendar data 212 may identify a family camping trip that lasts for two days and three nights, which might cause the app 300 to group all content from that duration as a single event.
- the media organization app 300 identifies items 310 , 320 , 350 , 360 as being part of the cluster 380 , it stores this information in media organization data 139 on the mobile device 100 . This information may also be stored in the user content 189 stored for the user on the media organization server 180 .
- the information about cluster 380 not only identifies items of data 310 , 320 , 350 , 360 , as belonging to the cluster, but also aggregates the metadata from these items into metadata 382 for the entire content cluster 380 .
- This metadata 382 includes metadata from the explicit content 310-320, which indicated that the content within this cluster 380 occurred during the time duration of “Time 1-1.2” and at location “Loc. 1”.
- the metadata from content 310 and 320 also indicated that this content involved participants A, B, and C.
- the content cluster metadata 382 can also indicate that this content relates to an event with the title “Title 1” having a description “Descr. 1”.
- the second content cluster 390 grouped together explicit content 330, implicit content 340, and calendar item two 370 primarily because these items 330, 340, 370 all occurred at time “Time 2” or soon thereafter (“Time 2.1”) and indicated either that they occurred at the same location (“Loc. 1”) or did not indicate a location at all.
- the cluster metadata 392 for this content cluster 390 indicates the time frame (“Time 2-2.1”) and location (“Loc. 1”) taken from the explicit content 330 and the implicit content 340 .
- the metadata 392 also includes the title “Title 1” from calendar item two 370, which was linked with the other items 330, 340 by the common time frame.
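The aggregation of item metadata into cluster metadata (such as 382 and 392) can be sketched as a simple reduction over the clustered items; all dictionary keys here are assumptions:

```python
def aggregate_cluster_metadata(items: list[dict]) -> dict:
    """Build cluster-level metadata from the metadata of the clustered items."""
    times = [i["time"] for i in items if i.get("time") is not None]
    locations = {i["location"] for i in items if i.get("location")}
    participants = set()
    for i in items:
        participants.update(i.get("participants", []))
    titles = [i["title"] for i in items if i.get("title")]
    return {
        "time_range": (min(times), max(times)),      # e.g., "Time 2-2.1"
        "location": locations.pop() if len(locations) == 1 else sorted(locations),
        "participants": sorted(participants),        # e.g., A, B, and C
        "title": titles[0] if titles else None,      # e.g., from a calendar item
    }
```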
- An important feature of this embodiment of the present invention is that the clustering of content 380 , 390 is done automatically without user involvement.
- the user only needs to create explicit content 136 with their mobile device 100 using their normal content creation apps 132 . These apps 132 save their explicit content 136 as usual.
- the media organization app 300 can run in the background creating implicit content 138 (pursuant to earlier user instructions or preference settings).
- the media organization app 300 gathers the content 140, makes inquiries from external event databases 180, 190, examines the user calendar data 212, and then creates content clusters 380, 390 for the user.
- this clustering can occur at a later time, such as when the media organization app 300 is opened by the user and the user requests that the content clustering step occur. Alternatively, the clustering can occur periodically in the background. For instance, the user may request through preference settings that the content clustering and database inquiries take place every night between midnight and two a.m., but only when the mobile device 100 is plugged into a power source.
- the media organization app 300 preferably gives the user the right to affirm or correct these clusters 380 , 390 .
- content cluster one 380 , cluster two 390 , and a third content cluster 410 are presented to a user through a user interface, represented in FIG. 4 by element 400 .
- the user interface 400 presents these clusters 380 , 390 , 410 and their contents for the user to review.
- the user can confirm a cluster as accurate and complete, as this user did with content cluster one 380 .
- once confirmed, the media organization app 300 will consider the cluster to be a user-confirmed event, such as event one 420 shown in FIG. 4. Note that event one 420 contains the same metadata 382 that the content cluster 380 had before it was confirmed.
- the media organization app 300 created separate clusters 390, 410, with cluster two 390 occurring at time “Time 2” and cluster three 410 occurring at time “Time 2.5.” While the app 300 viewed these time frames as different enough to create two separate clusters 390, 410, the user in FIG. 4 chose to combine the separate clusters 390, 410 into a single user-confirmed event two 430.
- the metadata 432 for event two 430 includes a time frame “Time 2-2.5” derived from the metadata 392 , 412 of both of the original content clusters 390 , 410 .
- the event two metadata 432 also can contain user added additions, such as the user description 433 of this event 430 .
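Combining clusters into a user-confirmed event, as the user did with clusters 390 and 410, mostly amounts to merging item lists and widening the time frame. A hedged sketch, reusing the cluster dictionaries from the previous example:

```python
def combine_clusters(clusters: list[dict], user_description: str = "") -> dict:
    """Merge clusters (e.g., 390 and 410) into one user-confirmed event (e.g., 430)."""
    event = {
        "items": [item for c in clusters for item in c["items"]],
        "time_range": (min(c["time_range"][0] for c in clusters),   # "Time 2"
                       max(c["time_range"][1] for c in clusters)),  # "Time 2.5"
        "confirmed": True,
    }
    if user_description:            # user-added metadata such as description 433
        event["description"] = user_description
    return event
```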
- Each user-defined event includes one or more content items 140 that relate to a particular event that was likely attended by the user.
- the event might be a wedding, a party with a friend, or a child's swim meet.
- by clustering the content 140 into events, the user can better appreciate that content 140.
- these events 420 , 430 are enhanced by the addition of implicit content 138 , and by the added data from calendar data 212 or one of the event databases 180 , 190 .
- a presentation grouping 500 is a grouping of two or more events according to a common subject for presentation together.
- the presentation may be a slide show, a video file, a web site, or some unique combination that combines the media from multiple events 420, 430 into a single presentation.
- Events 420 , 430 are grouped together by a common theme or subject. It is possible that some events 420 , 430 will be grouped into multiple presentation groupings 500 , while other events will not be grouped into any presentation groupings 500 .
- event one 420 is shown having title “Title 1” taken from the calendar item one 360 and event two 430 also has a title of “Title 1” taken from calendar item two 370 .
- the media organization app 300 recognizes this commonality, and then suggests that these two events 420 , 430 be combined into a single presentation grouping 500 .
- This grouping 500 contains both events 420 , 430 , and has metadata 502 taken from the metadata 422 , 432 of the two events 420 , 430 .
- metadata 502 that was shared by all events 420, 430 in the presentation grouping 500 is bolded (namely the timeframe “Time 1-2.5”, the location “Loc. 1”, and the title “Title 1”), which indicates that these elements in the metadata 502 are most likely to apply to the presentation grouping 500 as a whole.
- a user may have ten calendar entries all labeled “Third Grade Swim Meet.” Although this parent attended all ten swim meets, the parent took pictures (i.e., created explicit media content 136 ) at only six of these meets.
- the media organization app 300 will cluster this content 136 into six content clusters, with each cluster also containing a calendar entry with the same “Third Grade Swim Meet” title. Because of this commonality, the app 300 will automatically create a presentation grouping 500 containing content 136 from all six swim meets without including intervening content that is not related to the swim meets.
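Grouping events into a presentation grouping on a shared title, as in the swim meet example, can be sketched as a bucket-by-title pass; the two-event minimum is an assumption:

```python
from collections import defaultdict

def group_by_title(events: list[dict]) -> list[list[dict]]:
    """Form candidate presentation groupings 500 from events sharing a title."""
    by_title = defaultdict(list)
    for ev in events:
        if ev.get("title"):               # untitled events join no grouping here
            by_title[ev["title"]].append(ev)
    # Only a title shared by two or more events suggests a presentation grouping.
    return [evs for evs in by_title.values() if len(evs) >= 2]
```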
- these two events 420, 430 may not have been grouped in a single presentation grouping 500 if the user had not created calendar entries with the same title “Title 1” for each event. While they shared the same location (“Loc. 1”), this might not have been enough commonality for the app 300 to group the events 420, 430 together. However, if these events were swim meets and were sponsored by an organization that posted every meet in the global event database server 190, this presentation grouping 500 could still be created. As long as one item in a cluster identifies a location and another identifies a time, then the global event database server 190 should be able to identify any events that were scheduled at the same location and time. Each event 420, 430 would then include the identification of the event received from the global event server 190, and the media organization app 300 would be able to group the same events 420, 430 as a presentation grouping 500.
- another parent of a child in the third grade swim team may have created and labeled events using the media organization app 300 .
- the server 180 would now have knowledge of these swim meets.
- the media organization app 300 would query the server 180 and receive an identification of these swim meets, which would be added into their own events 420 , 430 .
- FIG. 6 shows a method 600 that is used to create implicit content 138 on the mobile device 100 .
- the method begins at step 610 , during which a user selects a particular mode to be used to monitor the sensors 150 of the mobile device 100 .
- the selected monitoring mode establishes which of the sensors 150 will be monitored by the method 600, and also establishes a trigger that will be used to start recording data.
- a walking tour mode could be established in which an accelerometer is routinely (every few seconds) measured to determine whether an individual is currently walking (or running).
- a trigger event could be defined to detect a change in the walking status of the individual (e.g., a user who was walking is now standing still, or vice versa).
- the trigger could require that the change in status last more than two minutes.
- This alternative walking tour mode would be designed to record when the user starts walking or stops walking, but would not record temporary stops (for less than two minutes). So a user that is walking down a path may meet a friend and talk for ten minutes, and then continue down the path. When the user reaches a restaurant, the user stops, has lunch, and then returns home. This mode would record when the user started walking, when the user stopped to talk to a friend, when the user started again, when the user ate lunch, when the user finished lunch and started walking home, and when the user returned home.
- This mode would not record when the user stopped to get a drink of water (because the user stopped for less than two minutes), or when the user got up at lunch to wash his hands (because the user walked for less than two minutes).
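The two-minute requirement above is a debounce on the walking state. The sketch below shows one way such a trigger could be written; the class name and the exact reporting behavior (a change is reported once it has persisted for two minutes) are assumptions:

```python
from datetime import datetime, timedelta

class WalkingTrigger:
    """Report a start/stop change only after the new state persists for two minutes."""
    def __init__(self, min_duration: timedelta = timedelta(minutes=2)):
        self.min_duration = min_duration
        self.confirmed_state = None      # last state that was held long enough
        self.candidate_state = None      # new state awaiting confirmation
        self.candidate_since = None

    def update(self, walking: bool, now: datetime) -> bool:
        """Feed one accelerometer-derived reading; True means record a change."""
        if walking != self.candidate_state:
            self.candidate_state, self.candidate_since = walking, now
        if (self.candidate_state != self.confirmed_state
                and now - self.candidate_since >= self.min_duration):
            self.confirmed_state = self.candidate_state
            return True                  # durable change: store as implicit content 138
        return False
```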
- Other modes might include a car trip mode, which would monitor an accelerometer and GPS device to record car stops that lasted longer than an hour, or a lunch conversation mode, which randomly monitors the microphone to listen for human voices and records one minute of the conversation if voices are recognized.
- the point of selecting a monitoring mode in step 610 is to ensure that the user approves of the monitoring of the sensors 150 that must be done to create implicit content 138 , and that the user desires to create this type of content 138 .
- the processor 120 will monitor the sensors 150 of the mobile device 100 at step 620 looking for a triggering event.
- the sensors 150 to be monitored and the triggering event will be determined by the selected monitoring mode. If the processor 120 detects a trigger at step 630 , the processor 120 will record data from the sensors 150 in step 640 .
- the data recorded from the sensors 150 does not have to be limited to, or even include, the sensor data that was used to detect the trigger in step 630 .
- the triggering event may be that the user took their cellular phone 100 out of their pocket. This could be determined by monitoring the accelerometer 160 and the ambient light sensor 164 .
- the processor 120 might record the location of the device 100 as indicated by the GPS sensor 158 , the current time as indicated by the clock 156 , and the next two minutes of conversation as received by the microphone 154 .
- Step 650 determines whether data from external sources are to be included as part of this implicit content 138 .
- such data may include, for example, the weather at the current location of the device 100, or the presence of mobile devices 100 belonging to friends in the general proximity. If step 650 determines that external data will be included, a request for external data is made in step 652, and the results of that request are received in step 654.
- the media organization app 134 might request local weather information from another app on the mobile device 100 or from a weather database 194 accessible over the network 170 .
- a “locate my friends” app that detects the presence of mobile devices belonging to a user's friends could be requested to identify any friends that are nearby at this time.
- the data from these apps or remote servers is received at step 654 , and combined with the data recorded from the sensors 150 at step 640 .
- a monitoring mode may establish that the data gathered after a triggering event (step 630) is always to be stored as implicit content 138.
- the monitoring mode may impose requirements before the data can be saved. For instance, the lunch conversation mode may not save the recorded audio as implicit content 138 if analysis of the recording indicates that the voices would be too muffled to be understood.
- if the condition for saving the data under the monitoring mode is met at step 660, then the data (including both sensor data recorded at step 640 and external data received at step 654) is recorded as implicit content at step 670. If step 660 determines that the condition is not met, step 670 is skipped.
- the process 600 either returns to monitoring the device sensors 150 at step 620 , or ends depending on whether additional monitoring is expected by the monitoring mode.
- FIG. 7 shows a method 700 for clustering content 140 into content clusters.
- the process 700 starts at step 705 by gathering the explicit content 136 from the memory 130 on the mobile device 100 , a cloud storage server 192 , or both.
- the implicit content 138 is gathered at step 710 , again either from memory 130 or from user content storage 189 at server 180 .
- These steps 705 , 710 may gather all information available at these data locations, or may only search for new content 140 added since the last time the app 134 organized the content 140 .
- at step 715, the media organization app 134 accesses facial or voice recognition data 280 in order to supplement the participant information found in the metadata for the gathered content 140.
- this step 715 could be skipped if participant information was already adequately found in the metadata for the content 140 , or if no participant recognition data 280 were available to the app 134 .
- the media organization app 134 analyzes the metadata for the content 140, paying particular attention to location, time, participant, and title metadata (if available) for the content 140.
- the app 134 uses the time information taken from the content 140 to analyze the calendar data 212 looking for any calendar defined events that relate to the content 140 being analyzed (step 725 ).
- the app 134 uses time and location information from the content 140 to search for occurrence information from one or more third party event databases 190 (step 730 ).
- the app 134 also makes a similar query at step 735 to the crowd-sourced event definitions maintained by the media organization server 180 . If the calendar data or the responses to the queries made in steps 730 , 735 contain data that is relevant to the content 140 being analyzed, such data will be included with the content 140 at step 740 .
- the content 140 and the relevant data from steps 725 - 735 are clustered together by comparing metadata from the content 140 and the added data.
- clusters are based primarily on similarities in time metadata.
- the app 134 attempts to group the content 140 by looking for clusters in the time metadata.
- location metadata is also examined, with the app 134 ensuring that no content cluster contains data from disparate locations.
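A minimal sketch of this clustering step 745: sort by time, cut a new cluster at large time gaps, then split any cluster whose consecutive items disagree on location. The one-hour gap and the crude nearness test are assumptions, not values from the patent:

```python
def same_place(a, b, tolerance_deg: float = 0.01) -> bool:
    """Crude nearness test on (lat, lon) pairs; the threshold is an assumption."""
    return abs(a[0] - b[0]) <= tolerance_deg and abs(a[1] - b[1]) <= tolerance_deg

def cluster_by_time(items: list[dict], max_gap_minutes: float = 60.0) -> list[list[dict]]:
    """Cluster content primarily by time, then enforce location consistency."""
    items = sorted(items, key=lambda i: i["time"])
    clusters, current = [], []
    for item in items:
        gap_ok = not current or \
            (item["time"] - current[-1]["time"]).total_seconds() / 60 <= max_gap_minutes
        if not gap_ok:
            clusters.append(current)
            current = []
        current.append(item)
    if current:
        clusters.append(current)
    # Split any cluster where consecutive located items are in disparate places.
    refined = []
    for cluster in clusters:
        split = [cluster[0]]
        for prev, item in zip(cluster, cluster[1:]):
            if prev.get("location") and item.get("location") and \
                    not same_place(prev["location"], item["location"]):
                refined.append(split)
                split = []
            split.append(item)
        refined.append(split)
    return refined
```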
- at step 750, metadata is created for the content clusters by examining the metadata from the content 140 and the additional data obtained through steps 725-735.
- the clusters are then stored in the media organization data 139 in memory 130 , in the user content 189 of the media organization server 180 , or both.
- the automatically created content clusters are presented through a user interface to a user for confirmation as user-confirmed events.
- the user can confirm a cluster without change as an event, can split one cluster into multiple events, or combine two or more clusters into a single event.
- the app 134 receives the verified events from the user interface at step 765 .
- the user can also confirm and supplement the metadata, adding descriptions and tags to the metadata as the user sees fit.
- the verified events are saved in step 770 with the media organization data 139 in memory 130 , and/or in the user content 189 of the media organization server 180 . As explained above, these data locations 139 , 189 can be designed to hold only the organizational information for the content 140 while the content 140 itself remains in its original locations unaltered.
- all of the organized content 140 can be gathered and stored together as user content 189 stored at the media organization server 180 . While this would involve a large amount of data transfer, the media organization app 134 can be programmed to upload this data only in certain environments, such as when connected to a power supply, with access to the Internet 170 via Wi-Fi Network Interface 144 , and only between the hours of midnight and 5 am. Alternatively, this data could be uploaded continuously to the remote media organization server 180 in the background while the mobile device 100 is otherwise inactive or even while the device 100 is performing other tasks.
- FIG. 8 shows a method 800 for grouping events into presentation groupings.
- This method 800 starts at step 805 , wherein events are identified by the media organization app 134 for grouping.
- Step 805 might be limited to clusters that have formally become user-verified events through steps 765 and 770 .
- the process 800 may include unverified content clusters stored at step 755 .
- the app 134 examines the metadata for each event and cluster, and then attempts to find commonalities between the events and clusters. As explained above, these commonalities can frequently be based upon event information obtained from calendar data 212 or from data obtained by outside event data 180 , 190 .
- step 810 uses commonality in the metadata that does not relate to closeness-in-time.
- the reason for this is that content that was collected close to the same time as other similar content would, in most cases, have already been clustered together into events. Consequently, it is likely that the separate events being grouped together into a presentation grouping would not share a common time with one another.
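One way to test the non-time commonality used at step 810 is sketched below; the keys checked (title, an event-database identifier, participants) follow the examples above, while the 0.5 participant-overlap threshold is purely an assumption:

```python
def share_commonality(ev_a: dict, ev_b: dict) -> bool:
    """Step 810 sketch: commonality that does not rely on closeness in time."""
    if ev_a.get("title") and ev_a["title"] == ev_b.get("title"):
        return True                  # e.g., two calendar entries titled "Title 1"
    if ev_a.get("event_id") and ev_a["event_id"] == ev_b.get("event_id"):
        return True                  # same occurrence from event servers 180/190
    pa = set(ev_a.get("participants", []))
    pb = set(ev_b.get("participants", []))
    if pa and pb:
        return len(pa & pb) / min(len(pa), len(pb)) >= 0.5
    return False
```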
- the app 134 uses the metadata from the combined events to create metadata for the presentation groupings.
- the presentation groupings and metadata are then stored at step 820 in the media organization data 139 or in the user data 189 on server 180 .
- the user is allowed to verify the presentation groupings created at step 810 .
- the user is given the ability to add events or content 140 directly to a presentation grouping, or to remove events or content 140 from the presentation grouping.
- the user is also given the ability to modify the metadata, and to format the presentation grouping as desired by the user.
- the presentation grouping may be used to create a web site, a slide show, or a video presentation of the combined content.
- numerous formatting options will be available to a user at step 825 to format the presentation grouping.
- the user modifications to the presentation groupings are stored at locations 139 or 189 , and the process 800 ends.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Data Mining & Analysis (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Databases & Information Systems (AREA)
- Multimedia (AREA)
- Library & Information Science (AREA)
- Signal Processing (AREA)
- Life Sciences & Earth Sciences (AREA)
- Artificial Intelligence (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Bioinformatics & Computational Biology (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Evolutionary Biology (AREA)
- Evolutionary Computation (AREA)
- Information Transfer Between Computers (AREA)
Abstract
A computerized system and method are presented that create implicit content on a mobile device by monitoring and recording input from sensors on the device. Metadata from the implicit content and from user-created content is then analyzed for the purpose of event identification. Using the metadata and event identification, the content is grouped into clusters, which can be confirmed by the user as actual events. Events can then be grouped according to metadata and event information into a presentation grouping.
Description
- This application is a continuation of U.S. patent application Ser. No. 13/832,177, filed Mar. 15, 2013, which is hereby incorporated by reference in its entirety.
- The present application relates to the field of computerized systems that analyze content on mobile devices for the purpose of clustering content together.
- An embodiment of the present invention creates implicit content on a mobile device by monitoring and recording input from sensors on the device. This embodiment also analyzes metadata from the implicit content and metadata from explicit content created by a user for the purpose of creating content clusters, which are confirmed by the user as actual events. Events can then be grouped according to metadata and event information into a presentation grouping.
-
FIG. 1 is a schematic diagram showing a mobile device and a plurality of servers communicating over a network. -
FIG. 2 is a schematic diagram of showing an application accepting input to form a content cluster. -
FIG. 3 is a schematic diagram showing content being clustered by a media organization app. -
FIG. 4 is a schematic diagram showing content clusters being confirmed as events through a user interface. -
FIG. 5 is a schematic diagram showing events being clustered into a presentation grouping by the media organization app. -
FIG. 6 is a flow chart showing a method for generating implicit content. -
FIG. 7 is a flow chart showing a method for content clustering. -
FIG. 8 is a flow chart showing a method for the grouping of events into presentation groupings. -
FIG. 1 shows amobile device 100 utilizing one embodiment of the present invention. Themobile device 100 can communicate over awide area network 170 with a plurality of computing devices. InFIG. 1 , themobile device 100 communicates with amedia organization server 180, a globalevent database server 190, one or morecloud content servers 192, and a third-partyinformation provider server 194. - The
mobile device 100 can take the form of a smart phone or tablet computer. As such, thedevice 100 will include adisplay 110 for displaying information to a user, aprocessor 120 for processing instructions and data for thedevice 100, amemory 130 for storing processing instructions and data, and one or moreuser input interfaces 142 to allow the user to provide instructions and data to themobile device 100. Thedisplay 110 can be use LCD, OLED, or similar technology to provide a color display for the user. In some embodiments, thedisplay 110 incorporates touchscreen capabilities so as to function as auser input interface 142. Theprocessor 120 can be a general purpose CPU, such as those provided by Intel Corporation (Mountain View, Calif.) or Advanced Micro Devices, Inc. (Sunnyvale, Calif.), or a mobile specific processor, such as those designed by ARM Holdings (Cambridge, UK). Mobile devices such asdevice 100 generally use specific operating systems designed for such devices, such as iOS from Apple Inc. (Cupertino, Calif.) or ANDROID OS from Google Inc. (Menlo Park, Calif.). The operating systems are stored on thememory 130 and are used by theprocessor 120 to provide a user interface for thedisplay 110 anduser input devices 142, handle communications for thedevice 100, and to manage applications (or apps) that are stored in thememory 130. Thememory 130 is shown inFIG. 1 with two different types of apps, namelycontent creation apps 132 and amedia organization app 134. Thecontent creation apps 132 are apps that createexplicit media content 136 in thememory 130, and include video creation apps, still image creation apps, and audio recording apps. Themedia organization app 134 createsimplicit content 138. Themedia organization app 134 is responsible for gathering the different types ofexplicit media content 136 and the implicit content 138 (referred to together as content 140), analyzing thecontent 140, and then organizing thecontent 140 into clusters, events, and presentation groupings that are stored inmedia organization data 139 as described below. - The
mobile device 100 communicates over thenetwork 170 through one of two network interfaces, namely a Wi-Fi network interface 144 and acellular network interface 146. The Wi-Fi network interface 144 connects thedevice 100 to a local wireless network that provides connection to thewide area network 170. The Wi-Fi network interface 144 preferably connects via one of the Institute of Electrical and Electronics Engineers' (IEEE) 802.11 standards. In one embodiment, the local network is based on TCP/IP, and the Wi-Fi network interface includes TCP/IP protocol stacks. Thecellular network interface 146 communicates over a cellular data network. The provider of the cellular data network then provides an interface to thewide area network 170. In one embodiment, thewide area network 170 is the Internet. - The
mobile device 100 usessensors 150 for a variety of purposes on thedevice 100. In the present embodiment, thesensors 150 provide the means to createmedia content 136. Thecontent creation apps 132 respond to signals from theuser input 142 to capturemedia content 136 using thecamera sensor 152 and the microphone 154. These types ofmedia content 136 are known as “explicit media content” because the user has explicitly requested that themobile device 100 capture and store thismedia content 136. For instance, a user might instruct aphoto taking app 132 to take a still photograph using thecamera 152, or to stitch together a stream of input from thecamera sensor 152 into a panorama image that is stored asexplicit media content 136. Amovie app 132 might record input from thecamera 152 andmicrophone 154 sensors as avideo file 136. Or avoice memo app 132 might record input from themicrophone sensor 154 to create an audiomedia content file 136. In each case, thesecontent creation apps 132 respond to an explicit request from a user to create themedia content 136. In most cases, theexplicit media content 136 is stored as a file or a data record in thememory 130 of themobile device 100. This file or data record includes both the actual content recorded by thesensors 150 and metadata associated with that recording. The metadata will include the date and time at which themedia content 136 was recorded, as determined by theclock 156. Frequently, the metadata also includes a geographic location where themedia content 136 was created. The geographic location can be determined from theGPS sensor 158, or by using other location identifying techniques such as identifying nearby Wi-Fi networks using the Wi-Fi Network Interface 144, or through nearby cell tower identification using thecellular network interface 146. Somecontent creation apps 132 will include facial recognition capabilities in order to tag the identity of individuals within a photo orvideo file 136.Other apps 132 will allow a user a manually tag theirfiles 136 so as to identify the individuals (or “participants”) portrayed in thosemedia files 136. These identity tags can then be added to the metadata stored with themedia content file 136 inmemory 130. - In some embodiments, the
explicit media content 136 will be stored remotely on acloud content server 192. For example, all photographs taken by thecamera 152 may be stored inmemory 130 asexplicit media content 136 and may also be transmitted over one of thenetwork interfaces cloud content server 192. The locally storedexplicit media content 136 may be temporary in nature, with permanent storage provided on thecloud content server 192. In some circumstances, thecloud content server 192 will be provided by a third party, such as the FLICKR service provided by Yahoo! Inc. of Sunnyvale, Calif. - The
media organization app 134 createsimplicit content 138 by monitoring thesensors 150 on themobile device 100 and storing related data asimplicit content 138 when it monitors an interesting change in thesensors 150. For instance, themedia organization app 134 might be monitoring theGPS sensor 158 andaccelerometer 160 during a family driving vacation from Chicago, Ill. to Yellowstone National Park in Wyoming. Theaccelerometer 160 can indicate when the family car stops, and then determine the location of themobile device 100 using theGPS sensor 158. By monitoring theaccelerometer 160 and the GPS sensor 158 (at least periodically), themedia organization app 134 can determine that the car was stopped during this family vacation for 3 hours, 15 minutes in Wall, S. Dak. This data could be stored asimplicit content 138 in thememory 130. - When the
app 134 creates thisimplicit content 138, it may also uses one of the network interfaces 144, 146 to obtain additional information about thisimplicit content 138. For example, theapp 134 may contact a globalevent database server 190 that contains information about a great number of events (or “occurrences”). This type ofdatabase server 190, which is provided by several third parties over theInternet 170, allows users to specify a geographic location and a time, and theserver 190 will respond with information about occurrences happening near that location around that time. The information returned from the global event database server will generally include a title for the occurrence, a description for that occurrence, a time period during which that occurrence takes place, and an exact physical location for that occurrence. For example, during the stop in Wall, S. Dak., theapp 134 may inquire whether there are any events happening in Wall at the time the vehicle was stopped. Theevent database server 190 may indicate that at this time, a parade was happening in downtown Wall. Theapp 134 may also make inquiries from differentinformation provider servers 194, such as aserver 194 that provides weather information for a particular geographic location. By acquiring this information fromexternal database sources media organization app 134 would be able to createimplicit content 138 indicating that from 12:15 to 3:30 pm on Jul. 4, 2013, the user of themobile device 100 stopped in Wall, S. Dak. and witnessed a parade in sunny, 92 degree weather. - The
media organization app 134 can take advantage of any of thesensors 150 on themobile device 100, including thecamera 152,microphone 154,clock 156,GPS sensor 158,accelerometer 160,gyroscope 162, ambientlight sensor 164, andproximity sensor 166. Theapp 134 can define monitoring modes that determine the extent to which it monitors thevarious sensor 150. For instance, in one monitoring mode theapp 134 could provide reverse geocoding by periodically (or continually) recording a location for the user from theGPS sensor 158. In another mode, theapp 134 could monitor the accelerometer to indicate when the user is moving or has stopped moving. In a third mode, theapp 134 could periodically monitor themicrophone 154. If no interesting noises are detected, theapp 134 would wait for the next interval before it again monitored themicrophone 154. If interesting noises were detected (e.g., noises that were characteristic of human voices), theapp 134 could record a small amount of the conversation and record it asimplicit content 138 inmemory 130, along with the time and location at which the conversation was recorded. In a fourth mode, the use of another app, such as one of thecontent creation apps 132, triggers the creation of animplicit content file 138. For instance, the use of a photo ormovie app 132 may cause themedia organization app 134 to record the GPS location, the current weather, and the current event, if any, noted by the globalevent database server 190. In addition, theapp 132 in this fourth mode may record sounds from themicrophone 154 to capture conversations between the user of themobile device 100 and her photography subjects. These conversations would be stored asimplicit content 138 inmemory 130. - When requested by the user, the
When requested by the user, the media organization app 134 collects the content 140 from the memory 130 (and from cloud content servers 192) and organizes the content 140 into content clusters. Content clusters are groups of content 140 that are grouped together as belonging to a particular occurrence or event. As described below, content clusters are presented to the user for modification and verification, after which the content groupings are referred to as user-verified events. Events may involve numerous elements of content 140, or may involve only a single element of content 140. In the preferred embodiment, the content clusters and events are stored in media organization data 139. In addition, the content clusters and events could be stored on a media organization server 180 accessible by the mobile device 100 over the network 170.
The media organization server 180 contains a programmable digital processor 182, such as a general purpose CPU manufactured by Intel Corporation (Mountain View, Calif.) or Advanced Micro Devices, Inc. (Sunnyvale, Calif.). The server 180 further contains a wireless or wired network interface 184 to communicate with remote computing devices, such as mobile device 100, over the network 170. The processor 182 is programmed using a set of software instructions stored on a non-volatile, non-transitory, computer readable medium 186, such as a hard drive or flash memory device. The software typically includes operating system software, such as LINUX (available from multiple companies under open source licensing terms) or WINDOWS (available from Microsoft Corporation of Redmond, Wash.).
The processor 182 performs the media organization functions of server 180 under the direction of application programming 187. Each user of the server 180 is separately defined and identified in the user data 188. The media organization app 134 can assist the user in creating an account on the media organization server 180. The account can require a username and password to access user content 189 that is stored on the server 180 on behalf of the users identified in data 188. The media organization server 180 can operate behind the media organization app 134, meaning that the user of the mobile device 100 need only access the server 180 through the user interface provided by the app 134. In addition, the media organization server 180 can provide a web-based interface to the user content 189, allowing a user to access and manipulate the user content 189 on any computing device with web access to the Internet 170. This allows users to organize their user content 189 and format presentations of that data 189 via any web browser.
Because the media organization server 180 contains information about content clusters and events created by a number of users, this server 180 can easily create its own database of past occurrences and events that could be useful to the media organization app 134 when clustering media. For instance, a first user could cluster media about a parade that they witnessed between 12:30 and 1:30 pm in Wall, S. Dak. on Jul. 4, 2013. The user could verify this cluster as a user-verified event, and could add a title and description to the event. This data would then be uploaded to the user data 188 on server 180. At a later time, a mobile device 100 of a second user could make an inquiry to the media organization server 180 about events that occurred in downtown Wall, S. Dak. at 1 pm on Jul. 4, 2013. The server 180 could identify this time and location using the event created by the previous user, and return the title and description of the event to the mobile device 100 of the second user. In effect, the media organization server 180 could become a crowd-sourced event database server providing information similar to that provided by server 190 (except likely limited to past and not future events).
FIG. 2 schematically illustrates the interaction of the media organization app 134 with content 140 and the other inputs that allow the media organization app 134 to create content clusters. In one embodiment, the content 140 is found in the physical memory 130 of the mobile device 100. In another embodiment, this data 140 is found on "the cloud" 200, meaning that the data is stored on remote servers (such as servers 180, 192) accessed by the mobile device 100 over network 170. The dual possible locations for this content 140 are shown in FIG. 2 by locating the data 140 both within the memory box 130 and the dotted cloud storage box 200.
The explicit media content 136 shown in FIG. 2 includes video content 222, photo content 232, and audio content 242. The video content 222 is created by a video app 220 running on the processor 120 of the mobile device 100. When the video content 222 is created, it is stored along with metadata 224 that describes the video content 222, including such information as when and where the video was created. Similarly, a photo app 230 creates the photo content 232 and its related metadata 234, and a voice recording app 240 creates audio content 242 and metadata 244. These three apps 220, 230, 240 may be standard apps provided with the mobile device 100. The data they create may be stored in known locations in the local memory 130 or on the cloud data system 200.
Third party or specialty apps 250, 260 can also create explicit content 136 that is accessed by the media organization app 134. The first specialty app 250 creates both photo content 232 and audio content 242, and stores this data 232, 242 and related metadata 234, 244 in the same locations in memory 130 where the standard apps 220, 230, 240 of the device 100 store similar data. The second specialty app 260 also creates explicit media content 262 and related metadata 264, but this content 262 is not stored in the standard locations in memory 130. However, as long as the media organization app 134 is informed of the location of this specialty app content 262 in memory 130, such content 262 can also be organized by the app 134.
In addition to the explicit content 222-262, the media organization app 134 also organizes implicit content 138 and its metadata 274. In one embodiment, this implicit content 138 is created by the same app 134 that organizes the content 140 into content clusters. In other embodiments, the media organization app 134 is split into two separate apps, with one app monitoring the sensors 150 and creating implicit content 138, and the other app 134 being responsible for organizing content 140.
FIG. 2 also shows a calendar app 210 creating calendar data 212 on the mobile device 100. In one embodiment, this data can be used by the media organization app 134 as it arranges content 140 into content clusters. As explained below, the calendar data 212 may have explicit descriptions describing where the user was scheduled to be at a particular time. The media organization app 134 can use this data to develop a better understanding about how to organize content 140 that was acquired at that same time. The app 134 also receives additional information about occurrences and events from the global event database server 190 and the crowd-sourced event data from the media organization server 180. The data from these sources 180, 190 is used by the app 134 as it organizes the content 140.
The app 134 accesses all this content 140 from the same locations in which the data was originally stored by the creating apps 210-260 and organizes it into content clusters using additional data from the servers 180, 190. The content 140 is organized based primarily on the metadata stored with the content 140 by the app that created that content 140. In some circumstances, the media organization app 134 can augment the metadata. For instance, the app 134 could use facial recognition (or voice recognition) data 280 available on the mobile device 100 or over the network 170 to identify participants in the content 140. Such recognition can occur using the processor 120 of the mobile device, but in most cases it is more efficient to use the processing power of a cloud content server 192 or the media organization server 180 to perform this recognition. Regardless of where it occurs, any matches to known participants will be used by the app 134 to organize the content 140.
FIG. 3 shows an example of one embodiment of a media organization app 300 organizing a plurality of items 310-370 into two content clusters 380, 390. The metadata for content one 310 indicates that it was acquired at time "Time 1" (312), at location "Loc. 1" (314), and that participants A and B participate in this content (metadata 316). Content one 310 could be, for example, a photograph of A & B, taken at Time 1 and Loc. 1. Similarly, the metadata 322-326 for content two 320 indicates that it was acquired at time "Time 1.2" (slightly later than time "Time 1"), at location "Loc. 1.1" (close to but not the same as "Loc. 1"), and included participants A & C. The metadata for content three 330 indicates only that it occurred at time "Time 2.1".
In addition to the three explicit content items 310, 320, 330, the media organization app 300 is also organizing one implicit content item 340, which has metadata indicating that it was taken at time "Time 2" and location "Loc. 1". The media organization app 300 has also obtained data 350 from one of the event database servers 180, 190. This data 350 indicates (through metadata 352-356) that an event with a description of "Descr. 1" occurred at location "Loc. 1" for the duration of "Time 1-1.2". Finally, the app 300 pulled relevant information from the calendar data 212 and discovered two relevant calendar events. The first calendar item 360 indicates that the user was to be at an event with a title of "Title 1" at time "Time 1", while the second calendar item 370 describes an event with a title of "Title 1" at time "Time 2".
The media organization app 300 gathers all of this information 310-370 together and attempts to organize it into content clusters. In this case, the app 300 identified a first cluster 380 consisting of explicit content one 310, explicit content two 320, event database information 350, and calendar item one 360. The media organization app 300 grouped these items of data 310, 320, 350, 360 together primarily because the app 300 recognized that each of these items occurred at a similar time, between "Time 1" and "Time 1.2". Furthermore, to the extent that these items identified a location, that location was at or near "Loc. 1". Duration information taken from the calendar data 212 or from the event databases 180, 190 can also guide this clustering. For example, the calendar data 212 may indicate that a party was scheduled from 6 pm to 2 am. Based on this duration information, the media organization app 300 will be more likely to cluster content from 6 pm and content at 1 am as part of the same event. Similarly, the calendar data 212 may identify a family camping trip that lasts for two days and three nights, which might cause the app 300 to group all content from that duration as a single event.
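The time-proximity grouping illustrated in FIG. 3 could be approximated by the short sketch below. The gap threshold and the item dictionary shape are assumptions for illustration; the specification describes the behavior but not an algorithm in code:

```python
# A simplified sketch of time-based clustering: items whose timestamps fall
# within a gap threshold of the previous item join the same cluster.

def cluster_by_time(items, max_gap_seconds=3600):
    """Group items (each with a numeric 'time' key) by nearby timestamps."""
    clusters = []
    for item in sorted(items, key=lambda i: i["time"]):
        if clusters and item["time"] - clusters[-1][-1]["time"] <= max_gap_seconds:
            clusters[-1].append(item)  # close in time: same cluster
        else:
            clusters.append([item])    # too far apart: start a new cluster
    return clusters
```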
Once the media organization app 300 identifies items 310, 320, 350, 360 as belonging to cluster 380, it stores this information in media organization data 139 on the mobile device 100. This information may also be stored in the user content 189 maintained for the user on the media organization server 180. The information about cluster 380 not only identifies the items of data belonging to the cluster, but also includes metadata 382 for the entire content cluster 380. This metadata 382 includes metadata from the explicit content 310-320, which indicated that the content within this cluster 380 occurred during the time duration of "Time 1-1.2" and at location "Loc. 1." The metadata from content items 310 and 320 also identified the participants in this content. In addition, because the media organization app 300 accessed the calendar data 212 and the data from the event database servers 180, 190, the metadata 382 can identify the cluster as an event with the title "Title 1" having a description "Descr. 1".
The second content cluster 390 grouped together explicit content 330, implicit content 340, and calendar item two 370, primarily because these items 330, 340, 370 indicated that they occurred at time "Time 2" or soon thereafter ("Time 2.1") and indicated either that they occurred at the same location ("Loc. 1") or did not indicate a location at all. The cluster metadata 392 for this content cluster 390 indicates the time frame ("Time 2-2.1") and location ("Loc. 1") taken from the explicit content 330 and the implicit content 340. The metadata 392 also includes the title "Title 1" from calendar item two 370, which was linked with the other items 330, 340 by the app 300.
An important feature of this embodiment of the present invention is that the clustering of content 140 takes place without user involvement. The user simply creates explicit content 136 with their mobile device 100 using their normal content creation apps 132. These apps 132 save their explicit content 136 as usual. The media organization app 300 can run in the background creating implicit content 138 (pursuant to earlier user instructions or preference settings). At a later time, the media organization app 300 gathers the content 140, makes inquiries to the external event databases 180, 190, examines the user calendar data 212, and then creates content clusters 380, 390 for the user. This later time can be when the media organization app 300 is opened by the user and the user requests that the content clustering step occur. Alternatively, this later time can occur periodically in the background. For instance, the user may request through preference settings that the content clustering and database inquiries take place every night between midnight and two a.m., but only when the mobile device 100 is plugged into a power source.
Because the content clustering shown in FIG. 3 takes place without user involvement, the media organization app 300 preferably gives the user the right to affirm or correct these clusters. In FIG. 4, content cluster one 380, cluster two 390, and a third content cluster 410 are presented to a user through a user interface, represented in FIG. 4 by element 400. The user interface 400 presents these clusters 380, 390, 410 and allows the user to confirm or reject each cluster. If a cluster such as cluster 380 is confirmed, the media organization app 300 will consider the cluster to be a user-confirmed event, such as event one 420 shown in FIG. 4. Note that event one 420 contains the same metadata 382 that the content cluster 380 had before it was confirmed.
FIG. 4 , themedia organization app 300 createdseparate clusters Time 2” and cluster three 410 occurring at time “Time 2.5.” While theapp 300 viewed these time frames as different enough as to create twoseparate clusters FIG. 4 chose to combine theseparate clusters metadata 432 for event two 430 includes a time frame “Time 2-2.5” derived from themetadata original content clusters metadata 432 also can contain user added additions, such as theuser description 433 of thisevent 430. - Each user-defined event includes one or
Each user-defined event includes one or more content items 140 that relate to a particular event that was likely attended by the user. The event might be a wedding, a party with a friend, or a child's swim meet. By clustering the content 140 together into events 420, 430, the media organization app 300 provides an organizational structure for the content 140. Furthermore, these events 420, 430 are enhanced by the implicit content 138 and by the added data from the calendar data 212 or one of the event databases 180, 190.
FIG. 5 , themedia organization app 300 is being used to establish apresentation grouping 500. Apresentation grouping 500 is a grouping of two or more events according to a common subject for presentation together. The presentation may be slide show, a video file, a web site, or some unique combination that combines the media frommultiple events Events events multiple presentation groupings 500, while other events will not be grouped into anypresentation groupings 500. - In
FIG. 5 , event one 420 is shown having title “Title 1” taken from the calendar item one 360 and event two 430 also has a title of “Title 1” taken from calendar item two 370. Themedia organization app 300 recognizes this commonality, and then suggests that these twoevents single presentation grouping 500. Thisgrouping 500 contains bothevents metadata events FIG. 5 ,metadata 502 that was shared by allevents presentation grouping 500 are bolded (namely the timeframe “Time 1-2.5”, the location “Loc. 1” and the title “Title 1”), which indicates that these elements in themetadata 502 are most likely to apply to the presentation grouping as a whole 500. - Frequently, many events will be combined into a
Frequently, many events will be combined into a single presentation grouping 500. For instance, a user may have ten calendar entries all labeled "Third Grade Swim Meet." Although this parent attended all ten swim meets, the parent took pictures (i.e., created explicit media content 136) at only six of these meets. The media organization app 300 will cluster this content 136 into six content clusters, with each cluster also containing a calendar entry with the same "Third Grade Swim Meet" title. Because of this commonality, the app 300 will automatically create a presentation grouping 500 containing content 136 from all six swim meets without including intervening content that is not related to the swim meets.
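Grouping events by a shared title, as in the swim meet example, could be sketched as follows. The event dictionary shape and the two-event minimum are illustrative assumptions:

```python
# Sketch of grouping user-confirmed events into presentation groupings by a
# shared title (e.g., six "Third Grade Swim Meet" events).
from collections import defaultdict

def group_by_title(events):
    groupings = defaultdict(list)
    for event in events:
        title = event.get("title")
        if title:
            groupings[title].append(event)
    # only titles shared by two or more events become presentation groupings
    return {t: evts for t, evts in groupings.items() if len(evts) > 1}
```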
FIG. 5 , these twoevents single presentation grouping 500 if the user had not created calendar entries with the same title “Title 1” for each event. While they shared the same location (“Loc. 1”), this might not have been enough commonality for theapp 300 to group theevents event database server 190, thispresentation grouping 500 could still be created. As long as one item in a cluster identifies a location and another identifies a time, then the globalevent database server 190 should be able to identify any events were scheduled at the same location and time. Eachevent global event server 190, and themedia organization app 300 would be able to group thesame events presentation grouping 500. - Alternatively, another parent of a child in the third grade swim team may have created and labeled events using the
Alternatively, another parent of a child on the third grade swim team may have created and labeled events using the media organization app 300. When this data was uploaded to the media organization server 180, the server 180 would then have knowledge of these swim meets. When the next user attempts to cluster content taken at the same swim meets, the media organization app 300 would query the server 180 and receive an identification of these swim meets, which would be added into their own events 420, 430.
FIG. 6 shows a method 600 that is used to create implicit content 138 on the mobile device 100. The method begins at step 610, during which a user selects a particular mode to be used to monitor the sensors 150 of the mobile device 100. The selected monitoring mode establishes which of the sensors 150 will be monitored by the method 600, and also establishes a trigger that will be used to start recording data. For example, a walking tour mode could be established in which an accelerometer is routinely (every few seconds) measured to determine whether an individual is currently walking (or running). A trigger event could be defined to detect a change in the walking status of the individual (e.g., a user who was walking is now standing still, or vice versa). Alternatively, the trigger could require that the change in status last more than two minutes. This alternative walking tour mode would be designed to record when the user starts walking or stops walking, but would not record temporary stops (of less than two minutes). So a user who is walking down a path may meet a friend and talk for ten minutes, and then continue down the path. When the user reaches a restaurant, the user stops, has lunch, and then returns home. This mode would record when the user started walking, when the user stopped to talk to the friend, when the user started again, when the user ate lunch, when the user finished lunch and started walking home, and when the user returned home. This mode would not record when the user stopped to get a drink of water (because the user stopped for less than two minutes), or when the user got up at lunch to wash his hands (because the user walked for less than two minutes). Other modes might include a car trip mode, which would monitor an accelerometer and GPS device to record car stops that lasted longer than an hour, or a lunch conversation mode, which randomly monitors the microphone to listen for human voices and records one minute of the conversation if voices are recognized. The point of selecting a monitoring mode in step 610 is to ensure that the user approves of the monitoring of the sensors 150 that must be done to create implicit content 138, and that the user desires to create this type of content 138.
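The two-minute debounce in the alternative walking tour mode could be sketched as follows. The sensor API (`is_walking`), polling interval, and `record` callback are assumptions for illustration:

```python
# Sketch of the alternative walking tour mode: a change in walking status
# triggers recording only once the new status has lasted two minutes.
import time

DEBOUNCE_SECONDS = 120

def walking_tour(sensors, record):
    last_status = sensors.accelerometer.is_walking()
    candidate_status, candidate_since = last_status, None
    while True:
        status = sensors.accelerometer.is_walking()
        if status != last_status:
            if status != candidate_status:
                # a new status change: start timing it
                candidate_status, candidate_since = status, time.time()
            elif time.time() - candidate_since >= DEBOUNCE_SECONDS:
                # change has persisted two minutes: record it
                record({"walking": status, "since": candidate_since})
                last_status = status
        else:
            # reverted before two minutes elapsed: temporary stop, ignore it
            candidate_status, candidate_since = last_status, None
        time.sleep(2)
```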
Once the mode is established, the processor 120 will monitor the sensors 150 of the mobile device 100 at step 620, looking for a triggering event. The sensors 150 to be monitored and the triggering event will be determined by the selected monitoring mode. If the processor 120 detects a trigger at step 630, the processor 120 will record data from the sensors 150 in step 640. Note that the data recorded from the sensors 150 does not have to be limited to, or even include, the sensor data that was used to detect the trigger in step 630. For instance, the triggering event may be that the user took their cellular phone 100 out of their pocket. This could be determined by monitoring the accelerometer 160 and the ambient light sensor 164. When this occurs, the processor 120 might record the location of the device 100 as indicated by the GPS sensor 158, the current time as indicated by the clock 156, and the next two minutes of conversation as received by the microphone 154.
Step 650 determines whether data from external sources is to be included as part of this implicit content 138. Such data may include, for example, the weather at the current location of the device 100, or the presence of mobile devices 100 belonging to friends in the general proximity. If step 650 determines that external data will be included, a request for external data is made in step 652, and the results of that request are received in step 654. For example, the media organization app 134 might request local weather information from another app on the mobile device 100 or from a weather database 194 accessible over the network 170. Alternatively, a "locate my friends" app that detects the presence of mobile devices belonging to a user's friends could be requested to identify any friends that are nearby at this time. The data from these apps or remote servers is received at step 654, and combined with the data recorded from the sensors 150 at step 640.
At step 660, a determination is made whether to save this accumulated data. In some circumstances, a monitoring mode may establish that the data gathered after a triggering event (step 630) is always to be stored as implicit content 138. In other circumstances, the monitoring mode may impose requirements before the data can be saved. For instance, the lunch conversation mode may not save the recorded audio as implicit content 138 if analysis of the recording indicates that the voices would be too muffled to be understood. If the condition for saving the data under the monitoring mode is met at step 660, then the data (including both the sensor data recorded at step 640 and the external data received at step 654) is recorded as implicit content at step 670. If step 660 determines that the condition is not met, step 670 is skipped. At step 680, the process 600 either returns to monitoring the device sensors 150 at step 620, or ends, depending on whether additional monitoring is expected by the monitoring mode.
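The conditional save at steps 660-670 might look like the sketch below. The per-mode `save_condition` predicate and the record shape are assumptions; the specification only describes the behavior:

```python
# Sketch of the step 660 check: a monitoring mode may attach a predicate
# that accumulated data must satisfy before being stored as implicit content.

def maybe_save(mode, sensor_data, external_data, store):
    record = {**sensor_data, **external_data}
    # save_condition is a hypothetical per-mode predicate; a mode without
    # one always saves (step 660 is trivially satisfied).
    condition = getattr(mode, "save_condition", None)
    if condition is None or condition(record):
        store(record)   # step 670: record as implicit content
        return True
    return False        # condition not met: step 670 is skipped
```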
FIG. 7 shows a method 700 for clustering content 140 into content clusters. The process 700 starts at step 705 by gathering the explicit content 136 from the memory 130 on the mobile device 100, a cloud storage server 192, or both. Next, the implicit content 138 is gathered at step 710, again either from memory 130 or from the user content storage 189 at server 180. These steps 705, 710 may gather all available content 140, or only the new content 140 added since the last time the app 134 organized the content 140.
At step 715, the media organization app 134 accesses facial or voice recognition data 280 in order to supplement the participant information found in the metadata for the gathered content 140. Of course, this step 715 could be skipped if participant information was already adequately found in the metadata for the content 140, or if no participant recognition data 280 were available to the app 134.
At step 720, the media organization app 134 analyzes the metadata for the content 140, paying particular attention to location, time, participant, and title metadata (if available) for the content 140. Using the time information taken from the content 140, the app 134 analyzes the calendar data 212 looking for any calendar-defined events that relate to the content 140 being analyzed (step 725). In addition, the app 134 uses the time and location information from the content 140 to search for occurrence information from one or more third party event databases 190 (step 730). The app 134 also makes a similar query at step 735 to the crowd-sourced event definitions maintained by the media organization server 180. If the calendar data or the responses to the queries made in steps 730 and 735 contain data relevant to the content 140 being analyzed, such data will be included with the content 140 at step 740.
At step 745, the content 140 and the relevant data from steps 725-735 are clustered together by comparing metadata from the content 140 and the added data. In one embodiment, clusters are based primarily on similarities in time metadata. In this embodiment, the app 134 attempts to group the content 140 by looking for clusters in the time metadata. In other embodiments, location metadata is also examined, whereby the app 134 ensures that no content cluster contains data from disparate locations.
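The location check described for the latter embodiments could be sketched as follows. The distance threshold, the haversine helper, and the item shape are assumptions for illustration:

```python
# Sketch of the location constraint at step 745: after time-based clustering,
# items reporting locations far from the cluster's first located item are
# split off to seed a new cluster.
import math

def km_apart(a, b):
    """Approximate great-circle distance between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 6371 * 2 * math.asin(math.sqrt(h))

def split_disparate_locations(cluster, max_km=5.0):
    located = [i for i in cluster if "latlon" in i]
    kept, moved = list(cluster), []
    for item in located[1:]:
        if km_apart(located[0]["latlon"], item["latlon"]) > max_km:
            kept.remove(item)
            moved.append(item)  # becomes the seed of a new cluster
    return kept, moved
```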
At step 750, metadata is created for the content clusters by examining the metadata from the content 140 and the additional data obtained through steps 725-735. At step 755, the clusters are then stored in the media organization data 139 in memory 130, in the user content 189 of the media organization server 180, or both.
At step 760, the automatically created content clusters are presented through a user interface to a user for confirmation as user-confirmed events. The user can confirm a cluster without change as an event, can split one cluster into multiple events, or can combine two or more clusters into a single event. The app 134 receives the verified events from the user interface at step 765. The user can also confirm and supplement the metadata, adding descriptions and tags as the user sees fit. Finally, the verified events are saved in step 770 with the media organization data 139 in memory 130, and/or in the user content 189 of the media organization server 180. As explained above, these data locations 139, 189 may contain only the organizational information about the content 140, while the content 140 itself remains in its original locations unaltered. Alternatively, all of the organized content 140 can be gathered and stored together as user content 189 at the media organization server 180. While this would involve a large amount of data transfer, the media organization app 134 can be programmed to upload this data only in certain environments, such as when connected to a power supply, with access to the Internet 170 via the Wi-Fi network interface 144, and only between the hours of midnight and 5 am. Alternatively, this data could be uploaded continuously to the remote media organization server 180 in the background while the mobile device 100 is otherwise inactive, or even while the device 100 is performing other tasks.
FIG. 8 shows a method 800 for grouping events into presentation groupings. The method 800 starts at step 805, wherein events are identified by the media organization app 134 for grouping. Step 805 might be limited to clusters that have formally become user-verified events through steps 760-770, or the process 800 may also include the unverified content clusters stored at step 755. At step 810, the app 134 examines the metadata for each event and cluster, and then attempts to find commonalities between the events and clusters. As explained above, these commonalities can frequently be based upon event information obtained from the calendar data 212 or from the outside event databases 180, 190.

In one embodiment, step 810 uses commonality in the metadata that does not relate to closeness-in-time. The reason for this is that content collected close in time to other similar content would, in most cases, have already been clustered together into events. Consequently, it is likely that the separate events being grouped together into a presentation grouping would not share a common time with one another. However, it may be useful to recognize commonalities in the time metadata that are not related to closeness-in-time.
For instance, the app 134 may recognize that numerous content clusters or events occur on Thursday evenings from 6 pm to 8 pm. The app 134 may recognize this as a connection between the events, and therefore propose combining all events that occur on Thursday evenings from 6 pm to 8 pm as part of a presentation grouping.
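Detecting a recurring weekday/hour slot of this kind could be sketched as follows. The event shape and the minimum repeat count are assumptions for illustration:

```python
# Sketch of the recurring-time commonality described above: flag events that
# repeatedly fall in the same weekday/hour slot (e.g., Thursdays at 6 pm).
from collections import defaultdict
from datetime import datetime

def recurring_slots(events, min_count=3):
    slots = defaultdict(list)
    for event in events:
        start = datetime.fromtimestamp(event["start"])
        slots[(start.weekday(), start.hour)].append(event)
    # slots hit repeatedly suggest a presentation grouping candidate
    return {slot: evts for slot, evts in slots.items() if len(evts) >= min_count}
```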
At step 815, the app 134 uses the metadata from the combined events to create metadata for the presentation groupings. The presentation groupings and metadata are then stored at step 820 in the media organization data 139 or in the user data 189 on server 180.
At step 825, the user is allowed to verify the presentation groupings created at step 810. The user is given the ability to add events or content 140 directly to a presentation grouping, or to remove events or content 140 from the presentation grouping. The user is also given the ability to modify the metadata, and to format the presentation grouping as desired. As explained above, the presentation grouping may be used to create a web site, a slide show, or a video presentation of the combined content. As a result, numerous formatting options will be available to a user at step 825 to format the presentation grouping. At step 830, the user modifications to the presentation groupings are stored at locations 139 and/or 189, and the process 800 ends.

The many features and advantages of the invention are apparent from the above description. Numerous modifications and variations will readily occur to those skilled in the art. Since such modifications are possible, the invention is not to be limited to the exact construction and operation illustrated and described. Rather, the present invention should be limited only by the following claims.
Claims (1)
1. A mobile communication device comprising:
a) a processor that is controlled via programming instructions;
b) a non-transitory computer readable memory;
c) a user input device for receiving explicit input instructions from a user;
d) an optical sensor;
e) non-optical sensors selected from a group consisting of an accelerometer, a gyroscope, and a location identifying sensor;
f) explicit content generation programming stored on the memory and performed by the processor, the explicit content generation programming causing the processor to respond to an explicit input instruction from the user input device by storing image content on the memory, the image content including:
i) an image file recorded by the optical sensor, and
ii) image time metadata indicating the time at which the image file was captured;
g) implicit content generation programming stored on the memory and performed by the processor, the implicit content generation programming causing the processor to:
i) monitor the non-optical sensors;
ii) identify a change in the non-optical sensors;
iii) in response to the change in the non-optical sensors, store implicit content on the memory, the implicit content including
(1) an indication of the change in the non-optical sensors, and
(2) implicit time metadata identifying the time at which the change in the non-optical sensors was identified;
h) content clustering programming that groups the image content and the implicit content into a cluster based on similarities between the image time metadata and the implicit time metadata.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US15/475,867 US20170300513A1 (en) | 2013-03-15 | 2017-03-31 | Content Clustering System and Method |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US13/832,177 US9626365B2 (en) | 2013-03-15 | 2013-03-15 | Content clustering system and method |
US15/475,867 US20170300513A1 (en) | 2013-03-15 | 2017-03-31 | Content Clustering System and Method |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US13/832,177 Continuation US9626365B2 (en) | 2013-03-15 | 2013-03-15 | Content clustering system and method |
Publications (1)
Publication Number | Publication Date |
---|---|
US20170300513A1 true US20170300513A1 (en) | 2017-10-19 |
Family
ID=51533126
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US13/832,177 Expired - Fee Related US9626365B2 (en) | 2013-03-15 | 2013-03-15 | Content clustering system and method |
US15/475,867 Abandoned US20170300513A1 (en) | 2013-03-15 | 2017-03-31 | Content Clustering System and Method |
Family Applications Before (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US13/832,177 Expired - Fee Related US9626365B2 (en) | 2013-03-15 | 2013-03-15 | Content clustering system and method |
Country Status (1)
Country | Link |
---|---|
US (2) | US9626365B2 (en) |
Families Citing this family (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9942334B2 (en) | 2013-01-31 | 2018-04-10 | Microsoft Technology Licensing, Llc | Activity graphs |
US10007897B2 (en) * | 2013-05-20 | 2018-06-26 | Microsoft Technology Licensing, Llc | Auto-calendaring |
US10261672B1 (en) * | 2014-09-16 | 2019-04-16 | Amazon Technologies, Inc. | Contextual launch interfaces |
US10958778B1 (en) * | 2014-09-17 | 2021-03-23 | Peggy S. Miller | Contact system for a computing platform |
US20160125061A1 (en) * | 2014-10-29 | 2016-05-05 | Performance Content Group Inc. | System and method for content selection |
US9990128B2 (en) | 2016-06-12 | 2018-06-05 | Apple Inc. | Messaging application interacting with one or more extension applications |
US10785175B2 (en) | 2016-06-12 | 2020-09-22 | Apple Inc. | Polling extension application for interacting with a messaging application |
US10595169B2 (en) | 2016-06-12 | 2020-03-17 | Apple Inc. | Message extension app store |
US10852912B2 (en) | 2016-06-12 | 2020-12-01 | Apple Inc. | Image creation app in messaging app |
US10505872B2 (en) * | 2016-06-12 | 2019-12-10 | Apple Inc. | Messaging application interacting with one or more extension applications |
CN112004031B (en) * | 2020-07-31 | 2023-04-07 | 北京完美知识科技有限公司 | Video generation method, device and equipment |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20130275886A1 (en) * | 2012-04-11 | 2013-10-17 | Myriata, Inc. | System and method for transporting a virtual avatar within multiple virtual environments |
Family Cites Families (44)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8271336B2 (en) * | 1999-11-22 | 2012-09-18 | Accenture Global Services Gmbh | Increased visibility during order management in a network-based supply chain environment |
US7130807B1 (en) * | 1999-11-22 | 2006-10-31 | Accenture Llp | Technology sharing during demand and supply planning in a network-based supply chain environment |
DE20213627U1 (en) | 2002-08-30 | 2003-08-28 | Müller, Heiko, 01099 Dresden | Mobile recording media |
US20040177149A1 (en) | 2003-03-05 | 2004-09-09 | Zullo Paul F. | System and method for presentation at the election of a user of media event information and further media event information of media events all related to a preselected time period |
US20070136773A1 (en) | 2005-12-14 | 2007-06-14 | O'neil Douglas | Systems and methods for providing television services using implicit content to indicate the availability of additional content |
US20080032739A1 (en) * | 2005-12-21 | 2008-02-07 | Faraz Hoodbhoy | Management of digital media using portable wireless devices in a client-server network |
US7673248B2 (en) | 2006-11-06 | 2010-03-02 | International Business Machines Corporation | Combining calendar entries with map views |
US8515460B2 (en) * | 2007-02-12 | 2013-08-20 | Microsoft Corporation | Tagging data utilizing nearby device information |
US8234586B2 (en) | 2008-03-26 | 2012-07-31 | Microsoft Corporation | User interface framework and techniques |
JP2009253847A (en) | 2008-04-09 | 2009-10-29 | Canon Inc | Information processing apparatus and method of controlling the same, program, and storage medium |
US8813107B2 (en) * | 2008-06-27 | 2014-08-19 | Yahoo! Inc. | System and method for location based media delivery |
US8452855B2 (en) * | 2008-06-27 | 2013-05-28 | Yahoo! Inc. | System and method for presentation of media related to a context |
US20090327288A1 (en) | 2008-06-29 | 2009-12-31 | Microsoft Corporation | Content enumeration techniques for portable devices |
US8281027B2 (en) * | 2008-09-19 | 2012-10-02 | Yahoo! Inc. | System and method for distributing media related to a location |
US8914228B2 (en) | 2008-09-26 | 2014-12-16 | Blackberry Limited | Method of sharing event information and map location information |
US9043276B2 (en) | 2008-10-03 | 2015-05-26 | Microsoft Technology Licensing, Llc | Packaging and bulk transfer of files and metadata for synchronization |
US20100174998A1 (en) | 2009-01-06 | 2010-07-08 | Kiha Software Inc. | Calendaring Location-Based Events and Associated Travel |
US8862987B2 (en) | 2009-03-31 | 2014-10-14 | Intel Corporation | Capture and display of digital images based on related metadata |
US9218530B2 (en) * | 2010-11-04 | 2015-12-22 | Digimarc Corporation | Smartphone-based methods and systems |
US8831279B2 (en) * | 2011-03-04 | 2014-09-09 | Digimarc Corporation | Smartphone-based methods and systems |
US20110167357A1 (en) * | 2010-01-05 | 2011-07-07 | Todd Benjamin | Scenario-Based Content Organization and Retrieval |
WO2011097405A1 (en) | 2010-02-03 | 2011-08-11 | Nik Software, Inc. | Narrative-based media organizing system for converting digital media into personal story |
US8422852B2 (en) | 2010-04-09 | 2013-04-16 | Microsoft Corporation | Automated story generation |
US9936333B2 (en) * | 2010-08-10 | 2018-04-03 | Microsoft Technology Licensing, Llc | Location and contextual-based mobile application promotion and delivery |
US8327284B2 (en) | 2010-08-24 | 2012-12-04 | Apple Inc. | Acquisition and presentation of dynamic media asset information for events |
US8327253B2 (en) | 2010-11-09 | 2012-12-04 | Shutterfly, Inc. | System and method for creating photo books using video |
US9026909B2 (en) | 2011-02-16 | 2015-05-05 | Apple Inc. | Keyword list view |
US8879890B2 (en) * | 2011-02-21 | 2014-11-04 | Kodak Alaris Inc. | Method for media reliving playback |
GB201104542D0 (en) | 2011-03-17 | 2011-05-04 | Rose Anthony | Content provision |
US10013136B2 (en) * | 2011-09-29 | 2018-07-03 | Michael L Bachman | User interface, method and system for crowdsourcing event notification sharing using mobile devices |
US9009596B2 (en) | 2011-11-21 | 2015-04-14 | Verizon Patent And Licensing Inc. | Methods and systems for presenting media content generated by attendees of a live event |
US8745057B1 (en) | 2011-11-28 | 2014-06-03 | Google Inc. | Creating and organizing events in an activity stream |
US20130144847A1 (en) | 2011-12-05 | 2013-06-06 | Google Inc. | De-Duplication of Featured Content |
US9009159B2 (en) | 2012-01-23 | 2015-04-14 | Microsoft Technology Licensing, Llc | Population and/or animation of spatial visualization(s) |
US9591181B2 (en) | 2012-03-06 | 2017-03-07 | Apple Inc. | Sharing images from image viewing and editing application |
US9177007B2 (en) | 2012-05-14 | 2015-11-03 | Salesforce.Com, Inc. | Computer implemented methods and apparatus to interact with records using a publisher of an information feed of an online social network |
US9665773B2 (en) | 2012-06-25 | 2017-05-30 | Google Inc. | Searching for events by attendants |
US20140068433A1 (en) * | 2012-08-30 | 2014-03-06 | Suresh Chitturi | Rating media fragments and use of rated media fragments |
US9892203B2 (en) * | 2012-10-29 | 2018-02-13 | Dropbox, Inc. | Organizing network-stored content items into shared groups |
US9294787B2 (en) * | 2012-11-16 | 2016-03-22 | Yahoo! Inc. | Ad hoc collaboration network for capturing audio/video data |
US8745617B1 (en) * | 2013-02-11 | 2014-06-03 | Google Inc. | Managing applications on a client device |
US20140236709A1 (en) * | 2013-02-16 | 2014-08-21 | Ncr Corporation | Techniques for advertising |
US9886173B2 (en) * | 2013-03-15 | 2018-02-06 | Ambient Consulting, LLC | Content presentation and augmentation system and method |
US9460057B2 (en) * | 2013-03-15 | 2016-10-04 | Filmstrip, Inc. | Theme-based media content generation system and method |
Patent Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20130275886A1 (en) * | 2012-04-11 | 2013-10-17 | Myriata, Inc. | System and method for transporting a virtual avatar within multiple virtual environments |
Also Published As
Publication number | Publication date |
---|---|
US20140280122A1 (en) | 2014-09-18 |
US9626365B2 (en) | 2017-04-18 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20170300513A1 (en) | Content Clustering System and Method | |
US10185476B2 (en) | Content presentation and augmentation system and method | |
US9460057B2 (en) | Theme-based media content generation system and method | |
US10365797B2 (en) | Group membership content presentation and augmentation system and method | |
US10803112B2 (en) | Dynamic tagging recommendation | |
US10043059B2 (en) | Assisted photo-tagging with facial recognition models | |
US10891342B2 (en) | Content data determination, transmission and storage for local devices | |
US8774452B2 (en) | Preferred images from captured video sequence | |
US8885960B2 (en) | Linking photographs via face, time, and location | |
US20140304019A1 (en) | Media capture device-based organization of multimedia items including unobtrusive task encouragement functionality | |
KR102015067B1 (en) | Capturing media content in accordance with a viewer expression | |
US10102208B2 (en) | Automatic multimedia slideshows for social media-enabled mobile devices | |
US20190087500A1 (en) | Media selection and display based on conversation topics | |
US10282061B2 (en) | Electronic device for playing-playing contents and method thereof | |
EP1630694A2 (en) | System and method to associate content types in a portable communication device | |
US10318574B1 (en) | Generating moments | |
US20150139508A1 (en) | Method and apparatus for storing and retrieving personal contact information | |
US20150147045A1 (en) | Computer ecosystem with automatically curated video montage | |
JP2021535508A (en) | Methods and devices for reducing false positives in face recognition | |
Adams et al. | Extraction of social context and application to personal multimedia exploration | |
WO2012109330A1 (en) | Content storage management in cameras | |
KR20150110330A (en) | Method for collection multimedia information and device thereof | |
CN112352233A (en) | Automated digital asset sharing advice | |
JP7515552B2 (en) | Automatic generation of people groups and image-based creations | |
KR20150112789A (en) | Method for sharing data of electronic device and electronic device thereof |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| STCB | Information on status: application discontinuation | Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |