US20190045252A1 - Digital video file generation - Google Patents
- Publication number
- US20190045252A1 (Application No. US16/146,484)
- Authority
- US
- United States
- Prior art keywords
- digital video
- video files
- files
- user
- digital
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/25—Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication, learning user preferences for recommending movies
- H04N21/266—Channel or content management, e.g. generation and management of keys and entitlement messages in a conditional access system, merging a VOD unicast channel into a multicast channel
- H04N21/2668—Creating a channel for a dedicated end-user group, e.g. insertion of targeted commercials based on end-user profiles
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/20—Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
- G06F16/21—Design, administration or maintenance of databases
- G06F16/219—Managing data history or versioning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/20—Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
- G06F16/28—Databases characterised by their database models, e.g. relational or object models
- G06F16/284—Relational databases
- G06F16/288—Entity relationship models
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/40—Information retrieval; Database structures therefor; File system structures therefor of multimedia data, e.g. slideshows comprising image and additional audio data
- G06F16/43—Querying
- G06F16/435—Filtering based on additional data, e.g. user or group profiles
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/70—Information retrieval; Database structures therefor; File system structures therefor of video data
- G06F16/71—Indexing; Data structures therefor; Storage structures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/70—Information retrieval; Database structures therefor; File system structures therefor of video data
- G06F16/73—Querying
- G06F16/735—Filtering based on additional data, e.g. user or group profiles
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/90—Details of database functions independent of the retrieved data types
- G06F16/95—Retrieval from the web
- G06F16/953—Querying, e.g. by the use of web search engines
- G06F16/9535—Search customisation based on user profiles and personalisation
-
- G06F17/30029—
-
- G06F17/30309—
-
- G06F17/30604—
-
- G06F17/30828—
-
- G06F17/30858—
-
- G06F17/30867—
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0481—Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
- G06F3/0482—Interaction with lists of selectable items, e.g. menus
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0484—Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11B—INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
- G11B27/00—Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
- G11B27/02—Editing, e.g. varying the order of information signals recorded on, or reproduced from, record carriers
- G11B27/031—Electronic editing of digitised analogue information signals, e.g. audio or video signals
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/23—Processing of content or additional data; Elementary server operations; Server middleware
- H04N21/234—Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
- H04N21/23418—Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving operations for analysing video streams, e.g. detecting features or characteristics
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/27—Server based end-user applications
- H04N21/274—Storing end-user multimedia data in response to end-user request, e.g. network recorder
- H04N21/2743—Video hosting of uploaded data from client
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/47—End-user applications
- H04N21/472—End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
- H04N21/47205—End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content for manipulating displayed content, e.g. interacting with MPEG-4 objects, editing locally
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/80—Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
- H04N21/85—Assembly of content; Generation of multimedia applications
- H04N21/854—Content authoring
- H04N21/8547—Content authoring involving timestamps for synchronizing content
Definitions
- This specification relates to technology for efficiently generating digital video files to include personalized content.
- Media personalization software, such as video and audio editing software, provides users with features that can be used to combine media content (e.g., videos, audio, images, text) in various ways.
- video editing software can allow a user to trim video clips, combine video clips, add audio tracks, and add graphics, images, and text.
- Video editing software can rely on users to retrieve and identify video clips for editing, to determine the manner and timing with which video clips are combined, and to decide the ultimate composition of the final video.
- social media platforms (e.g., FACEBOOK, TWITTER, LINKEDIN, INSTAGRAM) vary in their approach to online interactions between users, but generally provide features through which users can share information and interact with a broader collection of users on the platform.
- users on social media platforms can post content that is then distributed to other users on the social media platform, such as friends, followers, or fans of the user posting the content.
- Such distribution of content among users can be non-private in that it is broadcast among a broad group of users, which can sometimes include people without any social connection to the posting user.
- This document generally describes improved technology for personalizing media content to more consistently and efficiently generate emotionally impactful personalized media content.
- Computer systems, techniques, and devices are described for automating the personalization of media content to ensure that personalized content is presented at appropriate times and places on the underlying media content that is being personalized, and to ensure that the quality of the underlying media content is undisturbed by the personalization. For example, music and videos often have “chill” moments that are emotionally impactful for listeners/viewers, such as the chorus in the song “Let It Go” from the movie Frozen.
- the technology described in this document can automate generation of a personalized “mediagram” (personalized media content conveying a message) based on an excerpt of the song “Let It Go” with personalization (e.g., text, images, video, audio) at appropriate times and locations around the song's chorus to provide an emotionally impactful message that leverages the chill moment from the song.
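- To make the template idea above concrete, the following is a minimal Python sketch of one way a personalization template could be represented. The `Slot` and `PersonalizationTemplate` classes, field names, and example times are illustrative assumptions; the document does not specify a data format.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Slot:
    """A location in the excerpt where personal content may be inserted (illustrative)."""
    start_s: float          # seconds from the start of the excerpt
    max_duration_s: float   # how much time the slot can occupy
    media_type: str         # e.g. "text", "image", "video", "audio"

@dataclass
class PersonalizationTemplate:
    """Designates what personal content goes where, relative to the chill moment."""
    excerpt_id: str
    chill_moment_s: float                 # time of the emotionally impactful moment
    protected_span_s: Tuple[float, float] # (start, end) span that must stay unmodified
    slots: List[Slot] = field(default_factory=list)

# Hypothetical template for a ~30 s excerpt whose chorus begins at 18.0 s.
template = PersonalizationTemplate(
    excerpt_id="let-it-go-chorus",
    chill_moment_s=18.0,
    protected_span_s=(18.0, 27.5),
    slots=[
        Slot(start_s=0.0, max_duration_s=4.0, media_type="text"),    # opening message
        Slot(start_s=5.0, max_duration_s=10.0, media_type="image"),  # photos before the chorus
        Slot(start_s=27.5, max_duration_s=6.0, media_type="video"),  # clip placed after the chorus
    ],
)
```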
- This document also generally describes an improved social platform to enhance the quality of social interactions and relationships among users.
- a social platform can include a variety of features, such as private communication channels between users centered around the user relationships, relationship concierge features to facilitate and improve the quality of social interactions, time delays between social interactions to alleviate the pressure and stress on users of needing to respond quickly, temporary social interactions that are inaccessible to users involved in the interactions after a threshold period of time, private group communication channels and group relationship concierge features, relationship scoring features, personalized media content creation and distribution features, interactive and social emotional well-being meters through which users can identify their own emotional state and the states of other users, and/or combinations thereof.
- Such features can assist users in building and maintaining strong relationships with other users.
- a method for automatically generating personalized videos includes outputting, in a user interface on a client computing device, information identifying a plurality of preselected videos, wherein each of the plurality of preselected videos (i) are excerpts of longer videos and (ii) include at least one emotionally impactful moment; receiving, through an input subsystem on the client computing device, selection of a particular video from the plurality of preselected videos; retrieving, by the client computing device and in response to receiving the selection, the particular video and a personalization template for the particular video, wherein the personalization template designates particular types of media content to be combined with the video at particular locations to maximize the at least one emotionally impactful moment for an intended recipient; outputting, by the client computing device, a plurality of input fields prompting the user to select the particular types of media content from one or more personal repositories of media content; automatically retrieving, in response to user selections through the plurality of input fields, personal media content from the one or more personal repositories of media content; automatically assembling, by the client computing device
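- The guided client-side flow described above can be summarized in a short sketch. The hypothetical `client` object and its helper methods (fetching excerpts, prompting the user, downloading the excerpt and template) are placeholders, not names defined in the document.

```python
def create_mediagram(client, assemble_fn):
    """Illustrative sketch of the guided mediagram-creation flow described above."""
    excerpts = client.fetch_preselected_excerpts()          # 1. list excerpts with chill moments
    choice = client.prompt_user_choice(excerpts)            # 2. user picks a particular excerpt
    video = client.download_excerpt(choice.excerpt_id)      # 3. retrieve the excerpt...
    template = client.download_template(choice.excerpt_id)  #    ...and its personalization template
    personal_media = []
    for slot in template.slots:                             # 4. prompt for each designated media type
        personal_media.append((slot, client.prompt_user_for(slot.media_type)))
    return assemble_fn(video, template, personal_media)     # 5. automatic assembly on the device
```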
- the longer videos can be full-length music videos that include audio tracks containing full-length songs.
- the plurality of preselected videos can include audio tracks containing excerpts of the full-length songs.
- An audio track for the particular video, in its entirety, can be an excerpt of the audio track for a particular longer video.
- a video track for the particular video can include (i) a first portion that is an excerpt of a video track for the particular longer video and (ii) one or more second portions that are filler video not from the particular longer video.
- the one or more second portions of the particular video can be locations in the particular video where personalized video tracks derived from the personal media content are automatically inserted.
- the video tracks derived from the personal media content are not inserted at or over the first portion.
- the first portion of the video track can correspond to an emotionally impactful moment in the particular video.
- Personal media content that is designated as being the most emotionally impactful from among the personal media content can be automatically positioned immediately following the first portion.
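- As a rough illustration of the positioning rules above, the sketch below (reusing the illustrative template structure from the earlier example) keeps the protected first portion untouched and places the highest-impact personal item immediately after it. Representing personal items as `(media, impact_score)` pairs is an assumption.

```python
def place_personal_content(template, personal_items):
    """Assign personal items to template slots without touching the protected span."""
    p_start, p_end = template.protected_span_s
    usable = sorted((s for s in template.slots
                     if not (p_start <= s.start_s < p_end)),   # never overlap the chill moment
                    key=lambda s: s.start_s)
    before = [s for s in usable if s.start_s < p_start]
    after = [s for s in usable if s.start_s >= p_end]
    ranked = sorted(personal_items, key=lambda item: item[1], reverse=True)

    placement = []
    if after and ranked:
        # Most emotionally impactful item lands immediately after the protected span.
        placement.append((after[0], ranked.pop(0)[0]))
    for slot, (media, _score) in zip(before + after[1:], ranked):
        placement.append((slot, media))
    return placement
```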
- the longer videos can be full-length movies that include audio tracks containing full-length movie sound tracks.
- the plurality of preselected videos can include audio tracks containing excerpts of the full-length movie sound tracks.
- the personal media content can include one or more of: digital photos, digital videos, and personalized text.
- the method can further include automatically analyzing, by the client computing device, waveforms for another longer video to automatically identify an emotionally impactful moment; determining, by the client computing device, starting and end points within the other longer video based on intro and outro transition points within a threshold timestamp from the emotionally impactful moment in the other longer video; automatically generating, by the client computing device, a video excerpt from the other longer video using the starting and end points; and adding the video excerpt from the other longer video to the plurality of preselected videos.
- the method can further include generating a personalization template for the video excerpt from the other longer video based, at least in part, on the location of the emotionally impactful moment within the video excerpt.
- the automatic waveform analysis can be performed based on one or more of the following waveform characteristics: mode, volume, tempo, mood, tone, and pitch.
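- A hedged sketch of the kind of waveform analysis described above, using RMS energy as a simple stand-in for the listed characteristics (mode, volume, tempo, mood, tone, and pitch). The librosa dependency, and energy peaks and local minima as proxies for the impactful moment and the intro/outro transition points, are assumptions rather than the document's method.

```python
import numpy as np
import librosa

def find_excerpt_bounds(audio_path, window_s=30.0, hop_length=512):
    """Return (start, peak_time, end) for a candidate excerpt around an energy peak."""
    y, sr = librosa.load(audio_path, sr=None, mono=True)
    rms = librosa.feature.rms(y=y, hop_length=hop_length)[0]
    times = librosa.frames_to_time(np.arange(len(rms)), sr=sr, hop_length=hop_length)

    peak_idx = int(np.argmax(rms))          # crude "emotionally impactful moment" proxy
    peak_time = float(times[peak_idx])

    # Pick intro/outro transition points near local energy minima within a
    # threshold window around the peak, so the excerpt starts and ends cleanly.
    in_window = np.where((times >= peak_time - window_s) & (times <= peak_time + window_s))[0]
    before = in_window[times[in_window] < peak_time]
    after = in_window[times[in_window] > peak_time]
    start = float(times[before[np.argmin(rms[before])]]) if len(before) else 0.0
    end = float(times[after[np.argmin(rms[after])]]) if len(after) else float(times[-1])
    return start, peak_time, end
```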
- the personalized video can include a mediagram that is intended to provide an emotionally impactful message that is specifically tailored to a relationship between a sender and recipient.
- a method for providing a social media platform for enhancing and improving social interactions among users includes retrieving, by a relationship concierge running on a social media system, (i) user profiles for a first user and a second user, and (ii) a relationship profile for a relationship between the first user and the second user; retrieving, by the relationship concierge, historical interactions among the first user and the second user on the social media platform; determining, by the relationship concierge, whether to provide a social interaction prompt to one or more of the first user and the second user based on the user profiles, the relationship profile, and the historical interactions, wherein the social interaction prompt provides a call to action for interaction within the relationship between the first user and the second user; identifying, in response to determining that the social interaction prompt is to be provided, the first user from among the first and second users as the recipient of the social interaction prompt; automatically transmitting, by the relationship concierge and without a request from either the first or second user, the social interaction prompt to a first computing device for the first user, wherein the social interaction prompt is only visible to the
- the social interaction prompt can include a question that is posed to the first user.
- the social interaction prompt can include the first user being directed to create a mediagram for the second user.
- the mediagram can include a personalized video segment that is automatically personalized to provide an emotionally impactful message that is particularly tailored to the relationship between the first and second users.
- the social interaction prompt can include an interactive game to be played by the first and second users.
- the first and second users can interact on the social platform via a private wall that is exclusive to the first and second users.
- the social interaction prompt can be initially only visible on the private wall to the first user.
- the social interaction prompt can become visible on the private wall to the second user only after and in combination with the response to the social interaction prompt by the first user.
- the second user can be delayed from replying to the response for at least a threshold period of time following the response and the social interaction prompt appearing to the second user on the private wall.
- the response and the social interaction prompt can be automatically deleted from the private wall after a threshold amount of time or interactions have elapsed since they appeared on the private wall.
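- As a rough sketch of how a relationship concierge might decide whether and whom to prompt, the following uses the time since the last interaction and which user has initiated fewer interactions. The thresholds, field names, and prompt format are illustrative assumptions, not the document's algorithm.

```python
from datetime import datetime, timedelta

def maybe_prompt(relationship, history, now=None, quiet_period=timedelta(days=7)):
    """Return a prompt dict for one user, or None if no prompt is warranted (illustrative)."""
    now = now or datetime.utcnow()
    last = max((i.timestamp for i in history), default=None)
    if last is not None and now - last < quiet_period:
        return None                                   # recent activity; stay quiet

    user_a, user_b = relationship.users
    initiators = [i.initiator_id for i in history]
    # Nudge the user who has initiated fewer of the past interactions.
    recipient = user_a if initiators.count(user_a.id) <= initiators.count(user_b.id) else user_b
    other = user_b if recipient is user_a else user_a
    return {
        "recipient_id": recipient.id,
        "call_to_action": f"Create a mediagram for {other.name}",
        "visible_to_other_user": False,   # only the recipient sees the prompt at first
    }
```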
- a computer-implemented method includes receiving from a first user a selection of a sub-portion of a music video that includes audio from a sub-portion of a song and video that corresponds to the audio; receiving from the first user personalization content entered into a template that designates particular types of media content to be combined with the music video at particular locations of the music video; providing, to a second user who was designated by the first user, an indication that the content is available for review by the second user; and providing, to the second user and in response to a second user confirmation of the provided indication, the sub-portion of the music video in combination with the personalization content.
- the method can further include previously determining, for each of a plurality of music videos, sub-portions that will have an increased impact on a viewer as compared to other sub-portions of the music videos. Determining the sub-portions can include manually reviewing the videos with trained human classifiers. Determining the sub-portions can include identifying which portions of particular videos are played the most by visitors to one or more on-line video sites. Determining the sub-portions can include performing automatic music analysis of a plurality of different music videos to identify musical patterns previously determined to have an emotional effect on a typical listener.
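- For the play-count approach mentioned above, a minimal sketch: build a per-second replay histogram from viewer playback spans and return the most-replayed window. The log format and 30-second window are assumptions.

```python
import numpy as np

def most_played_window(play_spans, video_length_s, window_s=30):
    """play_spans: iterable of (start_s, end_s) segments viewers actually watched."""
    counts = np.zeros(int(np.ceil(video_length_s)), dtype=np.int64)
    for start, end in play_spans:
        counts[int(start):int(min(end, video_length_s))] += 1   # per-second replay histogram

    window = int(window_s)
    if len(counts) <= window:
        return 0, len(counts)
    sums = np.convolve(counts, np.ones(window, dtype=np.int64), mode="valid")
    best = int(np.argmax(sums))
    return best, best + window      # (start_s, end_s) of the most replayed window
```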
- the personalization content can include a textual message entered by the first user.
- the second user can be provided with one or more bumpers created by the first user and appended to the front, back, or both of the video sub-portion.
- a system for generating digital media files includes a digital media file repository, a frontend system, a backend system, and a digital media distribution system.
- the digital media file repository stores a plurality of preselected digital video files that are excerpts of longer digital video files.
- the plurality of preselected digital video files are encoded in a common digital video codec and are stored with metadata that identifies times within the plurality of preselected digital video files at which emotionally impactful moments occur.
- the frontend system is in communication with client computing devices.
- the frontend system receives digital media file content generation requests from the client computing devices that include parameters identifying particular preselected digital video files to be combined with personal digital media files to generate personalized digital video files.
- the personal digital media files include personal digital video files, personal digital audio files, personal text, and personal digital image files that are uploaded to the frontend system by the client computing devices.
- the personal digital video files are encoded across a plurality of digital video codecs.
- the backend system generates the personalized digital video files using the particular preselected digital video files and the personal digital media files.
- the backend system being programmed to: convert the personal digital video files from the plurality of digital video codecs to the common video codec; retrieve personalization digital media templates that designate (i) particular types of media content to be combined with the particular preselected digital video files and (ii) particular times within particular preselected digital video files at which the particular types of media content are to be combined with the particular preselected digital video files, the particular times being relative to the times within the plurality of preselected digital video files at which the emotionally impactful moments occur; assemble digital media content for the personalized digital video files using particular preselected digital video files, the digital media templates, and the personal digital media files, the personal digital media files being (i) positioned at the particular times relative to the times at which the emotionally impactful moments occur in the particular preselected digital video files, (ii) visually combined with video tracks of the particular preselected digital video files so that digital images and videos from the personal digital media files
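- The codec-normalization step described above could look roughly like the following, which shells out to ffmpeg to re-encode uploaded personal videos into one common format before assembly. The choice of H.264/AAC in an MP4 container is an assumption, not a requirement stated here.

```python
import subprocess
from pathlib import Path

def normalize_codec(src: Path, dst_dir: Path) -> Path:
    """Re-encode a personal video into an assumed common codec (H.264 video, AAC audio)."""
    dst = dst_dir / (src.stem + "_normalized.mp4")
    subprocess.run(
        ["ffmpeg", "-y", "-i", str(src),
         "-c:v", "libx264", "-preset", "fast",
         "-c:a", "aac", "-movflags", "+faststart",
         str(dst)],
        check=True,
    )
    return dst
```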
- the longer digital video files can be full-length music videos containing full-length songs and the plurality of preselected digital video files can be excerpts of the full-length music videos that include the emotionally impactful moments.
- the longer digital video files can be full-length movies and the plurality of preselected digital video files can be excerpts of the full-length movies that include the emotionally impactful moments.
- the personalized digital video files can comprise mediagrams that include a personalized message centered around the emotionally impactful moments in the particular preselected digital video files and the mediagrams can be configured to be digitally sent from one client computing device to another client computing device.
- the personal digital media files can have variable lengths of time.
- Assembling the digital media content can include adding one or more portions of digital filler content so as (i) to fit the personal digital media files with variable lengths of time at the particular times according to the digital media templates and (ii) to ensure that the video tracks and the audio tracks for the particular preselected digital video files at the times at which the emotionally impactful moments occur remain unmodified in the assembled digital media content.
- the one or more portions of digital filler content can be loops of digital content derived from the particular preselected digital video files.
- the one or more portions of digital filler content can be preselected loops of digital content.
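- The filler arithmetic described above can be sketched as follows: when a personal clip is shorter than the gap it must cover, repeat a filler loop (trimming the last repetition) so the protected audio and video at the emotionally impactful moment are never shifted. The durations and loop length in the example are illustrative.

```python
def filler_plan(slot_duration_s, personal_duration_s, filler_loop_s):
    """Return a list of (label, duration_s) filler segments needed to fill the slot."""
    gap = max(slot_duration_s - personal_duration_s, 0.0)
    if gap == 0.0:
        return []                                  # personal content fills the slot exactly
    full_loops = int(gap // filler_loop_s)
    remainder = round(gap - full_loops * filler_loop_s, 3)
    plan = [("filler_loop", filler_loop_s)] * full_loops
    if remainder > 0:
        plan.append(("filler_loop_trimmed", remainder))
    return plan

# Example: a 12 s slot, a 7.5 s personal clip, and a 2 s filler loop:
# filler_plan(12, 7.5, 2) -> [("filler_loop", 2), ("filler_loop", 2), ("filler_loop_trimmed", 0.5)]
```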
- a computer-implemented method includes receiving digital media file content generation requests from client computing devices that include parameters identifying particular preselected digital video files to be combined with personal digital media files to generate personalized digital video files, the personal digital media files including personal digital video files, personal digital audio files, personal text, and personal digital image files, the personal digital video files being encoded across a plurality of digital video codecs, the preselected digital video files being excerpts of longer digital video files, the preselected digital video files being encoded in the common digital video codec and being stored with metadata that identifies times within the preselected digital video files at which emotionally impactful moments occur.
- the method further includes converting the personal digital video files from the plurality of digital video codecs to a common video codec.
- the method further includes retrieving personalization digital media templates that designate (i) particular types of media content to be combined with the particular preselected digital video files and (ii) particular times within particular preselected digital video files at which the particular types of media content are to be combined with the particular preselected digital video files, the particular times being relative to the times within the plurality of preselected digital video files at which the emotionally impactful moments occur.
- the method further includes assembling digital media content for the personalized digital video files using particular preselected digital video files, the digital media templates, and the personal digital media files, the personal digital media files being (i) positioned at the particular times relative to the times at which the emotionally impactful moments occur in the particular preselected digital video files, (ii) visually combined with video tracks of the particular preselected digital video files so that digital images and videos from the personal digital media files replace the video tracks at the particular times, and (iii) audibly combined with audio tracks for the particular preselected digital video files so that audio from the personal digital media files are automatically mixed with the audio tracks at the particular times, wherein the video tracks and the audio tracks for the particular preselected digital video files at the times at which the emotionally impactful moments occur remain unmodified in the assembled digital media content.
- the method further includes encoding the assembled digital media content using the common video codec to generate the personalized digital video files.
- the method further includes storing the personalized digital video files.
- the method further includes transmitting the personalized digital video files to the client computing devices.
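- Taken together, the steps above amount to a backend pipeline roughly like the sketch below. Every helper on the hypothetical `backend` object is a placeholder standing in for the corresponding step, not an API defined here.

```python
def generate_personalized_video(request, backend):
    """Illustrative end-to-end pipeline: normalize, retrieve template, assemble, encode, deliver."""
    excerpt = backend.load_excerpt(request.excerpt_id)       # preselected excerpt + chill-moment metadata
    personal_videos = [backend.normalize_codec(f) for f in request.personal_video_files]
    template = backend.load_template(request.excerpt_id)     # times relative to the chill moment
    timeline = backend.assemble(
        excerpt, template,
        personal_videos + request.personal_images + request.personal_audio + request.personal_text,
    )
    output = backend.encode(timeline, codec="common")        # same common codec as the excerpt
    backend.store(output)
    backend.transmit(output, to=request.client_id)
    return output
```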
- the longer digital video files can be full-length music videos containing full-length songs and the plurality of preselected digital video files can be excerpts of the full-length music videos that include the emotionally impactful moments.
- the longer digital video files can be full-length movies and the plurality of preselected digital video files can be excerpts of the full-length movies that include the emotionally impactful moments.
- the personalized digital video files can comprise mediagrams that include a personalized message centered around the emotionally impactful moments in the particular preselected digital video files and the mediagrams can be configured to be digitally sent from one client computing device to another client computing device.
- the personal digital media files can have variable lengths of time.
- Assembling the digital media content can include adding one or more portions of digital filler content so as (i) to fit the personal digital media files with variable lengths of time at the particular times according to the digital media templates and (ii) to ensure that the video tracks and the audio tracks for the particular preselected digital video files at the times at which the emotionally impactful moments occur remain unmodified in the assembled digital media content.
- the one or more portions of digital filler content can be loops of digital content derived from the particular preselected digital video files.
- the one or more portions of digital filler content can be preselected loops of digital content.
- a computer program product encoded on a non-transitory storage medium comprises non-transitory, computer readable instructions for causing one or more processors to perform operations.
- the operations include receiving digital media file content generation requests from client computing devices that include parameters identifying particular preselected digital video files to be combined with personal digital media files to generate personalized digital video files, the personal digital media files including personal digital video files, personal digital audio files, personal text, and personal digital image files, the personal digital video files being encoded across a plurality of digital video codecs, the preselected digital video files being excerpts of longer digital video files, the preselected digital video files being encoded in the common digital video codec and being stored with metadata that identifies times within the preselected digital video files at which emotionally impactful moments occur.
- the operations further include converting the personal digital video files from the plurality of digital video codecs to a common video codec.
- the operations further include retrieving personalization digital media templates that designate (i) particular types of media content to be combined with the particular preselected digital video files and (ii) particular times within particular preselected digital video files at which the particular types of media content are to be combined with the particular preselected digital video files, the particular times being relative to the times within the plurality of preselected digital video files at which the emotionally impactful moments occur.
- the operations further include assembling digital media content for the personalized digital video files using particular preselected digital video files, the digital media templates, and the personal digital media files, the personal digital media files being (i) positioned at the particular times relative to the times at which the emotionally impactful moments occur in the particular preselected digital video files, (ii) visually combined with video tracks of the particular preselected digital video files so that digital images and videos from the personal digital media files replace the video tracks at the particular times, and (iii) audibly combined with audio tracks for the particular preselected digital video files so that audio from the personal digital media files are automatically mixed with the audio tracks at the particular times, wherein the video tracks and the audio tracks for the particular preselected digital video files at the times at which the emotionally impactful moments occur remain unmodified in the assembled digital media content.
- the operations further include encoding the assembled digital media content using the common video codec to generate the personalized digital video files.
- the operations further include storing the personalized digital video files.
- the operations further include transmitting the personalized digital video files to the client computing devices.
- the longer digital video files can be full-length music videos containing full-length songs and the plurality of preselected digital video files can be excerpts of the full-length music videos that include the emotionally impactful moments.
- the longer digital video files can be full-length movies and the plurality of preselected digital video files can be excerpts of the full-length movies that include the emotionally impactful moments.
- the personalized digital video files can comprise mediagrams that include a personalized message centered around the emotionally impactful moments in the particular preselected digital video files and the mediagrams can be configured to be digitally sent from one client computing device to another client computing device.
- the personal digital media files can have variable lengths of time.
- Assembling the digital media content can include adding one or more portions of digital filler content so as (i) to fit the personal digital media files with variable lengths of time at the particular times according to the digital media templates and (ii) to ensure that the video tracks and the audio tracks for the particular preselected digital video files at the times at which the emotionally impactful moments occur remain unmodified in the assembled digital media content.
- the one or more portions of digital filler content can be loops of digital content derived from the particular preselected digital video files.
- the one or more portions of digital filler content can be preselected loops of digital content.
- a system for providing a social media platform to enhance the quality of online social interactions among users including: first and second client computing devices that are running social media applications for the social media platform, each of the social media applications being programmed to provide a graphical user interface (GUI) that presents digital content retrieved over the internet from the social media platform and to receive user inputs via one or more graphical input elements in the GUI, the first client computing being associated with a first user and the second client computing device being associated with a second user; a digital profile repository storing (i) user profiles for a first user and a second user, and (ii) a relationship profile for a relationship between the first user and the second user; a relationship history database storing historical interactions among the first user and the second user on the social media platform; and a relationship concierge to facilitate meaningful social interactions among the first and second client computing devices, the relationship concierge being programmed to: retrieve the user profiles for the first user and the second user, and the relationship profile for the relationship between the first user and the second user from the digital profile repository, retrieve
- the social interaction prompt can include a question that is posed in the GUI on the first client computing device to the first user.
- the social interaction prompt can include the first user being directed to create a mediagram for the second user, wherein the mediagram comprises a personalized digital video segment that is automatically personalized to provide an emotionally impactful message that is particularly tailored to the relationship between the first and second users.
- the social interaction prompt can include an interactive game to be played by the first and second users.
- the GUI on the first client computing device and the GUI on the second client computing device can provide a private wall that is exclusive to the relationship between the first and second users, the social interaction prompt can initially be only visible on the private wall presented by the first client computing device to the first user, and the social interaction prompt can become visible on the private wall presented by the second client computing device to the second user only after and in combination with the response to the social interaction prompt by the first user.
- the GUI in the second client computing device can delay the second user from replying to the response for at least a threshold period of time following the response and the social interaction prompt being presented on the private wall of the second client computing device.
- the GUI in the second client computing device (i) can inactivate the graphical input elements to receive a reply from the second user until after a delayed response period has elapsed, and (ii) can present timing information identifying an amount of time remaining until the delayed response period has elapsed, and the GUI in the first client computing device can also present the timing information identifying an amount of time remaining until the delayed response period has elapsed for the second user to respond.
- the GUI in the second client computing device (i) can activate the graphical input elements to receive a reply from the second user during a delayed response period and (ii) can present timing information identifying an amount of time remaining until the delayed response period has elapsed and the reply from the second user will be transmitted to the first client computing device, and the GUI in the first client computing device can also present the timing information identifying an amount of time remaining until the delayed response period has elapsed for the second user's reply to be transmitted to the first client computing device.
- the response and the social interaction prompt can be automatically deleted from the private wall after a threshold amount of time or interactions have elapsed since they appeared on the private wall.
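- The delayed-reply behavior described above maps to a small amount of client-side state: reply controls stay inactive until the delay elapses, and both devices can display the remaining time. The class, field names, and the 24-hour delay in the example are assumptions.

```python
from datetime import datetime, timedelta

class DelayedReplyState:
    """Illustrative client-side state for a mandatory reply delay on a private wall."""

    def __init__(self, response_seen_at: datetime, delay: timedelta):
        self.response_seen_at = response_seen_at
        self.delay = delay

    def reply_enabled(self, now: datetime) -> bool:
        # Graphical input elements stay inactive until the delay period has elapsed.
        return now >= self.response_seen_at + self.delay

    def time_remaining(self, now: datetime) -> timedelta:
        # Countdown shown on both the sender's and recipient's devices.
        return max(self.response_seen_at + self.delay - now, timedelta(0))

# Example: a 24-hour mandatory delay after the response appears on the private wall.
state = DelayedReplyState(datetime(2019, 1, 1, 9, 0), timedelta(hours=24))
assert not state.reply_enabled(datetime(2019, 1, 1, 12, 0))
assert state.time_remaining(datetime(2019, 1, 1, 12, 0)) == timedelta(hours=21)
```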
- a computer-implemented method for providing a social media platform to enhance the quality of online social interactions among users comprising: retrieving, from a digital profile repository storing (i) user profiles for a first user and a second user, and (ii) a relationship profile for a relationship between the first user and the second user, user profiles for the first user and the second user, and the relationship profile for the relationship between the first user and the second user, retrieving, from a relationship history database storing historical interactions among the first user and the second user on the social media platform, historical interactions among the first user and the second user on the social media platform, and facilitating meaningful social interactions among the first and second client computing devices that are running social media applications for the social media platform, each of the social media applications being programmed to provide a graphical user interface (GUI) that presents digital content retrieved over the internet from the social media platform and to receive user inputs via one or more graphical input elements in the GUI, the first client computing being associated with the first user and the second client computing device being
- the social interaction prompt can include a question that is posed in the GUI on the first client computing device to the first user.
- the social interaction prompt can include the first user being directed to create a mediagram for the second user, wherein the mediagram comprises a personalized digital video segment that is automatically personalized to provide an emotionally impactful message that is particularly tailored to the relationship between the first and second users.
- the social interaction prompt can include an interactive game to be played by the first and second users.
- the GUI on the first client computing device and the GUI on the second client computing device can provide a private wall that is exclusive to the relationship between the first and second users, the social interaction prompt can initially be only visible on the private wall presented by the first client computing device to the first user, and the social interaction prompt can become visible on the private wall presented by the second client computing device to the second user only after and in combination with the response to the social interaction prompt by the first user.
- the GUI in the second client computing device can delay the second user from replying to the response for at least a threshold period of time following the response and the social interaction prompt being presented on the private wall of the second client computing device.
- the GUI in the second client computing device (i) can inactivate the graphical input elements to receive a reply from the second user until after a delayed response period has elapsed, and (ii) can present timing information identifying an amount of time remaining until the delayed response period has elapsed, and the GUI in the first client computing device can also present the timing information identifying an amount of time remaining until the delayed response period has elapsed for the second user to respond.
- the GUI in the second client computing device (i) can activate the graphical input elements to receive a reply from the second user during a delayed response period and (ii) can present timing information identifying an amount of time remaining until the delayed response period has elapsed and the reply from the second user will be transmitted to the first client computing device, and the GUI in the first client computing device can also present the timing information identifying an amount of time remaining until the delayed response period has elapsed for the second user's reply to be transmitted to the first client computing device.
- the response and the social interaction prompt can be automatically deleted from the private wall after a threshold amount of time or interactions have elapsed since they appeared on the private wall.
- a non-transitory computer-readable medium for providing a social media platform to enhance the quality of online social interactions among users and storing instructions, that when executed, cause one or more processors to perform operations including: retrieving, from a digital profile repository storing (i) user profiles for a first user and a second user, and (ii) a relationship profile for a relationship between the first user and the second user, user profiles for the first user and the second user, and the relationship profile for the relationship between the first user and the second user, retrieving, from a relationship history database storing historical interactions among the first user and the second user on the social media platform, historical interactions among the first user and the second user on the social media platform, and facilitating meaningful social interactions among the first and second client computing devices that are running social media applications for the social media platform, each of the social media applications being programmed to provide a graphical user interface (GUI) that presents digital content retrieved over the internet from the social media platform and to receive user inputs via one or more graphical input elements in the GUI, the first client computing being
- the social interaction prompt can include a question that is posed in the GUI on the first client computing device to the first user.
- media content can be personalized in ways that ensure that synchronization between audio and video portions of the underlying media content is not disrupted.
- video and audio can get out of sync.
- media content personalization can be streamlined to provide novice users with the ability to readily create impactful personalized content.
- a user interface can be presented to users that narrows the field of options for personalization down to a limited number through the use of preselected media content excerpts, personalization templates, guided personalization steps, and other features to ensure emotionally impactful personalized content is created.
- social platforms can facilitate improved and more meaningful social interactions and relationships between users through a variety of features, such as private walls, relationship concierges, time delays for interactions, personalized media content distribution, time-limited social content, and/or combinations thereof.
- social interactions can be guarded and reserved.
- a primary mechanism for interacting with other users is either private walls or private group walls
- user interactions can be with smaller and more intimate sets of users, which can help users drop their guard and interact more naturally/honestly.
- relationship concierges can automate and assist users in building and maintaining strong relationships by prompting users with ways to interact with each other.
- personalized media content creation and distribution on a social platform can assist users in conveying emotionally impactful messages that may otherwise be difficult to express through traditional social media interactions (e.g., posts, images, text).
- mandatory time delays for interactions between users can alleviate the pressure, stress, and burden that users feel to promptly respond to interactions in order to avoid expressing disinterest with a late response or no response at all.
- time-limited social content can additionally promote more natural/honest social interactions (helping users drop their guard) by ensuring that social interactions on the platform will not persist in perpetuity, but instead will be inaccessible to both users after a period of time or after a series of interactions.
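- The time-limited behavior described above could be implemented with a purge routine along these lines, removing interactions once they exceed a threshold age or are buried under a threshold number of newer interactions. The record structure and thresholds are assumptions.

```python
from datetime import timedelta

def purge_wall(interactions, now, max_age=timedelta(days=30), max_newer=50):
    """interactions: list of objects with a .timestamp attribute, ordered oldest to newest."""
    kept = []
    total = len(interactions)
    for index, item in enumerate(interactions):
        newer_count = total - index - 1                 # how many interactions came after this one
        too_old = now - item.timestamp > max_age        # threshold amount of time elapsed
        buried = newer_count > max_newer                # threshold number of interactions elapsed
        if not (too_old or buried):
            kept.append(item)
    return kept
```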
- the app can force both senders and recipients to be reflective by imposing a delay before a message is sent or received.
- the time delays introduce a component of “scarcity” which enforces reflection, anticipation and attention to detail, fostering better relationships.
- FIG. 1 is a block diagram of an example system for generating personalized media content.
- FIG. 2 is a block diagram showing an example technique for creating and delivering a personalized mediagram to a recipient.
- FIG. 3 is a block diagram of an example system for generating and consuming mediagrams.
- FIGS. 4A-F are screenshots that collectively show an example sequence of steps for creating and distributing a mediagram.
- FIGS. 5A-M are block diagrams showing example assemblies of mediagrams.
- FIG. 6 is a conceptual diagram of an example system for generating personalized media content.
- FIGS. 7A-B are a flowchart of an example technique for generating personalized videos.
- FIG. 8A is a conceptual diagram of an example social media platform for providing improved and more meaningful social interactions among users.
- FIG. 8B is a conceptual diagram of another example social media platform for providing improved and more meaningful social interactions among users.
- FIG. 9A is an example system for providing an improved social media platform with more meaningful social interactions among users.
- FIG. 9B is a diagram of an example system for providing an improved social media platform with more meaningful social interactions among users.
- FIG. 9C depicts an example system for providing an improved social media platform with more meaningful social interactions among users.
- FIG. 10 is a flow chart with user interfaces for establishing an initial connection between users on a social media platform.
- FIGS. 11A-B are screenshots of example user interfaces on an example mobile computing device for interacting with other users via private walls on a social platform.
- FIGS. 11C-F present example specific user interface features that can be selected for presentation to users.
- FIGS. 12A-H are screenshots of an example process flow for a relationship concierge facilitating and improving social interactions among users via private walls on a social platform.
- FIGS. 13A-C are screenshots of an example user interface on a mobile computing device for viewing a user's friends and the corresponding interaction delays until another relationship concierge prompt is expected.
- FIG. 14A is a conceptual diagram of an example personal concierge system and algorithm for facilitating and improving user relationships on a social network.
- FIG. 14B is a diagram of an example system to vary content that is selected for presentation to users.
- FIG. 14C is a screenshot of an example “one-click” feedback interface in which content is presented with selectable graphical elements that the user can select with a single click/selection action to provide feedback related to the content.
- FIGS. 15A-D are screenshots of a relationship concierge being applied to other social platforms providing predominantly open communication among broad groups of users.
- FIG. 16 is a diagram depicting creation and use of a private group wall on a social platform to improve and enhance meaningful social interactions.
- FIGS. 17A-H are screenshots of an example user interface on a computing device for users to express and interact with others regarding their emotional well-being.
- FIGS. 18A-B are flowcharts of example techniques for determining and transmitting prompts to specific relationship private walls on a social platform.
- FIG. 19 is a flowchart of an example technique for determining and transmitting delays between interactions on a social platform.
- FIG. 20 is a flowchart of an example technique for determining relationship ratings on a social platform.
- FIGS. 21A-B are flowcharts of example techniques for creating and using private group walls on a social platform.
- FIG. 22 is a block diagram of example computing devices.
- FIG. 1 is a block diagram of an example system 100 for generating personalized media content, such as mediagrams.
- a mediagram can be personalized media content that is configured in a particular manner to convey an emotionally impactful message between users.
- Mediagrams can include, for example, underlying media content that is combined with other, personal media content to provide personalization to the underlying media content.
- Media content can include, for instance, music and/or video excerpts, movie excerpts, music files, images, television clips (e.g., SNL skits), viral videos (e.g., home videos that have been popularized), concert videos, and/or other types of media.
- a mediagram can be an excerpt from a music video that is personalized with text, images, audio, and video.
- a mediagram that is produced by the system 100 can be a ready-to-play presentation of media that is prepared by a sender 102 and sent to at least one recipient 104 .
- the mediagram can include, for example, music files and other media that are combined into the mediagram in a way that is personalized by the sender 102 .
- Personalization can include adding personalized messages and/or other elements to the media, including text, audio, images, video, etc., which can overlay and/or adjoin segments of the media.
- personalization can be a caption that precedes or accompanies an image, a video, or some other media segment.
- the system 100 can provide, for presentation to the sender 102 , different media segments that can be selected by the sender 102 for personalization.
- the sender 102 can select from among media content 106 a - 106 d (e.g., music videos, music, movies, videos).
- the media content 106 a - 106 b may be presented to the sender 102 , for example, upon execution of a search query (e.g., to find songs for specific artists, titles, subjects, genres, etc.).
- Selection of media content can be made, for example, from the sender's library of downloaded and/or owned media content, generated from a subscribed list of available songs, and/or in some other way.
- Presentation of the media content 106 a - 106 d, as well as other aspects of a user interface for creating mediagrams, can be presented on a user device 107 of the sender 102 .
- media content and/or excerpts of media content can be pre-selected to provide a chill moment, which can be a point in a song or other media that has been shown to provide a chill to a viewer of the media (e.g., moment that provides tingles, a chill running down one's spine, a significant emotional and/or physical response, or some other reaction by the recipient 104 ).
- Example chill moments in songs include a particular note or passage in a song, particular lyrics, and/or other features that are otherwise emotionally impactful upon users.
- Example chill moments in a movie or video can include a chase scene or an important scene, such as a celebratory moment, the death of a character, or some other significantly impactful segment.
- media content with chill moments can be identified in a catalog of chill moments, which may be stored in (and available from) a proprietary catalog.
- Each chill moment, identified for a particular song or movie for example, can identify a point in the particular song or movie that produces a “chill” reaction (in an audience of the media) that results in a sensation that is similar to feeling cold, getting goosebumps, having one's hair stand on end, or some other physiological reaction.
- Such reactions can include, for example, an increased heart rate, an increased respiration rate, an increased skin conductance (or “galvanic skin response”), or some other physiological response.
- Sources for chills can include, for example, music (e.g., most potent), visual arts, speeches (e.g., notable speechmakers), beauty (or other breath-taking appearance), or physical contact.
- the intensity and/or effect of chills can be affected by factors such as mode, volume, tempo, mood, tone, and pitch, or other factors that may help to convey or amplify emotion.
- chill moments can only exist for music or video that have already been experienced before, such as by at least one user.
- the sender 102 selects the media content 106 c from among the media content 106 a - 106 d. While the selected media content 106 c can refer to the entire media content (e.g., the entire song, the entire movie, the entire music video), the sender 102 can select from among media content excerpts 106 c ′- 106 c ′′′ (segments of the media content 106 ), each of which may include the chill moment that the sender 102 wishes to share with the recipient 104 .
- the excerpts 106 c ′- 106 c ′′′ can be pre-designated for the media content 106 c (e.g., manually designated, crowd sourced) and/or automatically identified (e.g., waveform analysis).
- Each of the other media content 106 a - b and 106 d can additionally include one or more excerpts that are proposed to the sender 102 for selection.
- the sender 102 selects the excerpt 106 c ′ for personalization.
- the media content excerpt 106 c ′ has yet to be personalized (e.g., just a segment of a music video without personalization).
- the sender 102 can be prompted to identify personal media content to add to the selected excerpt 106 c ′, such as overlaying a portion of the excerpt 106 c ′ (e.g., photo overlaying a portion of a music video, text overlaying a portion of a movie), being presented adjacent to the selected excerpt 106 c ′ (e.g., video that is played before or after the excerpt 106 c ′), and/or other combinations with the excerpt 106 c ′.
- the sender 102 can be guided through selection of media content for the excerpt 106 c ′ by the sender's device 107 , which can be programmed, for example, to use one or more personalization templates to assist the sender 102 in the selection of personalized media content to maximize the emotional impact of the personalized media content.
- the sender 102 can be prompted to enter a textual message for the recipient 104 , then to provide up to 10 seconds of a personalized video message to the recipient 104 , and then to provide up to 3 photos that include both the sender 102 and the recipient 104 .
- the sender 102 selects the example personal media content 108 (e.g., photos, videos, audio, text) to be used to personalize the excerpt 106 c ′ for the mediagram.
- a server system 112 can receive the selection of the media content excerpt 106 c ′ along with the personal media content 108 and can generate a mediagram to be delivered to the recipient 104 .
- Such generation can include, for example, referencing one or more personalization templates to determine how to combine the excerpt 106 c ′ with the personal media content 108 (as well as referencing particular instructions/designations for the mediagram made by the sender 102 ).
- the generation can also be completed using digital rights management code, which can be encoded into the resulting mediagram to manage aspects of copyrights and payment of royalties and/or other fees.
- the mediagram can be output, for example, in the form of deliverable media 110 (e.g., video file, audio file) that results from audio, video, images, text, and/or other media content items being assembled by a server 112 .
- the deliverable media 110 can be a single video file that is transmitted to a computing device 114 for the recipient 104 .
- the component parts of the mediagram, including the associated media segments, can be sent individually (e.g., not in the form of deliverable media 110 ) and assembled by the computing device 114 for presentation to the recipient 104 .
- the deliverable media 110 can be provided to the device 114 using one or more features to protect against piracy.
- the deliverable media 110 can be provided in a “lock box” to protect media and avoid piracy.
- the lock box may include a feature that prevents consumption of the mediagram unless the recipient 104 provides credentials or some other form of authentication.
- the mediagram can include features that control the number of times and/or a timeframe over which the mediagram is presented, such as a single time or a limited number of times, or an expiration time that limits presentation of the mediagram to a time-limited viewing.
- the mediagram deliverable 110 can include digital rights management (DRM) features and/or other techniques for copyright protection of digital media, including restricting copying of media and preventing unauthorized redistribution.
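- As a simplified illustration of how such view-count and expiration limits might be enforced at playback time, the following Python sketch checks a hypothetical lock-box policy before allowing a mediagram to be presented (the data structure and function names are assumptions for illustration, not the system's actual interface).

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Optional

@dataclass
class PlaybackPolicy:
    """Hypothetical lock-box metadata attached to a mediagram deliverable."""
    max_views: int        # e.g., 1 for single-time viewing
    expires_at: datetime  # absolute expiration time for time-limited viewing
    views_used: int = 0

def can_play(policy: PlaybackPolicy, now: Optional[datetime] = None) -> bool:
    """Return True only if the mediagram is still within its viewing window."""
    now = now or datetime.utcnow()
    if now >= policy.expires_at:
        return False  # the time-limited viewing window has lapsed
    if policy.views_used >= policy.max_views:
        return False  # the allotted number of presentations is exhausted
    return True

def record_view(policy: PlaybackPolicy) -> None:
    """Log one presentation of the mediagram against the policy."""
    policy.views_used += 1

# Example: a single-view mediagram that expires 48 hours after delivery.
policy = PlaybackPolicy(max_views=1, expires_at=datetime.utcnow() + timedelta(hours=48))
if can_play(policy):
    record_view(policy)  # present the mediagram, then record the view
```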
- the device 114 of the recipient 104 may be required to install/run/load a specialized/authorized media player (or an application providing similar functionality) to view content of the mediagram.
- the mediagram deliverable 110 can be distributed to the device 114 in any of a variety of ways, such as through an account that the recipient 104 may have on the server system 112 (e.g., push notification provided to mobile app on the device 114 that is hosted by the server system 112 ), or by transmitting a link to the deliverable 110 (e.g., sending an email including a uniform resource locator (URL) for the deliverable 110 , sending a text message including the URL for the deliverable).
- Other ways of providing notification to the recipient 104 that the deliverable 110 is available and ready for him/her to access it are also possible.
- FIG. 2 is a block diagram 200 showing an example technique 202 for creating and delivering a personalized mediagram 204 to a recipient.
- the mediagram 204 is created using a music video as the underlying media content that is being personalized.
- a user can start by selecting a song that will be personalized ( 206 ). For example, referring to FIG. 1 , the sender 102 selects a song from pre-made categories or uses a search feature to identify songs by artist, song title, or occasion/subject (e.g., Christmas songs, love songs, etc.), or in some other way. The user can then create a personal message through personal media content that the user selects ( 208 ). For example, referring to FIG. 1 , the sender 102 can designate text, audio, photos, videos, and/or other personal media content to be added and/or otherwise combined with the selected song to generate the mediagram 204 . As described above with regard to FIG. 1 ,
- the user can be guided through the selection and designation of the personal media content for the mediagram, such as through the use of personalization templates that can identify specific types of media content that should be added to particular locations of the song excerpt to maximize the emotional impact of the mediagram.
- the relationship concierge can be used to identify and select personal media content for the mediagram.
- the relationship concierge can be used alone and/or in combination with other features guiding personal media content selection for the mediagram, such as the personalization templates, with the systems, techniques, and devices described throughout this document.
- the example mediagram 204 can include personalization that is added to an original music video 222 for the song selected by the user ( 206 ).
- the mediagram 204 may be for the entire music video 222 or just a portion of the music video 222 , such as an excerpt of the music video 222 that has a chill moment in the song as its focal point.
- the mediagram 204 includes audio and video tracks, one or both of which can be personalized by the user at various points in the mediagram 204 .
- the original audio track 220 (not personalized) from the music video runs the entire length of the mediagram 204 , but the original video track (not personalized) for the music video 222 runs for only the middle portion 216 of the mediagram 204 .
- the video tracks for the beginning 214 and end 218 of the mediagram 204 in this example are personalized with personal media content designated by the user.
- the video track for the beginning portion 214 of the mediagram 204 includes a written message and a video 214 a
- the video track for the end portion 218 of the mediagram 204 includes photos 218 .
- Although the original audio 220 for the music video 222 runs the entire length of the mediagram 204 , it is combined (blended) with personalized audio 214 b that corresponds to the personalized video 214 a at the beginning of the mediagram 204 .
- the user can be guided through the process of selecting personal media content ( 214 a - b, 218 ) for personalizing the music video 222 , such as with a personalization template to assist the user in identifying the best type of media content to select to make the mediagram 204 emotionally impactful upon the recipient.
- the mediagram 204 can be automatically assembled so as to generate a high-quality mediagram deliverable that combines the personal media content with the original music video 222 .
- These steps can be performed automatically by a computing device (e.g., client computing device, server system) without the user having to designate how the original or personal media content should be assembled, let alone go through the process of laying out audio and video tracks for the mediagram 204 .
- the personal media content ( 214 a - b, 218 ) can be automatically positioned at or around the chill moment in the music video 222 so that the mediagram 204 will be emotionally impactful for the recipient with regard to the sender and the relationship between the sender and recipient.
- the assembly of the mediagram 204 with original and personal media content is one example. Other configurations and arrangements of personal media content with regard to original media content are also possible.
- the mediagram 204 can be sent to the recipient ( 210 ), such as by specifying the recipient's contact information (phone number, email, social network identifier, etc.). Once specified, the mediagram 204 can be delivered either directly (e.g., file transmission) or indirectly (e.g., link transmission, notification) to the recipient along one or more communication channels ( 212 ), such as in-app communications, email, or text message. Depending on the delivery method, the recipient can be prompted to send a response (e.g., via a social platform), download an application (to render the mediagram), or to subscribe to a mediagram service, some or all of which may be free or have a cost to the recipient and/or sender.
- FIG. 3 is a block diagram of an example system 300 for generating and consuming mediagrams.
- the system 300 can host a mediagram creation and delivery service that can provide services to users, such as the generation, storage, and distribution of mediagrams that are created by the users.
- the system 300 can use license agreements 302 that are held with licensors 304 , such as music, movie, and other media content copyright owners.
- the system 300 can refer as needed to the license agreements 302 in order to remain protected under copyright and other restrictions and laws associated with the owners who license content to be used in mediagrams.
- License agreements with copyright holders can dictate what media content is provided by the mediagram server 308 for users to incorporate into their mediagrams.
- a media management system 306 can be dedicated to ensuring that all of the original media content in a library used by the mediagram server 308 is currently licensed with the licensors A-C ( 304 a - c ).
- the media management system 306 can maintain the license agreements 302 and a library of licensed original media content that have been downloaded from the licensors 304 a - c.
- the media management system 306 can additionally generate licensed media content excerpts that include chill moments and provide them to the mediagram server 308 , which can store them in a repository 312 a of licensed media content. Over time, as some media content falls out of license, the media management system 306 can purge unlicensed media content from the media file storage 312 a that is used by the mediagram server 308 .
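- A minimal sketch of such a purge routine is shown below; the in-memory dictionaries and function name are illustrative assumptions standing in for the media management system's license records and the media file storage 312 a.

```python
from datetime import date
from typing import Dict

# Hypothetical in-memory stand-ins for the license records and the
# licensed-media repository; a real system would use databases.
license_expirations: Dict[str, date] = {
    "excerpt-001": date(2025, 1, 1),
    "excerpt-002": date(2023, 6, 30),
}
media_repository: Dict[str, bytes] = {
    "excerpt-001": b"...media bytes...",
    "excerpt-002": b"...media bytes...",
}

def purge_unlicensed(today: date) -> list:
    """Remove excerpts whose license has lapsed; return the purged IDs."""
    purged = [mid for mid, expires in license_expirations.items() if expires < today]
    for media_id in purged:
        media_repository.pop(media_id, None)
    return purged

print(purge_unlicensed(date(2024, 1, 1)))  # -> ['excerpt-002']
```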
- a mediagram server 308 can store and manage mediagrams generated by users 310 , who may subscribe to or otherwise be enrolled with the mediagram service.
- the mediagram server 308 can provide a server system through which users can create and distribute personalized mediagrams using preselected and licensed content from the media management system 306 .
- the mediagram server 308 can store excerpts of songs, videos and other content obtained from the media management system 306 .
- users 310 can request and/or access new music and other content from the media management system 306 , and use the content to create mediagrams.
- the users 310 can also stream and/or distribute their mediagrams at any time.
- the mediagram server 308 can include tools, including templates, that allow a user who is not proficient in media editing to easily create mediagrams
- the process of generating a mediagram can include identifying chill moments, “hooks,” and/or other features in the media content (e.g., punchline of a joke) that elicit one or more target emotional responses in a user, and then generating excerpts that include the chill moments, hooks, or other features eliciting emotional responses in users.
- a hook is a musical idea, often a short riff, passage, or phrase, that is used in popular music to make a song appealing and to “catch the ear of the listener.”
- the term “hook” generally applies to popular music, especially rock, R&B, hip hop, dance, and pop.
- chill moment identification can be done automatically using an algorithm based on a variety of different factors, such as changes in tempo, mode, volume, mood, tone, pitch, etc.
- Chill moment identification can also be done using crowd-sourced identification of popular moments in songs, such as using information from previous user selections of (or identification of) favorite song parts. Identification can be done manually by trained professionals. Hook identification and other emotion-eliciting feature identification can be performed in similar ways and can additionally and/or alternatively be used to generate excerpts in the systems, techniques, and devices described throughout this document.
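- The following sketch illustrates, in simplified form, how automatic identification might score candidate chill moments by combining a loudness-change heuristic with crowd-sourced replay counts; the weights, per-second features, and helper names are assumptions for illustration rather than the actual algorithm.

```python
from typing import List

def loudness_jumps(rms_per_second: List[float]) -> List[float]:
    """Score each second by how sharply loudness rises relative to the prior second."""
    scores = [0.0]
    for prev, cur in zip(rms_per_second, rms_per_second[1:]):
        scores.append(max(0.0, cur - prev))
    return scores

def chill_moment_candidates(rms_per_second: List[float],
                            replay_counts: List[int],
                            top_n: int = 3) -> List[int]:
    """Combine a signal heuristic with crowd-sourced replays (illustrative weights)."""
    jumps = loudness_jumps(rms_per_second)
    max_jump = max(jumps) or 1.0
    max_replay = max(replay_counts) or 1
    combined = [0.6 * (j / max_jump) + 0.4 * (r / max_replay)
                for j, r in zip(jumps, replay_counts)]
    # Return the indices (seconds) with the highest combined scores.
    return sorted(range(len(combined)), key=lambda i: combined[i], reverse=True)[:top_n]

# Example: a quiet verse building into a loud chorus around second 6,
# which listeners also replay most often.
rms = [0.2, 0.2, 0.3, 0.3, 0.4, 0.5, 0.9, 0.9, 0.8, 0.7]
replays = [1, 1, 2, 2, 3, 5, 14, 12, 9, 4]
print(chill_moment_candidates(rms, replays))  # e.g., [6, 7, 5]
```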
- storage containers 312 can store media and associated files for use in generating mediagrams.
- media file storage 312 a can store the actual media used in the mediagrams (e.g., media content excerpts with chill moments), including media that is obtained from the media management system 306 .
- Custom user content storage 312 b can store personalizations that users have added to their mediagrams and/or the finalized mediagrams themselves.
- Template storage 312 c can store templates that can be used by users creating mediagrams, which can simplify the process of creating mediagrams and allow users having little or no experience in combining media to nonetheless generate mediagrams. Templates can identify and/or suggest (to the user) specific types of personalization to insert at various points in media, and suggestions on what to insert for different media content categories. Templates can be specific to media files, meaning that each media file (e.g., song, video, etc.) can have one or more templates that help coordinate a user-identified personalization to the specific chill moment in the media so that the chill moment is the most impactful/powerful. Additional templates can exist for different categories, such as romance, celebration, birthday, or other categories.
- Filler media storage 312 d can include snippets or other lengths of audio/visual media that can be inserted during some or all of the personalization to accommodate variable length personalized content.
- the media content excerpts that are used to generate mediagrams can be centered around chill moments in the media content excerpts, which can be designed to occur in a middle to middle-end portion of the excerpt. Accordingly, changing the length of a mediagram to accommodate variation in the length of the personal content to be added to the excerpt can throw off the positioning of the chill moment in the mediagram.
- Fillers can be added to the beginning and/or the end of the excerpt to accommodate for variation in the length of the mediagram due to variable personal content without disrupting the positioning of the chill moment within the mediagram.
- Filler content can be looped portions (e.g., one or two bars) of instrumentals in musical content, musical content with lyrics, or a few seconds of video content. Filler content can include copyright free content that matches up well with the media content.
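- As an illustration of the filler arithmetic, the sketch below computes how much filler to prepend and append so that variable-length personal content does not shift where the chill moment falls in the finished mediagram (the parameter names and example numbers are illustrative assumptions).

```python
def filler_durations(excerpt_len: float,
                     chill_offset_in_excerpt: float,
                     intro_personal_len: float,
                     outro_personal_len: float,
                     target_chill_time: float,
                     target_total_len: float) -> tuple:
    """Compute (intro_filler, outro_filler) in seconds, clamped at zero,
    so that the chill moment still lands at target_chill_time and the
    assembled mediagram runs target_total_len overall (illustrative only)."""
    # Time before the chill moment = intro personalization + intro filler +
    # the part of the excerpt that precedes the chill moment.
    intro_filler = max(0.0, target_chill_time - intro_personal_len - chill_offset_in_excerpt)
    used = intro_personal_len + intro_filler + excerpt_len + outro_personal_len
    outro_filler = max(0.0, target_total_len - used)
    return intro_filler, outro_filler

# Example: a 30s excerpt whose chill moment sits 18s in; the sender added a 7s
# video intro and 5s of photos at the end; the template wants the chill moment
# at 0:40 in a 75s mediagram.
print(filler_durations(30.0, 18.0, 7.0, 5.0, 40.0, 75.0))  # -> (15.0, 18.0)
```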
- the mediagram server 308 can include web servers, various data stores, and services for mediagram instances 314 , which can be extensible and scalable (e.g., cloud computer system).
- web servers 315 can provide access to external entities that access mediagrams.
- a media metadata data store 314 a can include metadata associated with media stored in the media file storage 312 a.
- a user account database 314 b can identify users to the mediagram system and the users' account information.
- a mediagram detail database 314 c can include definitions of mediagrams that have been generated by users.
- the mediagram server 308 can include at least one cloud-based server 316 , such as implemented or provided by Amazon Elastic Load Balancers (ELB), that distributes mediagrams (or provides access) to user computing devices 324 .
- user computing devices 324 that are mobile devices can be used by recipients 104 to receive distributed mediagrams, such as through one or more applications that reside on a mobile device.
- Desktop implementations of the user computing devices 324 , for example, can access mediagrams through a front end 318 implemented using Amazon Web Services (AWS).
- User computing devices 324 of users 310 can be used by both the sender and/or a recipient of a mediagram.
- FIGS. 4A-F are screenshots that collectively show an example sequence of steps for creating and distributing a mediagram.
- FIGS. 4A-E show screenshots captured during use of a mobile application for creating a mediagram, which can also be done through a web application in a web browser, and
- FIG. 4F shows an example mediagram presented in a social network.
- FIG. 4A shows an initial display of an example interface 400 used to identify a media excerpt to use for a mediagram.
- Using a search control 402 , a user can search by song, album, artist, or in other ways.
- controls 404 can be used to browse songs of various categories, such as a romance category 406 that includes songs related to romance.
- the songs (or other media) that are searchable in the interface 400 can include songs (or other media) stored in the media management system 306 .
- the user selects the romance category 406 .
- FIG. 4B shows a display of the example interface 400 that presents media excerpts 410 that are in the romance category 406 .
- Each of the media excerpts 410 can include a chill moment that it is intended to leverage to be emotionally impactful as part of a mediagram.
- the media excerpt 408 for a Taylor Swift song is selected from a list 410 . Similar or different types of lists can be presented when the search control 402 is used.
- FIG. 4C shows a display of the example interface 400 in which the user can preview the song.
- the user can preview the media excerpt 408 .
- a create control 412 can be used to initially populate a new mediagram with the selected song.
- FIG. 4D shows a display of the example interface 400 in which the user can personalize the mediagram.
- the user can add text, photos, videos, and/or other types of personal media to the mediagram. Selecting a particular one of the controls 414 , for example, can result in the user being guided through selection, using a template that is specific to the media excerpt 408 (e.g., Taylor Swift song) and/or the romance category 406 .
- the template may suggest that the user obtain a photo of the user with the recipient, then include a ten-second personalized message or text expressing the user's feelings.
- the system can then automatically insert the photo and personalized message in the right locations relative to the media excerpt 408 , generating a personalized mediagram without requiring the user to have media editing knowledge or skills. For example, there is no need for the user to figure out where the photo and personalized message should go (e.g., relative to the chill moment), how to edit video/audio, or how to perform other tasks. As such, automatically inserting the photo and personalized message can create a professional-looking video compilation, with complicated details of video editing being handled automatically for the user. Additional controls in the interface 400 can allow the user to preview and view the mediagram once completed.
- FIG. 4E shows a display of the example interface 400 in which the user is distributing the mediagram.
- the user is sending the mediagram by email, but other distribution channels, including sending via social media, are available through controls 414 .
- the user can select the recipients of the user's mediagram from a contact list 416 .
- Contacts in the contact list 416 can be annotated, such as to differentiate between contacts who have mediagram accounts (e.g., designated with mobile phone/app icons 420 ) and other contacts who do not have mediagram accounts (e.g., designated by a grayed-out user icon). If a particular recipient has a mediagram account, then the mediagram can be delivered via their account. Otherwise, if a particular recipient does not have a mediagram account, then the mediagram can be delivered via available contact options (e.g., email, text, social network, etc.).
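- A simple sketch of that delivery routing is shown below; the contact fields and channel labels are hypothetical and stand in for whatever account and contact data the platform actually stores.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Contact:
    name: str
    has_mediagram_account: bool
    email: Optional[str] = None
    phone: Optional[str] = None

def delivery_channel(contact: Contact) -> str:
    """Pick a delivery channel for a mediagram (illustrative routing only)."""
    if contact.has_mediagram_account:
        return "in-app notification"   # delivered via the recipient's account
    if contact.email:
        return "email with link"       # URL to the hosted deliverable
    if contact.phone:
        return "text message with link"
    return "no channel available"

print(delivery_channel(Contact("User B", has_mediagram_account=True)))
print(delivery_channel(Contact("User C", has_mediagram_account=False, email="c@example.com")))
```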
- the user can optionally elect to send a gift with the mediagram.
- selection of a respective control from controls 422 can lead to additional user interface elements that allow the user to designate a monetary amount or a selection of a gift that is included (e.g., using a link or an attachment) with the mediagram.
- Gifts can also be integrated into the mediagram, such as with a link that can navigate the recipient to a web page or other resource from which the gift can be redeemed.
- the user is also given the ability to download and/or purchase the song or video for their personal use.
- FIG. 4F shows a display of the example mediagram 424 displayed on a mobile device of a recipient.
- the mobile device can present a mediagram entry 426 on a social network page of a User B for the mediagram 424 created by a User A, as indicated by a mediagram social network header entry 428 .
- the mediagram entry 426 can be generated, for example, if the user selected a social network control from the controls 414 in order to share the mediagram with one or more recipients who are friends of the user in the social network.
- a mediagram song title 430 can identify the media excerpt 408 (e.g., Taylor Swift song) selected by the user.
- a mediagram description 432 can include, for example, a number of underlined portions that are links to the content, which can help drive cross-user promotion based on the mediagram content.
- the underlined portions can include, for example, an artist link 432 a, a song link 432 b for the media excerpt 408 , an album link 432 c, a gift link 432 d (e.g., if a gift was selected using controls 422 ), and an app link 432 e by which a mediagram application can be downloaded.
- FIGS. 5A-M are block diagrams showing example assemblies of mediagrams.
- the block diagrams depicted in FIGS. 5A-M present solutions to various technical problems for mediagram creation.
- a variable amount of user-supplied personalized content (photos, video, text) can affect the length of the audio portion of the licensed music video playing prior to and after the primary licensed video clip, which can present a problem for presenting a chill moment at the right time within a mediagram.
- Audio pre-rolls can be used to account for and solve this.
- the goal is to provide an audio overlay for the user supplied personalized content that transitions seamlessly into and out of the licensed video clip.
- FIGS. 5A-M use pre-rolls to solve for variable personalized content.
- video and audio tracks for original content can be licensed together or separately, depending on contractual agreements with various licensors, which can create technical hurdles in generating a mediagram file that is compliant with licensing agreements.
- some agreements may permit the audio track from a video (e.g., movie, music video) to be licensed separately (and at a lower price point) than the price for licensing the audio and video together.
- some agreements may not permit such bifurcation of audio and video licensing rights.
- some agreements may grant licenses to master audio loops from a song that could be used for pre-roll fillers, but some agreements may not.
- Some agreements may also grant licenses to lead in or out of the licensed media content with other content (e.g., third party content, in-house generated content), but others may not.
- the block diagrams depicted in FIGS. 5A-M provide a variety of approaches and file formats for generating mediagrams to accommodate and comply with a wide variety of licensing restrictions imposed by agreements with licensors.
- audio and video track synchronization, particularly when the tracks are not licensed together throughout the entirety of the mediagram, can be problematic.
- several of the block diagrams depicted in FIGS. 5A-M insert blank video on the audio-only licensed portions so that a single video file can be generated and used for personalization. For example, if an excerpt for a music video includes a first portion that is licensed for audio only—meaning that the mediagram system has an audio file for the first portion—and a second portion that is licensed audio and video—meaning that the system has a video file for the second portion—there may be potential issues with synchronization if the first and second portions are adjoined to each other with various personalized user content.
- a blank video track can be combined with the audio file for the first portion of the excerpt to generate a video file for the first portion of the excerpt. Then, the video file for the first portion and the video file for the second portion can be assembled together to ensure proper synchronization between the first and second portions. After generating this singular video file, the personalization can be added and combined to generate the personalized mediagram.
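- One plausible way to carry out those steps is sketched below using the ffmpeg command-line tool invoked from Python: a black video track is paired with the audio-only portion, and the resulting file is concatenated with the audio-and-video portion before personalization is added. The file names, resolution, and codec choices are assumptions for illustration.

```python
import subprocess
from pathlib import Path

def make_blank_video(audio_path: str, out_path: str) -> None:
    """Pair an audio-only licensed portion with a black video track so that both
    portions of the excerpt become video files with the same layout.
    (Resolution, frame rate, and codecs here are illustrative choices.)"""
    subprocess.run([
        "ffmpeg", "-y",
        "-f", "lavfi", "-i", "color=c=black:s=1280x720:r=30",  # synthetic black video
        "-i", audio_path,                                        # licensed audio-only portion
        "-shortest",                                             # stop when the audio ends
        "-c:v", "libx264", "-pix_fmt", "yuv420p", "-c:a", "aac",
        out_path,
    ], check=True)

def concat_portions(portion_paths: list, out_path: str) -> None:
    """Join the portions into one video file using ffmpeg's concat demuxer;
    assumes all portions share resolution, frame rate, and codecs."""
    list_file = Path("portions.txt")
    list_file.write_text("".join(f"file '{p}'\n" for p in portion_paths))
    subprocess.run(["ffmpeg", "-y", "-f", "concat", "-safe", "0",
                    "-i", str(list_file), "-c", "copy", out_path], check=True)

# Hypothetical file names: the audio-only first portion and the
# audio+video second portion of a licensed excerpt.
make_blank_video("first_portion_audio.m4a", "first_portion_video.mp4")
concat_portions(["first_portion_video.mp4", "second_portion.mp4"], "excerpt_combined.mp4")
```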
- In FIG. 5A , the mediagram includes a licensed video and audio excerpt 504 that is combined with personalization sections 502 and 506 consisting of instrumental music clip loops and audio chorus clips, respectively.
- In FIG. 5B , a licensed video and audio excerpt 512 is combined with personalization sections 508 and 510 with audio files that are from different sources.
- the audio files can consist of in-house created music and/or music from the same source as the excerpt 512 (e.g., the excerpt 512 and the audio 510 can be from the same movie).
- In FIG. 5C , licensed audio 518 is combined with licensed visuals, which are then combined with user-designated audio clips used in the personalization sections 514 , 516 .
- the user-designated audio clips can be, for example, music that is uploaded by the user or selected using a mediagram template for the licensed audio 518 .
- the video excerpts can consist of label approved visuals 518 (e.g., album art, concert photos/pictures, concert video, photo shoots, etc.). Label approved visuals 518 can be used, for example, when there is not an available music video, or when label approved visuals are more appropriate than the music video.
- In FIG. 5D , personalization section 520 consists of a complete audio section and personalization section 522 consists of looping segments of a song, e.g., associated with licensed video audio excerpt 524 .
- In FIG. 5E , there is a solid piece of licensed or user-generated audio 526 over personalization 528 .
- a licensed video audio excerpt 530 is preceded by personalizations 532 and 534 , each having a duration of a template-specified or template-aided length of time 536 , and each being generated using a template filter 538 that can automatically fit personalization content around a video excerpt.
- the duration of the personalized pictures and text can be tied to a set phrase from the song, but depending on the length of the phrase, this time can vary. For instance, if the phrase is three seconds long, the app will use three-second increments to determine the length of each personalized option. This can seamlessly lead into the excerpt, and can create a standard for the timing of the personalization sections.
- a picture can be three seconds and a text box can be three seconds long, which can cause a video to need to fit into a multiple of three seconds.
- the app can put the “extra” seconds at the beginning of the video with a template filler, which can solve a technological issue with fitting the personalization section.
- the loop can be either a continuous video or singular audio loops.
- if single audio loops are used, they can be programmatically assembled with personalized media content on the fly.
- the audio can be, for example, a looping phrase of the song and the visual can be a black screen.
- the loops can be pre-combined into a single file on the beginning and end of the excerpt, with a black screen over which personalization can be added.
- the program can be designed to start at the beginning of a loop, with a maximum number of permitted loops for a mediagram.
- Technological aspects of such a configuration include, at least, a portion of the audio from the licensed music video being chosen for use as a looping audio clip.
- This clip can be pre-chosen and stored as a supplementary file to the licensed music video.
- the audio clip can be played over the user-supplied content in a loop and then stopped during the playback of the music video, for example.
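- The arithmetic for choosing how many repetitions of the pre-chosen looping clip to play might look like the sketch below (the function, its parameters, and the maximum-loop rule are illustrative assumptions).

```python
import math
from typing import Optional

def loops_needed(personalization_seconds: float,
                 loop_seconds: float,
                 max_loops: int) -> Optional[int]:
    """Return the number of whole loop repetitions needed to cover the
    user-supplied personalized content, or None if the maximum is exceeded."""
    count = math.ceil(personalization_seconds / loop_seconds)
    if count > max_loops:
        return None          # personalization too long for this template
    return max(count, 1)     # always start (and play) at least one full loop

# Example: 13.5 seconds of photos and text over a 4-second instrumental loop.
print(loops_needed(13.5, 4.0, max_loops=6))  # -> 4
```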
- FIG. 5G shows an example mediagram in which the components of the media content being personalized in FIG. 5F (separate loops and video excerpt with audio) are combined into a single, seamless file 540 .
- Creating a seamless file (instead of using the separate component files (loops, video excerpt, video audio) that will be used for personalization), for example, can resolve technological issues associated with presenting individual files, such as audio and video files getting out of sync, and can create a seamless transition that improves the user experience.
- FIG. 5H shows an example mediagram that uses a solid stream of audio 542 that then transitions into a video excerpt 544 .
- the music and video tracks are licensed for the entire mediagram, even though the video is only presented during the excerpt 544 portion.
- the personalization in this example is placed on the video track before or after the excerpt 544 .
- the mediagram system can determine the start and end times for the beginning and end of the video, not the excerpt in the middle, which can provide users with the ability to extend and contract the audio of the clip depending on how much personalization is added before or after the excerpt, without having to use fillers or loops.
- a mediagram can have varying beginning and end times within the actual song.
- Personal images and text can have a preset duration while personal videos can vary in length.
- the entire music video (and accompanying audio layer) can be available within the app.
- the cue point for beginning to play the audio layer of the music video would be determined programmatically on-the-fly by counting the time-values (e.g. in seconds) associated with each piece of user supplied personalization.
- the licensed audio layer (no video) portion of the video file can be played starting from the calculated cue point.
- the video portion of the licensed music video can be masked during that time.
- the audio layer of the licensed music video can then continue to play, and the video layer can become visible for the licensed video playback portion of the presentation.
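- A sketch of that on-the-fly cue-point calculation is shown below: the time values of the personalization pieces preceding the excerpt are summed and used to decide where in the song's audio layer playback should begin (the names and numbers are illustrative).

```python
def audio_cue_point(excerpt_video_start: float,
                    personalization_seconds: list) -> float:
    """Return the offset within the full song (in seconds) at which to start
    playing the audio layer, so that the audio reaches the licensed video
    excerpt exactly when the pre-excerpt personalization finishes."""
    lead_time = sum(personalization_seconds)   # e.g., 3s text + 6s video + 3s photo
    return max(0.0, excerpt_video_start - lead_time)

# Example: the licensed video portion begins 62 seconds into the song, and
# the sender added 12 seconds of personalization before the excerpt.
print(audio_cue_point(62.0, [3.0, 6.0, 3.0]))  # -> 50.0
```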
- FIG. 5I shows an example mediagram in which a user has uploaded a voice recording for a personalization 546 , and an audio clip is created and licensed by the system for a personalization 548 .
- FIG. 5J shows an example mediagram in which a solid stream of audio 550 overlays a variety of personalization options, and in which a video excerpt 552 is optionally included.
- In FIG. 5K , the audio and video portions of a music video excerpt 556 are licensed along with the audio tracks (not video) for the personalization sections 554 and 558 before and after the excerpt 556 .
- This scenario can involve the two audio only files and a video file being licensed and combined into the mediagram, and can be used, for example, when sync rights will not permit personalization on top of the video portion.
- personalization before and after the excerpt 556 can be variable in length.
- FIG. 5L depicts an example scenario in which a template for the video excerpt guides the user in how to best personalize the mediagram.
- the template guides the user to designate the most impactful, sentimental picture, which the mediagram system can automatically place immediately after the video excerpt.
- the mediagram system can permit the user to place personal media content at various locations and can identify the location where the most impactful picture should be placed, which can correspond with the chill moment.
- FIG. 5M depicts an example mediagram 562 that has an introduction 560 appended to the start of the mediagram 562 and, in some instances, an end 564 appended to the end of the mediagram 562 .
- the introduction 560 can include any of a variety of different combinations of visual content and audio content, such as the examples 566 a - c (other combinations are also possible).
- the introduction 560 can include a combination of preselected introductory visual content and preselected introductory audio content 566 a, a combination of personalized visual content and preselected introductory audio content 566 b, a combination of preselected introductory visual content and personalized audio content 566 c, and/or other combinations (e.g., combination of preselected and personalized visual content, combination of preselected and personalized audio content).
- Preselected introductory audio content can be, for example, an audio mark (e.g., music or other audio files that identify a good or service).
- Preselected introductory video content can be, for example, a visual mark (e.g., logo, animation, name, or other visual content that identifies a good or service).
- Personalized visual content can be, for example, videos and/or images selected by a user (e.g. user-generated photos and/or videos).
- Personalized audio content can be, for example, audio recordings selected by a user (e.g., user-recorded audio message).
- the personalized visual and/or audio content may extend into and be part of the mediagram 562 .
- the mediagram 562 can include the end 564 , which can be similar to the intro 560 in that it can include preselected and/or personalized audio and/or visual content.
- the end 564 can include a combination of preselected visual and audio content 568 a, a combination of personalized visual content and preselected audio content 568 b, a combination of preselected visual content and personalized audio content 568 c, and/or other combinations.
- FIG. 6 is a conceptual diagram of an example system 600 for generating personalized media content, such as mediagrams.
- the example system 600 can facilitate a user search of a video (e.g., a music video), and can present a custom list of videos to the user based on various specified search parameters.
- the system 600 can integrate various user-provided media items (e.g., audio, video, text, and/or images) with an excerpt of the selected video, based on a personalization template associated with the selected video.
- the system 600 can present user interfaces, such as the user interface 602 , to assist the user in selecting a video for a particular recipient.
- a variety of different features can be used to guide selection for the user's self-expression and for the best fit video for the recipient.
- the interface 602 can provide a set of questions to guide the user in selecting the best video to express themselves (e.g., to provide an engagement announcement), rather than to a specified person in particular.
- Libraries can be provided for the user on each of the personalized content types: text, photos, videos/animations.
- the style and tone of the library content presented to a user can be pre-filtered based on prior personalization choices presented within the app (interests, age, sex, etc.).
- the specific library content presented can also be based on the category of the chosen music video and/or specific tagged keywords applied to it by administrators of the app.
- the interface 602 can provide a set of questions to guide the user in picking the best video to convey a specified message to a specific user (e.g., to provide well wishes to a friend who lost a loved one).
- the system can use data associated with the user profile and/or supplied at the time of song selection, such as occasion, age, and nature of the relationship, to guide song selection and suggest personalized content.
- the application can programmatically filter the song list based on tags applied to the song list at the database level. These tags can be similar to the allowed profile choices (age, relationship type, etc.).
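- A small sketch of such tag-based filtering is shown below; the tag vocabulary, profile keys, and song records are hypothetical examples rather than the application's actual database schema.

```python
def filter_songs(songs: list, profile: dict) -> list:
    """Keep songs whose tags cover the profile choices supplied with the
    request (tag names and profile keys are illustrative assumptions)."""
    wanted = {f"{key}:{value}" for key, value in profile.items()}
    return [song["title"] for song in songs if wanted <= set(song["tags"])]

# Hypothetical tagged song list, mirroring the allowed profile choices.
songs = [
    {"title": "Song A", "tags": ["occasion:birthday", "relationship:friend", "genre:pop"]},
    {"title": "Song B", "tags": ["occasion:sympathy", "relationship:friend", "genre:folk"]},
]
profile = {"occasion": "sympathy", "relationship": "friend"}
print(filter_songs(songs, profile))  # -> ['Song B']
```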
- the interface 602 can prompt a user to answer questions about the user/or recipient to pick which song to use. This can create a personal list of songs to choose from. Such questions can be based on, for example, demographics of recipient (age, gender, occasion, favorite genre of music, relationship, message you want to get across). The answers and contact information (i.e. email address, or other unique identifier) can be stored to create a profile for the user if they sign up with the app. If the recipient responds to the mediagram and signs up, the profile that was saved can populate the recipient's profile.
- cached profiles for recipients generated by other users can be leveraged in song selection.
- recipient/user profiles can be built based on information other users have provided. When finding what song to pick, stored answers and links to accounts/email addresses can be used to identify songs.
- the example system 600 can include a personalized video creation system 620 .
- the personalized video creation system 620 can be implemented using one or more computer servers.
- the computing server(s) can include various forms of servers, including, but not limited to a network server, a web server, an application server, or a server farm.
- the computing server(s) may be configured to execute application code associated with a variety of software components (e.g., modules, objects, libraries, services, etc.) and/or hardware components.
- Two or more software components may be implemented on the same computing device, or on different devices, such as devices included in a computer network, a peer-to-peer network, or on a special purpose computer or special purpose processor. Operations performed by each of the components may be performed by a single computing device, or may be distributed to multiple devices.
- the personalized video creation system 620 can provide various user interfaces (e.g., web interfaces, client/server interfaces, etc.) for presenting information to users through various types of user devices (e.g., laptop or desktop computers, tablet computers, smartphones, personal digital assistants, or other stationary or portable devices), and for receiving input from the user devices in regard to generating personalized videos.
- the user devices can communicate with the personalized video creation system 620 , for example, over one or more networks, which may include a local area network (LAN), a WiFi network, a mobile telecommunications network, an intranet, the Internet, or any other suitable network or any appropriate combination thereof.
- FIG. 7A is a flowchart of an example technique 700 for generating personalized videos.
- the example technique 700 can be performed by any of a variety of video generating systems, such as the personalized video creation system 620 (shown in FIG. 6 ).
- a video search interface 602 can be presented at a user device, the interface including a set of controls (e.g., text input controls, option selection controls, etc.) through which a user can specify values for one or more parameters to facilitate a search and selection of a video.
- the video search interface 602 includes a control for specifying an age, a control for specifying a gender, a control for specifying a relationship, a control for specifying a message to be sent, a control for specifying a preferred genre of music, a control for specifying a favorite artist, a control for specifying a favorite song, and a control for specifying an emotion to be expressed.
- a user of the personalized video creation system 620 may want to send personalized video that incorporates a friendly/romantic/upbeat message, for example.
- the user can select one or more appropriate values using one or more corresponding controls in the video search interface 602 , for example, and can submit the selected values to the personalized video creation system 620 as user input 604 .
- Video options can optionally be displayed ( 704 ).
- the personalized video creation system 620 can identify one or more videos that match the user selected values of the various search parameters.
- a corpus of videos (not shown), for example, may be indexed per the search parameters to facilitate subsequent searches.
- a custom list of video options 606 can be presented at the user device.
- the custom list of video options 606 includes Katy Perry's “Firework,” Outkast's “Hey Ya,” and the Romantics' “What I Like About You.”
- a user selection of a video can be received ( 706 ). For example, a user can select one of the videos presented in the custom list of video options 606 (shown in FIG. 6 ) presented at the user device. As another example, a user can submit a video title, a video title and an artist name, or another sort of video identifier. In the present example, the user selects the Romantics' “What I Like About You,” as indicated by user selection 608 .
- each of the videos included in an indexed corpus of videos may be associated with a corresponding preselected excerpt of the video.
- a video excerpt for example, can be a portion of the video, and can be of a duration that is less than a duration of the video itself.
- the video excerpt can include one or more impactful moments, such as moments for which the video and/or associated music are generally recognized, such as a chorus of a song, a popular scene of a video, or another sort of impactful moment.
- Video excerpts can be manually and/or automatically generated.
- an excerpt of the selected video can be generated ( 710 ), as will be discussed in further detail with regard to FIG. 7B .
- the preselected excerpt of the video, such as a video excerpt 610 (shown in FIG. 6 ), can be retrieved ( 712 ).
- the video excerpt 610 can be of a duration that is less than that of the video, such as fifteen seconds, thirty seconds, a minute, or another suitable length of time.
- a duration of a video excerpt may be based at least in part on video and/or musical elements of the video.
- beginning and/or end points of a video excerpt may occur during scene transitions of a corresponding video, musical transitions (e.g., transitions between a chorus and a verse, transitions to and from solo portions) of the corresponding video, or other appropriate transition points.
- video excerpts can include a continuous audio track from an original video, and can include a segmented video track which includes one or more portions of the original video, and one or more personalization locations for user provided media. The portion(s) of the original video and the personalization location(s) can occur at any position within a video excerpt, such as at the beginning, middle, or end of the excerpt.
- the video excerpt 610 includes a first personalization location 612 at the beginning of the excerpt, a portion 614 of the original video in the middle of the excerpt, and a second personalization location 616 at the end of the excerpt.
- the video excerpt 610 of the present example also includes a continuous audio track 618 from the original video, such that the audio track is synchronized with the portion 614 of the original video.
- a prompt can be provided for the user to provide personalized media ( 714 ).
- each video excerpt can be associated with one or more corresponding personalization templates, which can be retrieved by the personalized video creation system 620 from a data store of personalization templates 624 (shown in FIG. 6 ).
- personalization templates can be used by the personalized video creation system 620 to place user provided media in appropriate personalization locations of a video excerpt.
- a personalization template for the video excerpt can include locations for user provided text, user provided video (e.g., including audio), and a user provided image.
- the user can be prompted to provide each media item in accordance with the personalization template, such as through a prompt to “type a hello message for the recipient,” a prompt to “upload a short video telling the recipient what you like about them,” a prompt to “upload a funny picture,” and a prompt to “type a goodbye message.”
- the user can provide one or more media items of the user's choice, and the personalized video creation system 620 can match the provided media items by media type to a suitable personalization template for the video excerpt.
- the personalized video creation system 620 can select a suitable personalized template for the video excerpt that is configured to accept media items of the received type (e.g., images). Templates can suggest stored media content that is appropriate to include with a particular video, such as famous quotes, canned “helper” text/templates for different types of mediagrams, and/or libraries of artwork, graphics, and pre-generated text.
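- As an illustration of matching provided media items by type to a personalization template, the sketch below assigns each item to the first unfilled slot that accepts its type (the slot and field names are assumptions for illustration).

```python
from typing import Dict, List

def place_media(template_slots: List[Dict], media_items: List[Dict]) -> List[Dict]:
    """Assign each user-provided media item to the first unfilled template
    slot that accepts its type (slot/field names are illustrative)."""
    placements = []
    remaining = list(media_items)
    for slot in template_slots:
        for item in remaining:
            if item["type"] == slot["accepts"]:
                placements.append({"slot": slot["position"], "media": item["name"]})
                remaining.remove(item)
                break
    return placements

# A hypothetical template for the excerpt: text, then video, then image, then text.
slots = [
    {"position": "intro_text", "accepts": "text"},
    {"position": "intro_video", "accepts": "video"},
    {"position": "outro_image", "accepts": "image"},
    {"position": "outro_text", "accepts": "text"},
]
media = [
    {"name": "hello.txt", "type": "text"},
    {"name": "clip.mov", "type": "video"},
    {"name": "funny.jpg", "type": "image"},
    {"name": "goodbye.txt", "type": "text"},
]
print(place_media(slots, media))
```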
- User-provided media can be automatically placed in one or more designated personalization locations in the excerpt of video ( 716 ).
- the personalized video creation system 620 can place the user provided media 622 in designated personalization locations in the video excerpt 610 (shown in FIG. 6 ).
- text provided by the user (e.g., in response to the prompt to "type a hello message for the recipient") is placed in a designated personalization location 622 a .
- a video provided by the user (e.g., in response to the prompt to "upload a short video telling the recipient what you like about them") is placed in another designated personalization location in the video excerpt 610 .
- Audio associated with the provided video is integrated with (e.g., overlaid on) the continuous audio track 618 from the original video, for example, at a designated personalization location 622 e.
- the image provided by the user (e.g., in response to the prompt to "upload a funny picture") is likewise placed in a designated personalization location in the video excerpt 610 .
- Additional text provided by the user (e.g., in response to the prompt to "type a goodbye message") can be placed in a designated personalization location at the end of the excerpt.
- a preview can be provided to the user ( 718 ).
- the personalized video creation system 620 (shown in FIG. 6 ) can generate a preview of the personalized video, for example.
- the preview can be provided to the user at the user's device, and the user can be given an option to modify and resubmit the media items.
- a personalized video (e.g., mediagram) can be finalized ( 720 ).
- the personalized video creation system 620 (shown in FIG. 6 ) can finalize the personalized video, including generating a file of a selected type (e.g., .AVI, .FLV, .GIF, .MOV, .WMV, .MP4, etc.).
- the file can be sent to one or more selected recipients and/or posted to one or more social media platforms.
- To allow for the smooth and accurate playback of the various elements involved (photos, looping audio, licensed video, text, etc.), a proprietary file format for the video 610 can be used for personalizing the video, in which a single video file is constructed in a special fashion.
- the video file can include three segments.
- a lead segment ( 612 ) can include a blank/black video track and looping audio repeated in the audio track. This lead section can be standardized to be at least of a certain length capable of playing during the time the personalized content would be displayed.
- the middle segment ( 614 ) of the proprietary file can include the licensed music video and the accompanying licensed track.
- the last segment ( 616 ) of the proprietary file format can again include blank/black video with looping audio played on the audio track.
- a purpose of this file configuration can be to programmatically determine the length of time the personalized content needs to be displayed and to cue the video file (programmatically) at the proper point within the blank video and looping audio segment, over which the photos and text are displayed.
- Once the personalized content display period has completed, the looping audio track can seamlessly transition (within the underlying video file format) to play the licensed music and audio segment of the file.
- after the licensed segment completes, a smooth transition can occur to the looping audio once again (located in the final segment of the proprietary file) as additional personalized content is displayed over the blank video portion of the file.
- the looping audio could alternately be replaced by a seamless audio track which transitions into and out of the license video segment.
- the format is flexible to allow for both kinds of audio content to play during the personalized content—looping or seamless.
- a benefit of this approach is to reduce the complexity of synchronizing content programmatically, to push the organization work into the content editing and preparation process, and to circumvent potential issues that sequencing the audio together as individual files could introduce, such as small gaps or glitches in the audio playback when moving from one file to the next.
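- The cue-point computation described above can be illustrated with a minimal sketch, assuming the three-segment layout (blank/looping lead, licensed middle, blank/looping tail) with a known lead-segment length; the function name and example values are illustrative assumptions.

```python
# Minimal sketch, assuming the file layout described above:
# [lead: blank video + looping audio][licensed video][tail: blank video + looping audio].
# Segment lengths and the helper name are illustrative assumptions.

def cue_point_seconds(lead_len, personalized_display_len):
    """
    Return the playback position (seconds from the start of the file) at which
    to cue the video so that the blank/looping lead plays for exactly the time
    the personalized content is on screen, then reaches the licensed segment
    without switching files.
    """
    if personalized_display_len > lead_len:
        raise ValueError("lead segment too short for the personalized content")
    return lead_len - personalized_display_len

# Example: 60 s lead segment, 22 s of personalized photos/text to display.
print(cue_point_seconds(lead_len=60.0, personalized_display_len=22.0))  # 38.0
```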
- FIG. 7B is a flowchart of an example technique 750 for generating an excerpt of a selected video.
- the example technique 750 can be performed by any of a variety of video generating systems, such as the personalized video creation system 620 (shown in FIG. 6 ).
- a video can be retrieved ( 752 ).
- the video creation system 620 can retrieve a video from a corpus of videos. Retrieving the video, for example, can be performed in response to a user selection of a video, when a corresponding preselected excerpt of the video is unavailable. As another example, one or more videos can be selected for automatic generation of corresponding excerpts, and the selected video(s) can be retrieved for further processing.
- the video can be automatically analyzed to identify one or more emotionally impactful moments ( 754 ).
- the video creation system 620 can automatically analyze the retrieved video to identify emotionally impactful moment(s) in the video, such as portions of the video and/or associated music which are generally recognized as causing an emotional impact.
- automatically analyzing the retrieved video can include performing an automatic analysis of the video content and/or associated audio content.
- video analysis of the video content can be performed to identify portions of the video which include a close up of a performer in the video of a particular duration (e.g., several seconds), which may be associated with an emotionally impactful moment.
- audio analysis of audio content associated with the video can be performed to identify portions of the video which include various musical transitions (e.g., significant volume level changes, key changes, transitions between solo instrumentation and singing, etc.), which may be associated with an emotionally impactful moment.
- text analysis of time indexed lyrics associated with the video can be performed to identify portions of the video which include lyrics that correspond with a song title associated with the video, a particular topic (e.g., love, happiness, etc.), or another sort of lyric that may indicate an emotionally impactful moment.
- automatically analyzing the retrieved video can include performing an automatic analysis of user interaction data associated with the retrieved video.
- user interaction data may include video play data for the retrieved video from various users. Identifying emotionally impactful moments, for example, can include identifying portions of the video which are frequently replayed by users.
- a video presentation platform may provide users with an option for generating video clips, and identifying emotionally impactful moments can include identifying portions of the video which have frequently been included in user generated video clips.
- a video presentation platform may provide users with an option for indicating a point in time in the video to commence playback, and identifying emotionally impactful moments can include identifying the point in time that has frequently been selected.
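- A minimal sketch of mining user interaction data for frequently replayed portions is shown below; the play-range input format, the bin size, and the function name are illustrative assumptions.

```python
# Minimal sketch, assuming user interaction data is available as a list of
# (start_second, end_second) play ranges gathered from many playback sessions.
from collections import Counter

def most_replayed_window(play_ranges, bin_seconds=5):
    """Count how often each bin of the video is covered by a play range and
    return the start time of the most frequently played bin."""
    counts = Counter()
    for start, end in play_ranges:
        first_bin = int(start // bin_seconds)
        last_bin = int(end // bin_seconds)
        for b in range(first_bin, last_bin + 1):
            counts[b] += 1
    hottest_bin, _ = counts.most_common(1)[0]
    return hottest_bin * bin_seconds

plays = [(60, 95), (62, 90), (0, 30), (61, 93), (200, 230)]
print(most_replayed_window(plays))  # 60 -- candidate emotionally impactful moment
```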
- emotionally impactful moments can be identified by using one or more of the following features:
- Chill moments within media content can be based on a variety of factors, such as mode, mood, tone, pitch, tempo, and/or volume. Many of these factors couple together and are used in tandem, and a combination of these factors (2 or more factors) can provide chill moments. To fully capture a chill moment in a way that is emotionally impactful, excerpts can be relatively short in length, such as from 10 seconds up to over a minute for the music video portion of the song.
- Chill moments, also known as goose bumps and shivers down the spine, are accompanied by autonomic responses that can include increased heart rate, increased respiration, forearm muscle activity, increased skin conductance, and forearm pilo-erection (hair-raising).
- Chills induced by music are evoked by temporal phenomena, such as expectations, delay, tension, resolution, prediction, surprise and anticipation. Chills are evidence of the human brain's ability to extract specific kinds of emotional meaning from music.
- Neural mechanisms of chills involve increased blood flow in the brain regions responsible for movement planning and reward, specifically the nucleus accumbens, left ventral striatum, dorsomedial midbrain, insula, thalamus, anterior cingulate, supplementary motor area and bilateral cerebellum. Decreased blood flow in the brain during chill moments has been observed in areas known to process emotions and visual stimuli, namely the amygdala, left hippocampus and posterior cortex.
- Chill moments are most often generated by stark musical contrasts, e.g. dramatic changes in mode (minor to major), loudness (soft to loud), tempo (slow to fast), mood (sad to happy), tone (dull to bright) and pitch (low to high). Lyrical passages can trigger chill moments; however, the effect is secondary to the musical effect.
- Chill moment identification and use in mediagrams to provide emotionally impactful messages to recipients can be specifically accomplished using the techniques and systems described throughout this document.
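- One of the musical contrasts noted above (soft to loud) can be flagged with a simple sketch over a loudness envelope; the envelope values, the jump threshold, and the names are illustrative assumptions, and a fuller analysis would also consider mode, tempo, tone, and pitch changes.

```python
# Minimal sketch of flagging chill-moment candidates from stark loudness
# contrasts, using only a per-second loudness envelope (values in dB).
import numpy as np

def chill_candidates(loudness_db, jump_db=12.0):
    """Return the times (seconds) where loudness jumps sharply, e.g. a soft
    passage giving way to a loud chorus -- one of the contrasts associated
    with chill moments."""
    loudness_db = np.asarray(loudness_db, dtype=float)
    deltas = np.diff(loudness_db)
    return list(np.flatnonzero(deltas >= jump_db) + 1)

# Quiet verse (~-30 dB) that erupts into a loud section (~-12 dB) at t = 5 s.
envelope = [-30, -29, -31, -30, -28, -12, -11, -12, -13, -12]
print(chill_candidates(envelope))  # [5]
```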
- a starting point for a video excerpt can be identified, based at least in part on a target emotionally impactful moment ( 756 ).
- the target emotionally impactful moment can be identified based at least in part using automatic analysis.
- the starting point for the video excerpt can be designated as occurring at the beginning of the target emotionally impactful moment, or can be designated as occurring at a point in time before the beginning of the moment.
- the point in time before the beginning of the moment can be a predetermined amount of time (e.g., 15 seconds, 30 seconds, etc.).
- the point of time before the beginning of the moment can be automatically selected based at least in part on video and/or musical elements of the video. For example, the beginning point of a video excerpt may occur during a scene transition, a musical transition, or at another suitable transition point.
- An ending point for the video excerpt can be identified, based at least in part on the target emotionally impactful moment ( 758 ).
- the ending point for the video excerpt can be designated as occurring at the end of the target emotionally impactful moment, or can be designated as occurring at a point in time after the ending of the moment.
- the point in time after the ending of the moment can be a predetermined amount of time (e.g., 15 seconds, 30 seconds, etc.).
- the point of time after the ending of the moment can be automatically selected based at least in part on video and/or musical elements of the video. For example, the ending point of a video excerpt may occur during a scene transition, a musical transition, or at another suitable transition point.
- designating the starting and ending point can include automatically identifying natural and seamless entrances and exits of the excerpt.
- the automatic identification can avoid jarring, pitch-altering, dead-air, off-beat, or otherwise unnatural entrances and exits to the excerpt.
- the automatic identification can establish complete messages and sentiments, thoughts, ideas, phrases, etc. for the excerpt (not truncating messages).
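- A minimal sketch of snapping the excerpt's starting and ending points to natural transition points is shown below, assuming a list of detected scene/musical transition times is already available; the names and the 30-second search windows are illustrative assumptions.

```python
# Minimal sketch, assuming detected transition times (scene cuts, musical
# transitions) in seconds are already available.

def snap_excerpt_bounds(moment_start, moment_end, transitions,
                        max_lead=30.0, max_tail=30.0):
    """Pick a natural entrance at the latest transition within max_lead seconds
    before the moment, and a natural exit at the earliest transition within
    max_tail seconds after it; fall back to the moment bounds themselves."""
    entrances = [t for t in transitions if moment_start - max_lead <= t <= moment_start]
    exits = [t for t in transitions if moment_end <= t <= moment_end + max_tail]
    start = max(entrances) if entrances else moment_start
    end = min(exits) if exits else moment_end
    return start, end

transitions = [12.0, 41.5, 58.0, 96.0, 131.0]
print(snap_excerpt_bounds(65.0, 92.0, transitions))  # (58.0, 96.0)
```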
- One or more portions of the video excerpt can be designated for personalization ( 760 ).
- the target emotionally impactful moment can be associated with a duration (e.g., based on automatic analysis of the retrieved video and/or user interaction data associated with the video), and portions of the video excerpt that occur outside of the duration of the moment can be designated for personalization.
- the video excerpt can be finalized for personalization ( 762 ). For example, portions of the video excerpt that are designated for personalization can be removed, transition effects can be applied such that video and/or audio appropriately fades in and out, and other suitable finalization techniques can be applied to the video excerpt. After finalizing the video excerpt, for example, it can be added to a corpus of preselected video excerpts.
- the techniques 700 and 750 can allow for users to contribute to the selection and incorporation of videos and other media content into the catalogue of media content that is offered for personalization.
- record searches (song, artist, genre, occasion, mood, version of the song) can be used to identify media content that is of interest to users. Aggregation of the record searches can be used as implicit requests for media content to be added to the library.
- users can provide explicit requests for particular songs by filling out fields (song, artist, etc.) that identify, for example, the lyrics of the portion of the song they are interested in, and/or notation of which verse they are requesting. Users submitting requests can be notified when songs are added.
- streaming media services (e.g., SPOTIFY, PANDORA) could be used as a source of media content.
- Such services may present various disadvantages, such as copyright issues, song version variation, spoofed titles, and improper truncation that would have to happen on the fly and decrease mediagram quality.
- a user's device song library could be used as a source of new media content for incorporation into mediagrams and, possibly, into the system library.
- FIG. 8A is a conceptual diagram of an example social media platform 800 for providing improved and more meaningful social interactions among users.
- the platform 800 can include a variety of features that provide a variety of benefits over conventional social platforms.
- features of the platform 800 aim to build better relationships among users by promoting sincere social interactions (no shallow interactions), to put emotion and meaning back into social media, to provide both sender and receiver with insights (whether realized or not), to offer private person-person communication (as opposed to communication in front of a broader audience), to provide a fun/playful tone that makes relationships easier to maintain and more rewarding, to assist users in conveying and sending sentiments more accurate to actual feelings (versus free form text), and to provide social media that can be useful and uplifting to all users, including introverts.
- the platform 800 improves upon social platforms in a number of ways.
- the platform 800 uses gamification to create scarcity within the platform, such as scarcity with the number of mediagrams that can be generated and distributed on the platform 800 , and scarcity with regard to the frequency of interactions among users.
- a relationship concierge can also employ scarcity in providing prompts at a deliberate pace (e.g., one prompt per day), which can cause users to wait for the chance to use “high value” prompts (i.e., scarcity can cause users to use these high value prompts less frequently).
- Scarcity on the platform 800 can also mimic real-world interactions (e.g., receiving cards/gifts/sentiments) among users that are less frequent than on social platforms and, in general, more meaningful. Scarcity on the platform 800 can also reduce burnout among users and can promote a regular schedule of usage.
- the platform 800 can also draw on game theory to improve social interactions and relationships. For example, subtle visual and audio cues (e.g., message being concealed and unwrapped like a gift) can be used when viewing/responding to delivered prompts to enhance the emotional state of the user when viewing/receiving the delivered items.
- Rewards can be used to increase behavior, such as rewards for improvements in a relationship (e.g., more frequent interactions, more meaningful interactions) and/or establishing new relationships.
- rewards can include, for example, points, ratings, icons, symbols, branding methods, emojis and/or other features to represent relationship states.
- the platform 800 can also incorporate a relationship concierge that can help users improve variety, depth and frequency of communication within relationships.
- a relationship concierge can use artificial intelligence (AI) algorithms and systems to predict interests, supply content and guide the user towards more meaningful relationships.
- the relationship concierge can understand who the people involved in a relationship are and can create a smart wall that prompts the users to interact with each other on the wall in particular ways to improve and maintain their relationship.
- the relationship concierge can be fed information about users (e.g., interests, demographic information) and their relationship (e.g., common interests, type of relationship), and can process that information with its AI techniques to determine, for example, prompts to insert directly into the users' shared private wall to facilitate continued and improved communication.
- To avoid annoying some users and to allow for varied interest in the relationship concierge, its involvement can be adapted to match user preference (e.g., increased or restricted involvement in the relationship).
- interest in the relationship concierge can be explicit (e.g., user-designated concierge settings) and/or implicit (e.g., user liking or disliking certain prompts from the concierge).
- the platform 800 can decrease the anxiety associated with social network interactions occurring in front of a broader audience, which can cause users' interactions to be more guarded and less authentic, through the use of private walls that are one-on-one between users. With private walls, only the participants in the wall are able to view/contribute to the conversation. Prompts from the relationship concierge can be presented on private walls.
- prompts can include questions (e.g., individual questions, instructions, ideas for topics, joint questions that are asked to both users with the answers only being presented if both users answer), drawing pictures (e.g., draw pictures and send to each other), games (e.g., creating a story line by line, hang man, 20 questions), challenges (e.g., can take a snap of yourself doing something fun/unique, user-designated challenges), articles, pictures (e.g., creating memes and comment on pictures, Rorschach test, photo hunt), other creative options (make picture, memes, art, jokes), and/or other options.
- Private walls can use extrinsic stimulation (e.g., using colors, movement and sound to keep users attention) and intrinsic stimulation (e.g., creating an environment that fosters an intimate connection) to engage users.
- Such private walls can, for example, create environments that foster communication among both extroverted and introverted individuals who are looking for social media that is more protective and thoughtful than traditional social media, that includes more intimate communication using media, and that offers protection, reassurance, and control over messages (e.g., knowledge of who sees the messages, who can see the messages, time-limited duration).
- the platform 800 can use temporal aspects to reduce anxiety and uncertainty.
- messages can have a lifespan and will be inaccessible to users/deleted from the server once they expire.
- users may only be able to view messages for a limited number of viewings, for a limited amount of time, or only after a specified period, etc.
- Time limits and view limits can be controlled and designated by the user.
- the platform 800 can permit users to create messages that are sent at a predetermined time (e.g., sent next Thursday at 10 am) and/or after an event has occurred (e.g., the user returns from vacation).
- the platform 800 can have security measures in place to provide assurances and protection for user privacy, such as private walls being restricted to the participants and/or controls restricting, and notifications regarding, screenshots taken of content.
- the platform 800 and mobile apps running on client devices can prohibit forwarding messages outside of the app, altering a shared wall from being accessible only to the participants, taking screenshots (the recipient is notified if a screenshot is attempted), using the device's ability to copy and paste text and/or images, downloading pictures and messages, and/or forwarding content to other users.
- the platform 800 can provide group walls that, similarly, are restricted to only the participants within the group.
- Group walls can be shared by more than two members and can create a venue to share thoughts, ideas, and commentary on topics, as well as a place to share pictures, videos, and other media content with specific people.
- Each user who is part of a group can view all comments/postings in the group.
- Each group can have an organizer who controls the group through group membership, topics, lifespan, moderation, and/or other group parameters. Members of a group can contribute to conversations, but are not permitted to control group parameters. The organizer can be identified to the group.
- group walls can also have relationship concierges that help supply and insert content into the group, such as topics of common interest (either explicitly identified by the group or implicitly determined from user preferences).
- the relationship concierge can prompt a group wall with different media types, such as pictures, questions, games, current news articles, memes, “good news” stories (e.g., stories that are relevant and positive, aimed at creating thought-provoking and inspirational dialogue), pop culture questions, themes, and/or other features.
- group walls can include temporally limited content as well as having a time-limited existence.
- the organizer and/or system can set a lifespan for the group, which can be noted to the group, after which the group will automatically dissolve and all of the content from the group wall will be deleted.
- Group walls can foster an environment for “self-regulated” discussion/sharing groups, which can permit the organizer and/or group members to remove users from the group, either through organizer admin approval, a vote of the users, and/or other features.
- Content within the group wall can be automatically analyzed, flagged, and deleted if deemed inappropriate (e.g., trolling, hate speech).
- the system 800 includes a social and media platform 802 that provides a social platform, as described in the preceding paragraphs, as well as a media personalization platform, as described above with regard to FIGS. 1-7 .
- the platform 802 operates using a variety of different data sources, including video excerpts 804 , personalized videos 806 , personalization templates 808 , user profiles 810 , relationship profiles 812 , and social data 814 .
- the user profiles 810 can include user information (e.g., demographics, interests, location) and can model user behavior.
- the relationship profiles 812 can include relationship information (e.g., users involved in relationship, type of relationship, duration of relationship) and can model the relationship (e.g., state of the relationship).
- the social data 814 can include the data on social interactions between users (e.g., messages, posts, prompts, responses to prompts, content views) and other data for the social platform 802 .
- user A associated with computing device 816
- user B associated with computing device 846
- a relationship concierge running on the platform 802 can periodically determine whether and when a prompt should be provided to one or more of the users A and B to help facilitate their relationship.
- the relationship between the users A and B can be analyzed (step A, 826 ).
- Such analysis can include evaluation of a variety of factors and data, including the profiles for the users A and B, the profile for the relationship between users A and B, analysis of historical interactions between the users A and B (e.g., determining a rating for the relationship), and/or other factors.
- the platform 802 determines that a prompt should be provided to user A (step B, 828 ).
- the prompt is provided to the device 816 for user A (step C, 830 ) and is presented ( 822 ) on the private wall 818 in sequential order with other interactions 820 .
- the private wall 818 includes an interface 824 for the user to respond; user input is received and provided to the platform (step D, 831 - 832 ).
- the platform 802 can receive and store the response (step E, 834 ) and can determine a minimum time delay for user B to respond (step F, 836 ).
- the time delay can vary depending on a variety of factors, such as the state of the relationship, a current trend of the relationship (e.g., becoming closer, becoming more distant), and/or other factors.
- the response 840 and the time delay 842 can be transmitted (step G, 838 ).
- the device 846 for user B can receive and present the response in the private wall, which includes the earlier message 820 , the relationship concierge prompt 848 , and the response 850 . Based on the delay instructions 842 , the device 846 can automatically restrict input being provided (via the input interface 852 ) to reply to the response 850 until after the delay has expired.
- the platform 800 and the devices 816 and 846 can repeatedly perform these operations A-I in the back-and-forth communication between users A and B, which is configured by the platform 800 so as to enhance the quality of the social interactions.
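- The delayed-response mechanic described above (determining a minimum delay from relationship factors and having the client restrict the reply interface until it expires) can be illustrated with a minimal sketch; the baseline delay, rating scale, factor values, and function names are illustrative assumptions rather than the platform's actual logic.

```python
# Minimal sketch of the delayed-response mechanic. The factor values and
# names are illustrative assumptions.
import time

def minimum_reply_delay_seconds(relationship_rating, trend):
    """Longer delays for strong, stable relationships (to build anticipation),
    shorter delays when a relationship is trending distant and needs attention."""
    base = 6 * 60 * 60                                  # 6-hour baseline (assumed)
    rating_factor = 0.5 + relationship_rating / 10.0    # rating on a 0-5 scale (assumed)
    trend_factor = 0.5 if trend == "becoming more distant" else 1.0
    return int(base * rating_factor * trend_factor)

def reply_allowed(response_received_at, delay_seconds, now=None):
    """Client-side check mirroring the delay instructions sent with a response."""
    now = time.time() if now is None else now
    return now >= response_received_at + delay_seconds

delay = minimum_reply_delay_seconds(relationship_rating=4, trend="becoming closer")
print(delay, reply_allowed(response_received_at=0, delay_seconds=delay, now=delay + 1))
```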
- FIG. 8B is a conceptual diagram of another example social media platform 860 for providing improved and more meaningful social interactions among users.
- the platform 860 can include a variety of features that provide a variety of benefits over conventional social platforms, and can be similar to the platform 800 .
- like the platform 800 , features of the platform 860 aim to build better relationships among users by promoting sincere social interactions and putting emotion and meaning back into social media, as described above with regard to FIG. 8A .
- the platform 860 includes a text messaging system 868 that permits free form, direct messaging between users 862 - 864 (one-to-one messaging and group messaging).
- the users 862 - 864 can generate and distribute the content between each other using the text messaging system 868 , such as through entering text (e.g., SMS message), providing multimedia content (e.g., mediagram, videos, photos, MMS message), and/or other content.
- the platform 860 can also include algorithms and an artificial intelligence (AI) system 870 that can use AI and algorithmic logic to dynamically infuse the text-messaging interface 868 between two users 862 - 864 with special selected and curated content 866 that can help facilitate more meaningful interactions and relationships.
- the algorithms and AI system 870 can model a relationship between the two users 862 - 864 and use that model to select particular content from the curated content 866 , and can present that selected content at particular points in time to facilitate the relationship between the users 862 - 864 .
- the selected content can be injected into the text messaging system 868 in any of a variety of ways, such as by presenting the content to one of the users 862 - 864 to prompt that user's interaction with the other user, presenting the content to both of the users 862 - 864 to facilitate interactions among the users, and/or other mechanisms.
- the algorithms and AI system 870 can continually improve upon and refine its relationship model for the users 862 - 864 based on interactions between the users on the text messaging system 868 and their response to content injected into the text messaging system 868 by the algorithms and AI system 870 , which can allow the platform to deliver content and experiences that enrich the relationship.
- the content that is distributed by the users 862 - 864 and/or selected from the curated content 866 can be any of a variety of content, including content excerpts (e.g., 2-6 second clips from mediagrams, 2-6 second clips from videos and/or music).
- content excerpts can be extracted from any of a variety of different content sources, such as music videos, television shows, live television shows (award shows), media (news), movies, sound bites from various media, mediagrams, and/or other content.
- Content excerpts can be sent as quick self-contained messages and/or in conjunction with other messages, for example, to enhance the overall impact of the messages. Similar to a mediagram, content excerpts can be sent without attaching the underlying content from which the excerpt is extracted and/or without associated user messages. In some instances, words and lyrics of the content excerpts can be included and/or transmitted with the content excerpts.
- FIG. 9A is an example system 900 for providing an improved social media platform with more meaningful social interactions among users.
- the example system 900 includes a social and media platform 902 (similar to the social and media platform 802 ), the databases 804 - 814 described above with regard to FIG. 8A , and user computing device 924 .
- the platform 902 can include one or more computer servers, such as cloud computing systems.
- the platform 902 includes a media personalization system 904 with a media analyzer 908 that analyzes media content to identify excerpts for personalization, a personalization assistant 910 that guides users through the personalization processes described above with regard to FIGS. 1-7 , and a media finalizer 912 that assembles personalized media content (e.g., mediagrams).
- the platform 902 also includes a social media system 906 , which includes a relationship analyzer 914 to determine the state and rating for relationships (see example technique 2000 in FIG. 20 , which can be performed by the relationship analyzer 914 ), a relationship concierge 916 to prompt and facilitate meaningful social interactions among users (see FIGS. 11A-F , 12 A-H, 15 A-D, and 18 A-B, and corresponding description below), a group relationship manager 918 that regulates group walls and group interactions (see FIGS. 16 and 21A -B, and corresponding description below), interactive games 920 that are used on private and group walls to allow users to play games alone or together on the platform (see FIG. 11B and corresponding description of option 1132 ), and an interactive wellness app manager 922 that provides features for users to self-rate their wellness state and to allow for wellness rating-related interactions among users (see FIGS. 17A-H and corresponding description).
- the relationship concierge 916 evaluates whether and how to prompt users using a variety of relationship data representation and analysis techniques. For example, various values attached to questions (or other prompting in the database 814 ) can be stored, updated, and evaluated in a hierarchy to determine timing and nature of prompting delivered to the user. Promptings can have various database values, such as one or more of the following:
- the user computing devices 924 can each include a mobile app, native app, and/or web application 926 running on the devices 924 that provide interfaces and features for the social platform (e.g. private walls, group walls, personalized media content) directly to users.
- the application 926 includes a media personalizer client 928 that presents media personalization features in a user interface and that communicates with the platform 902 to create media personalization.
- the application 926 also includes a media player (e.g., generic media player, special/secure media player) and a social media client 932 that implements the client side of the features for the platforms 802 and 902 described above (e.g., implements time delayed responses, private wall sharing prohibitions).
- the application 926 also includes an interactive games client 934 to provide interfaces for interactive games and an interactive wellness client 936 to provide an interface for a wellness rating and interaction service provided by the social media system 906 .
- FIG. 9B is diagram of an example system 940 for providing an improved social media platform with more meaningful social interactions among users.
- the example system 940 is similar to the system 900 and includes components that can be used to implement the social and media platform 902 (similar to the social and media platform 802 ) and the databases 804 - 814 described above with regard to FIG. 8A .
- the system 940 includes a concept model 942 , a data model 946 , and a content model 950 that are used to model processes, user relationships, content, and other details that are used to provide the improved social media platform.
- the concept model 942 can include data, rules, coding, logic, algorithms, computer systems, and/or other features to implement and provide enhanced social interactions among the users, which can be provided by, for example, the algorithms and AI system 870 described above with regard to FIG. 8B .
- the concept model 942 can be programmed to implement various psychological principles that are incorporated into the underlying rationale of feature and mechanism design and the AI and algorithmic frameworks that support them.
- the concept model 942 can provide a feature set that addresses the emotive opportunities and issues, or affective benefits and costs, associated with remote communications, with the end goal being to maximize benefit metrics and minimize or mitigate cost metrics.
- Benefit metrics can include an emotional expressiveness metric (e.g., metric assessing ease with which platform permit users to express emotional states to others and/or to perceive feelings expressed by others), engagement and playfulness metric (e.g., metric assessing whether platform facilitates communication that is fun and exciting to participants), presence-in-absence metric (e.g., metric assessing whether platform fosters feeling of closeness and/or connectedness to others even though separated by time or space), opportunity for social support metric (e.g., metric assessing platform's ability to facilitate social support without being physically present, such as providing a general sense of the other person “being there” for you, reducing negative affect (such as soothing anxiety), and increasing positive affect (such as feeling “special” or loved)), and/or other metrics.
- Cost metrics can include a feeling obligated metric (e.g., metric assessing to what extent a platform creates an unwanted obligation to connect, such as creating unwanted feelings of obligation or guilt to communicate), unmet expectations metric (e.g., metric assessing platform's propensity to create expectations for communication with others that will not be met and, as a result, have a negative impact on participants), threat to privacy metric (e.g., metric assessing platform's propensity to unexpectedly expose private information to others, concerns that others are eavesdropping on private communication, and concerns that actions may be invading privacy of others), and/or other metrics.
- the concept model 942 includes algorithms 944 a and AI 944 b (e.g., algorithms and AI 870 ), interaction mechanisms 944 c (e.g., routines and/or subsystems to permit and facilitate user interactions), relationship psychology rulebase 944 d (e.g., rules outlining different relationship models that can be used to categorize relationships among users), and process and data flows 944 e (e.g., processes and data flows to facilitate improved social interactions among users, including obtaining implicit relationship feedback from user interactions, refining relationship modeling, and identifying content and timing to deliver the content to the users). Examples of these content selections and prompts provided to users are described below with regard to FIGS. 11A-F and 12 A-H.
- One of the primary goals of the concept model 942 is to help users stay connected over the long term.
- the periodicity of prompts provides a cadence (commensurate with the user's wishes for a given relationship) of interactions that otherwise might languish due to ordinary circumstances of life and the typical dynamics of psychological tendencies.
- Mechanisms which support this goal can include:
- the data model 946 corresponds to the structure and storage of data for the system 940 , including the structure and storage of content, user profile data, relationship profile data, histories, and/or other data.
- the data model 946 includes database schemas 948 a (e.g., table definitions, cloud storage database schemas and distribution), data storage, maintenance, and management procedures 948 b (e.g., data storage policies, cloud based storage policies), and application programming interfaces (APIs) 948 c (e.g., APIs to handle server and user device requests).
- the content model 950 corresponds to the content that is delivered to users on the social platform.
- the content model includes content sourcing and creation 952 a (e.g., user-generated content, preselected content, content models that can be adapted to personalize prompts to users), content psychology rulebase 952 b (e.g., rules defining different types of content and their appropriateness to different users), and/or content management procedures 952 c (e.g., processes for curating content over time).
- the content model 950 and the data model 946 can be used to provide content classifications 958 , which can be used to identify relevant content to deliver to users at various points in time depending on any of a variety of factors, such as relationship profiles, user profiles, and/or other relevant details.
- the content classifications 958 can include content definitions 960 a (e.g., definitions for different types of content) and content taxonomies 960 b (e.g., hierarchical organization of relationships of different types of content).
- the classification of content into different types of content can include different configurations of media and data, and can rely on multiple different taxonomies across different data dimensions that function together to more accurately classify content for selection and delivery to users.
- Taxonomies can include, for instance, a modal taxonomy (e.g., classification of content delivery which considers the combination of structural mechanism of delivery and general purpose behind the delivery), a topical taxonomy (e.g., hierarchical classification of the content itself), topical metadata (e.g., a non-hierarchical metadata grouping method to allow retrieval and sorting by criteria such as descriptive criteria (e.g., fun, serious, cultural, academic, controversial) and quantitative criteria (e.g., locality, time sensitivity, age appropriateness, complexity level within topic)), and/or others.
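- A minimal sketch of tagging a content item along these multiple taxonomies (modal class, hierarchical topic path, and non-hierarchical metadata) is shown below; the field names, example values, and the matches helper are illustrative assumptions.

```python
# Minimal sketch of multi-taxonomy content tagging. Field names and values
# are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class ContentItem:
    content_id: str
    modal_class: str                 # e.g., "concierge", "shared_interest", "program"
    topic_path: list                 # hierarchical topical taxonomy, root first
    metadata: dict = field(default_factory=dict)  # descriptive/quantitative tags

item = ContentItem(
    content_id="c-1024",
    modal_class="shared_interest",
    topic_path=["sports", "baseball", "Boston Red Sox"],
    metadata={"descriptive": ["fun"], "time_sensitivity": "high", "age": "all"},
)

def matches(item, wanted_topic_prefix):
    """Select content whose topical path falls under a requested branch."""
    return item.topic_path[: len(wanted_topic_prefix)] == wanted_topic_prefix

print(matches(item, ["sports", "baseball"]))  # True
```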
- FIG. 9C depicts an example system 970 for providing an improved social media platform with more meaningful social interactions among users.
- the example system 970 is similar to the systems 900 and 940 , and includes components that can be used to implement the social and media platform 902 (similar to the social and media platform 802 ) and the databases 804 - 814 described above with regard to FIG. 8A .
- the system 970 includes a content store 972 (similar to the curated content 866 ) from which content is selected and served to users 990 .
- the content store 972 illustrates an example modal classification (example of content classifications 958 ), as described above.
- This diagram represents a simplified example of modal classification of content; other different and/or more complex classifications are also possible, such as different modes consisting of different data/media configurations, each of which can have different handling in terms of the data modeling and client-side presentation.
- the example modal classification includes example data elements 974 a - f.
- a first element is a shared subjects of interest data element 974 a, which represents content that is specifically relevant to a relationship based on a shared interest in a given area of subject matter. This data element can be provided at varying levels of specificity, with more specific categorization aiding in selecting more relevant content for the users in a relationship.
- the shared subjects of interest element 974 a can include fields corresponding to, for example, facts, news, articles, media, and/or other fields.
- a second element is a concierge data element 974 b that stores data values designed to promote or assist the users' interactions in a pragmatic way, which are used by AI logic 976 to select relevant content for users.
- Example fields for the concierge data element 974 b include reminders (e.g., personal dates such as birthdays, graduations, and anniversaries, and holidays such as Christmas, Mother's Day, Father's Day, and Valentine's Day), activity suggestions (e.g., an enumerated data field including designations such as simple, involved, random, fun, or context-specific suggestions of activities for the users), emergent prompts (e.g., prompts resulting from AI/algorithmic analysis, such as identifying a keyword from natural language processing such as “dinner” prompting links to local restaurants of shared favorite food types), and/or other data fields.
- a third element is a data gather element 974 c that directs in-line prompts that ask about a user, about other users, about relationships, and/or offer the ability to provide quick feedback about the content being delivered.
- a normalized data prompts field that is part of the element 974 c can include prompts that can be stored as key-value pairs, such as “How long have you known each other?”—(Number of Years); “What is the relationship—Brother, sister, mother, father, uncle, nephew, good friend, new friend, co-worker, wife, significant other?”—(Options List); “What interests do you share?”—(Options List); “Would you like more or less prompts with this connection?”—(More/Less); “Did you like the last mediagram message?”—(Y/N).
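- A minimal sketch of storing such normalized data prompts as key-value pairs is shown below; the prompt texts come from the example above, while the dictionary structure and the record_answer helper are illustrative assumptions.

```python
# Minimal sketch of normalized data prompts stored as key-value pairs.
NORMALIZED_PROMPTS = [
    {"prompt": "How long have you known each other?", "answer_type": "number_of_years"},
    {"prompt": "What is the relationship?", "answer_type": "options_list",
     "options": ["brother", "sister", "mother", "father", "uncle", "nephew",
                 "good friend", "new friend", "co-worker", "wife", "significant other"]},
    {"prompt": "What interests do you share?", "answer_type": "options_list"},
    {"prompt": "Would you like more or less prompts with this connection?",
     "answer_type": "options_list", "options": ["more", "less"]},
    {"prompt": "Did you like the last mediagram message?", "answer_type": "yes_no"},
]

def record_answer(relationship_profile, prompt, answer):
    """Store the normalized answer on the relationship profile (a plain dict here)."""
    relationship_profile.setdefault("answers", {})[prompt["prompt"]] = answer
    return relationship_profile

profile = record_answer({}, NORMALIZED_PROMPTS[0], 12)
print(profile)  # {'answers': {'How long have you known each other?': 12}}
```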
- a fourth element is a programs data element 974 d that stores values representing thematic sequences of content and/or prompts.
- a data element 974 d can include any of a variety of fields, such as interpersonal fields that identify current positions along or settings for a sequence of prompts that are more aggressively designed to help learn about the other person, for example a series of prompts on politics, or a helpful series of prompts to help in troubled relationships.
- Such a field may be configurable by users to have varying levels of controversial or difficult questions/content, such as being configured to have a higher likelihood of generating controversial or difficult questions or content.
- User interface features can be output across multiple different user computing devices so that setting configurations are purposefully, voluntarily and mutually requested/agreed upon by the users, as opposed to, for example, being independently instantiated by AI or algorithms.
- the data element 974 d can additionally include informative fields that correspond to sequences of content and/or prompts that are presented to users, such as sequences pertaining to specific subject matter.
- different sequences of content and/or prompts can pertain to the history of the French Revolution, basic car maintenance facts, a biography of Steven Spielberg, and/or others.
- the data element 974 d can also include entertainment fields that correspond to sequences of media content that are presented to users.
- content sequences can include sequences of short stories, sequences of illustrated series, short comic novels, and/or others.
- a fifth element can be an interactive data element 974 e that stores programmatic elements (e.g., applications, programs, routines) that can be run to promote interactions between users at particular points in time.
- the interactive data elements 974 e can include, for example, drawing programmatic elements (e.g., interactive drawing programs, such as collaborative drawing programs), game programmatic elements (e.g., interactive games, such as chess or other strategy games), touch points (e.g., features promoting simple user interactions, such as interactive images), entertainment programmatic elements (e.g., videos, music), and/or other programmatic elements.
- a sixth element can be a promoted data element 974 f, which can include promoted content, such as paid advertising content that can be targeted to users based on relationship profiles, user profiles, and/or other information/factors.
- Promoted data elements 974 f can include, for example, text, links, images, videos, interactive media elements, and/or other types of content containing one or more promotional messages.
- the concept model 942 and the data model 946 can be used to provide profiles 954 , such as profiles modeling individual users (e.g., user profiles) and profiles modeling relationships between multiple users (e.g., relationship profiles for relationships between two users and/or relationships between groups of more than two users).
- profiles 954 can include any of a variety of different types, such as user profiles 956 a, relationship profiles 956 b, relationship histories 956 c, relationship fingerprints 956 d, and/or other profiles.
- Profiles 954 can be used to identify content that is relevant for presentation to users based on any of a variety of factors, such as user preferences (as represented by the user profiles 956 a ), the user relationships (as represented by the relationship profiles 956 b ), historical context for user relationships (as represented by the relationship histories 956 c ), and relationship fingerprints (as represented by the relationship fingerprints 956 d ).
- In FIG. 9C , an example of the profiles 956 a - d being used by an AI logic system 976 to select content from the content store 972 for dissemination ( 980 ) to user devices 982 is depicted.
- content can be selected from the content store 972 using relationship profiles 956 b, which can include personal user data shared between users as well as particular information about the relationship.
- Relationship profiles 956 b can be created by gathering data ( 984 ) from the user devices 982 . For example, users can have full control over viewing, adding to, and editing the content of each of their relationship profiles 956 b.
- A variety of mechanisms can be used to gather the data 984 that builds relationship profiles 956 b, such as the user devices 982 presenting a profile user interface for direct user viewing and editing of relationship profiles 956 b (e.g., viewing and editing various fields/parameters), in-line social network prompts to obtain quick relationship feedback (e.g., prompts designed to unobtrusively allow the user to alter profile settings on-the-fly through quick feedback, such as through one-click responses), indirect relationship feedback from the user devices 982 (e.g., user reaction (or lack thereof) to selected and presented content), and/or other data.
- Relationship profiles 956 b can include a variety of relationship-related data, such as data identifying shared interests, relationship length, relationship type (e.g., brother, sister, friend, co-worker), relationship nature (e.g., serious, light-hearted, romantic, platonic), desired frequency of interaction (e.g., daily, weekly, monthly).
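- A minimal sketch of a relationship profile carrying the fields listed above is shown below; the field names and defaults are illustrative assumptions.

```python
# Minimal sketch of a relationship profile. Field names and defaults are
# illustrative assumptions.
from dataclasses import dataclass, field
from typing import List

@dataclass
class RelationshipProfile:
    user_ids: List[str]                         # the two users in the relationship
    relationship_type: str = "friend"           # e.g., brother, sister, friend, co-worker
    nature: str = "light-hearted"               # e.g., serious, romantic, platonic
    relationship_length_years: float = 0.0
    shared_interests: List[str] = field(default_factory=list)
    desired_frequency: str = "weekly"           # e.g., daily, weekly, monthly

profile = RelationshipProfile(
    user_ids=["userA", "userB"],
    relationship_type="good friend",
    shared_interests=["Steven Spielberg films", "baseball"],
    desired_frequency="daily",
)
print(profile.desired_frequency)  # "daily"
```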
- the user computing devices 982 can present user interfaces designed to allow the user to view data and other inputs being used to build relationship profiles 956 b.
- Such user interfaces can, for example, present condensed relationship information on a relationship profile dashboard screen, present graphical visualization based on factors such as number of shared interests, activity level, number of prompts responded to, etc., and/or other relationship-related graphical elements. Examples of user interface features to visualize relationships are depicted with regard to FIGS. 13A-C .
- the user and relationship profiles 956 b can be generated using the data gathering 984 from the user computing devices 982 , through direct and indirect feedback from the users.
- profile building input can be directly gathered through participation by the user as they populate user and relationship profiles with information.
- User interfaces can allow users to supply data in a variety of ways, such as information supplied about the user, information supplied about relationships, and information supplied about other users.
- direct data prompts can be provided to users directly asking for information, such as small portions of information that can, in some instances, be provided through a “one-click” response and that are easily dismissible by the user in order to be unobtrusive.
- FIG. 14C is a screenshot of an example “one-click” feedback interface in which content 1472 is presented with selectable graphical elements 1474 - 1476 that the user can select with a single click/selection action to provide feedback related to the content 1472 .
- user answers to content prompts (e.g., prompts selected from a messaging store 978 ) can also be used to construct the profiles 956 a - d, such as answers to the questions “what color is your favorite?” presented as a clickable grid of colors, or “which historic figure do you admire most?” presented as a selection of photos.
- usage data indicating how users access and use the system 970 can be recorded and stored, such as usage data indicating when users message in a relationship, how often, how quickly they respond to prompts, where (GPS) do they usually interact with the app, and/or other usage information.
- user interface features 962 can be provided by combining the concept model 942 , the data model 946 , and the content model 950 .
- the user interface features can be selected based on, for example, the profiles 954 and the content classifications 958 .
- Example user interface features are presented in FIGS. 11C-F and 13 B-C.
- FIGS. 11A-B show general user interface features
- FIGS. 11C-F present example specific user interface features that can be selected for presentation to users.
- FIGS. 13B-C present example timing indicators for relationships.
- a touch point is a brief interaction prompted by the system (identified by “Mora”) that can be more fun than demanding or thoughtful and that can typically have a basic level of interactivity.
- the touch point in this example is a prompt to draw a picture of the other user who is part of the relationship (“Anne”).
- Other examples of touch points are “tapping one of three emoji-style faces,” “tapping a photo of Kirk or Picard,” and “drawing a sketch (in-line, in-app) of a Mora suggested subject.” Touch points can target a presence-in-absence objective.
- a shared experience feature can prompt people in a relationship to share a regular but punctuated, periodic and sequenced media experience over an extended period of time. Content can be based primarily on shared subjects of interest.
- the shared experience feature that is presented (identified by “Mora”) regards Steven Spielberg films, which is an interest shared by the users.
- shared experience features include “viewing the biography of a favorite historical figure in a series of bite-size content delivered at a cadence desirable by the user(s),” “playing a game of chess within the interface over the course of weeks or months,” and “sharing thematic sequences of content within the interface is like slowly watching a TV series together.”
- Shared experiences can target a presence-in-absence objective and an engagement & playfulness objective. Shared experiences can have a low risk of negative affective costs.
- an example relationship concierge feature 1170 is presented.
- the relationship concierge can provide reminders, suggestions, assistance, and/or other prompts to assist users with maintaining and improving their relationships.
- a user is reminded that it is his/her friend's birthday, with a suggestion to send a mediagram (also called a MoraGram), and the user then acts on that suggestion by sending a mediagram.
- Other example relationship concierge features can include “it's your uncle's birthday next week. He likes fishing and camping—how about a related gift?” and “it looks like you're planning dinner—would you like suggestions?” Relationship concierges can target opportunities for social support, engagement, and playfulness objectives.
- an example enrichment feature 1180 is presented.
- An enrichment feature can provide prompts and content which encourage learning about the other person on a meaningful level.
- Enrichment features can have an unassuming approach and can avoid the perception of being “clinical.”
- the enrichment feature is prompting the user to share a favorite memory with the other user.
- Enrichment features can target emotional expressiveness objectives.
- FIGS. 13B-C present user interfaces with example relationship timing indicators.
- Timing indicators are a visual display which surfaces the underlying engagement frequency mechanic, the delivery buffering mechanic, or both. The variations below describe benefits and risks that could result from implementation of these mechanics.
- FIG. 13B shows an example user interface with a single timing indicator and
- FIG. 13C shows an example user interface with dual timing indicators. Other quantities of timing indicators are also possible.
- the timing indicators can present visualization for timing related to one or more of the following relationship features:
- content programming and sequencing 964 combines the concept model 942 and the content model 950 .
- Delivering users content that is relevant and specific can be a significant challenge. If content is too general, the user may perceive it as advertising. For example, if the user provides a general interest in sports when, unknown to the system, the user has a specific interest in the Boston Red Sox, attempts to deliver relevant content falling under the general “sports” classification can cause disengagement and frustration by the user (e.g., serving content related to the NFL or to other baseball teams).
- the breadth and depth of the content model 950 can have a significant impact on the relevance of content that is selected for presentation to users, and ultimately on user engagement with the system and other users.
- the content programming and sequencing 964 can include a variety of data elements that are being tracked and used to determine when and what content to serve to users, such as engagement frequency, delivery buffering, ephemerality, privacy of shared information, and/or others.
- Engagement frequency relates to the level of involvement the system has with the user and, more specifically, to particular relationships.
- a user may set a default value for the desired frequency for prompts and content delivered to the user, and can do this individually for each relationship. For example, the user may choose to set a high frequency (e.g. daily) for a significant other while setting a very low frequency for an old acquaintance (e.g. monthly or quarterly).
- Frequency settings can be adjusted through direct and/or indirect user feedback, such as adjusting the timing of these feedback prompts based on analytic data of actual user behavior. For example, if the user regularly delays a response to prompts in a given relationship, a frequency adjustment prompt can be delivered to the user.
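- A minimal sketch of proposing a frequency adjustment from such indirect feedback (regularly delayed responses) is shown below; the frequency ladder, thresholds, and names are illustrative assumptions, and the proposed value could be offered to the user in a frequency adjustment prompt rather than being applied automatically.

```python
# Minimal sketch of engagement-frequency adjustment from response-delay
# behavior. Thresholds, the frequency ladder, and names are illustrative
# assumptions.
FREQUENCIES = ["daily", "weekly", "monthly", "quarterly"]

def proposed_frequency(current, response_delays_hours,
                       delay_threshold_hours=48, delayed_ratio=0.5):
    """Suggest one step toward less frequent prompting when most responses are slow."""
    if not response_delays_hours:
        return current
    slow = sum(1 for d in response_delays_hours if d > delay_threshold_hours)
    if slow / len(response_delays_hours) >= delayed_ratio:
        return FREQUENCIES[min(FREQUENCIES.index(current) + 1, len(FREQUENCIES) - 1)]
    return current

print(proposed_frequency("daily", [72, 90, 5, 60]))  # "weekly"
```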
- Delivery buffering is a mechanism which purposely delays the sending and receiving of messages (e.g., prompt responses by users) by a certain amount of time (e.g., hours, days). Delivery buffering is contrary to conventional social media systems which seek to speed up the pace of user interactions. Delivery buffering can provide a variety of benefits, such as allowing users the ability to recall messages, as needed, and to build anticipation during which both senders and recipients are thinking about each other (e.g., incoming message buffering is visually communicated in the UI, such as FIGS. 13B-C ).
- Ephemerality refers to messages and content sent between users that will be “removed” after a certain period of time.
- the window of time that elapses before content is removed can be controlled by the user(s) per relationship.
- This feature can help preserve user privacy.
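- A minimal sketch of the per-relationship ephemerality window is shown below; the message structure, window value, and names are illustrative assumptions.

```python
# Minimal sketch of per-relationship ephemerality: content is removed once
# the relationship's configured window has elapsed.
import time

def purge_expired(messages, ephemerality_window_seconds, now=None):
    """Drop messages older than the relationship's ephemerality window."""
    now = time.time() if now is None else now
    return [m for m in messages if now - m["sent_at"] < ephemerality_window_seconds]

msgs = [{"id": 1, "sent_at": 0}, {"id": 2, "sent_at": 90_000}]
# 24-hour window, evaluated at t = 100,000 s: message 1 is removed.
print(purge_expired(msgs, 24 * 3600, now=100_000))  # [{'id': 2, 'sent_at': 90000}]
```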
- Privacy of shared information relates to features that purposely limit a user's ability to distribute information shared on the platform.
- the features include disallowing copying and pasting of content from the app to other applications, and discouraging the capture of screen contents via a device's screenshot feature. Where possible, this device feature can be disabled while using the app. However, device manufacturers have typically not allowed the screenshot feature to be disabled, so as a further discouragement the user can be informed that the message sender is notified whenever a screenshot is taken.
- FIG. 10 is a flow chart 1000 with user interfaces 1002 and 1016 to establish an initial connection between users on a social media platform.
- the interfaces 1002 and 1016 can be presented by the social media client 932 on the user computing devices 924 , for example.
- an initial social connection is established between two users.
- the users provide information and answer questions about each other and about their relationship, which the system 906 uses to create and/or improve upon user and relationship profiles that are used by the system 906 (e.g., used by the relationship concierge 916 ).
- a series of information requests and questions 1006 - 1012 are posed to the user for his/her new relationship with the user 1004 .
- usernames can include any of a variety of ASCII characters (including non-alphanumeric characters, such as symbols and operators) as well as icons/emojis/graphics (as indicated by the seashell icon).
- the user is prompted to provide the user's desired prompt frequency for the relationship 1006 , the type of relationship 1008 , common interests among the users 1010 , and types of prompts that the users are interested in 1012 . Responses to these requests assist in initializing the relationship ( 1014 ).
- the user is again presented with a series of information requests and questions 1018 - 1024 .
- the user is prompted to designate a desired level of concierge involvement in the relationship 1018 (e.g., heavy involvement can cause all interactions to pass through the concierge—meaning no freeform exchange outside of concierge prompts, minimal involvement can permit many interactions outside of the concierge), whether the concierge should prompt one or both users at a time 1020 , the prompt types 1022 that the user is interested in, and desired minimal delay for users to interact with each other on the wall 1024 .
- the concierge can be initialized ( 1026 ) and the users can begin socially interacting on the platform ( 1028 ).
- FIGS. 11A-B are screenshots of example user interfaces 1102 and 1114 on an example mobile computing device 1100 for interacting with other users via private walls on a social platform.
- an example home screen interface 1102 provides a list 1112 of the user's friends on the platform along with relationship information for each of the friends.
- Each of the friends is identified by a username 1106 , a relationship status icon 1104 (status of the relationship between the user of the device 1100 and the friend), a relationship rating 1108 (rating of the relationship between the user of the device 1100 and the friend), and information on the last interaction between the users 1110 . More stars for the ratings 1108 indicates a stronger relationship, and fewer stars indicates a weaker relationship. Relationship ratings 1108 can be determined based on a variety of factors, such as points for questions and aggregate point summaries over time.
- the relationships are sorted in the list 1112 in reverse order so that the relationships most in need of attention by the user are seen at the top of the list 1112 .
- the relationships in the list 1112 can be selected to navigate to a private wall for the relationship.
- a private wall for a relationship with user 1142 is presented in the interface 1114 , which includes a variety of different options 1118 - 1132 and 1140 for the user to interact with the other user 1142 .
- the private wall 1114 also includes a chronological view of recent interactions between the users, which includes unprompted messages 1134 and 1136 , as well as a prompt 1138 that has been provided to the user. As indicated by the timestamps of the messages 1134 - 1136 and the prompt 1138 , over two weeks had elapsed since the users had interacted, which is what likely triggered the relationship concierge to provide the prompt 1138 to continue communication between the users.
- the user can respond to the prompt 1138 through the interface 1140 and/or through one or more of the interaction options 1118 - 1132 .
- the user can also elect to ignore the prompt 1138 and/or to indicate dislike of the prompt 1138 .
- the interaction options 1118 - 1132 include an interactive wellness feature 1118 (see FIGS. 17A-H ), a prompt feature 1120 to request another or different prompt (e.g., step outside of current art of prompts from the relationship concierge), questions 1122 , an interactive drawing feature 1124 , a picture sharing feature 1126 , a mediagram creation and sharing feature 1128 , a photo/video sharing feature 1130 , and a games feature 1132 .
- the interface 1114 also includes relationship status information 1144 (rating for the relationship with user 1142 ) and options 1146 to modify settings for the relationship.
- FIGS. 12A-H are screenshots of an example process flow for a relationship concierge facilitating and improving social interactions among users via private walls on a social platform.
- the user device 1200 for Anne includes the interface 1204 for a private wall between Anne and David on Monday at 3:00, which is when Anne receives a prompt 1206 from the relationship concierge.
- the prompt 1206 is accompanied by a field 1208 through which Anne can respond to the prompt.
- the user device 1202 for David does not present any prompts in the interface 1205 , including not presenting the prompt 1206 just given to Anne.
- This scenario represents an initial state where no prior prompt history is visible in David's view.
- Anne enters and submits an answer 1208 to the prompt 1206 given by the relationship concierge, as indicated by the sent status 1210 for the prompt and answer ( 1206 - 1208 ).
- David receives Anne's message 1212 - 1216 .
- the question (or directive) given by the relationship concierge is visible to David ( 1214 ), in addition to the content of her reply ( 1216 ).
- the prompt response can be an icon that, once selected, opens up like a gift with animation.
- Such animation features could additionally be used as ways to present electronic gifts and/or donations to other users and/or organizations (e.g., charitable donations to disaster victims).
- David receives a new prompt 1218 from the relationship concierge, which includes a field 1220 to provide a response.
- the prior history of sent and received prompts is visible in the interface 1205 , but may be removed after a default or user-set amount of time.
- Anne receives a message 1224 - 1228 from David which displays both the prompt 1226 given to David and the content of his reply 1228 .
- FIG. 13A is a screenshot of an example user interface 1300 on a mobile computing device for viewing a user's friends and the corresponding interaction delays until another relationship concierge prompt is expected.
- the user interface 1300 presents a list of friends across a number of different categories, including a “Msg” column 1302 that indicates whether the user of the device presenting the interface 1300 has a message waiting from one of his/her friends.
- Such messages can include, for example, any type of prompt, a mediagram (personalized music video message), etc.
- the unopened gift icons 1308 indicate that the user has not viewed the waiting message yet.
- the opened gift icons 1310 indicate that the user has already viewed all messages sent from the corresponding friend.
- the “Name” column 1304 displays the name of the friend(s) with whom you (the user) are having a private conversation.
- the “Time Until Next” column 1306 indicates an amount of time, which could be either an approximate window of time or a precise amount of time, until the next prompt will arrive for that relationship from the relationship concierge.
- the “Time Until Next” column 1306 could be used to represent additional and/or alternative relationship metrics.
- the “Time Until Next” column 1306 could indicate timers (bars) representing how much time has passed since the user's last communication with a given contact. In such a scenario, the longest bar would be shown on top of the list to highlight the relationship in greatest need of attention. Color distinctions in the timer bars can indicate an “overdue” state where too much time has passed (according to default values or user-set values).
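- The timer-bar variant could be computed roughly as follows; this is a sketch under assumed data structures (a mapping from contact name to last-contact timestamp), not the disclosed implementation.

```python
from datetime import datetime, timedelta

def build_timer_bars(last_contact: dict, overdue_after: timedelta, now: datetime = None):
    """Return (name, elapsed, is_overdue) rows sorted with the longest-idle
    relationship first, so it appears at the top of the list; rows whose
    elapsed time exceeds 'overdue_after' can be drawn in a distinct color."""
    now = now or datetime.utcnow()
    rows = [(name, now - last, (now - last) > overdue_after)
            for name, last in last_contact.items()]
    return sorted(rows, key=lambda row: row[1], reverse=True)

# Example with hypothetical data:
# build_timer_bars({"Anne": datetime(2018, 1, 1), "Dave": datetime(2018, 1, 10)},
#                  overdue_after=timedelta(days=14))
```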
- FIG. 14A is a conceptual diagram of an example personal concierge system and algorithm 1400 for facilitating and improving user relationships on a social network.
- the relationship concierge 1402 is programmed logic designed to interpret and understand the nature of a relationship between two people, as well as the tendencies and behaviors of each individual, and to formulate a forward-looking program of prompts based on those factors.
- the relationship concierge 1402 is largely algorithm-based, using historical user data and user inputs to determine the content of the prompts given to a user.
- the relationship concierge 1402 also incorporates one or more AI techniques and platforms to allow for decisions to be made that are not pre-programmed into the algorithm or pre-determined.
- the relationship concierge 1402 can be allowed to make choices for the users based on emerging patterns of usage and user input.
- the relationship concierge 1402 can use a variety of different data sources 1404 - 1408 to determine and provide prompts to users.
- the relationship concierge 1402 can use historical user behavior data 1404 , which can include, for example, answers to prompts, how long the user takes to respond, the times of day the user responds, how quickly the user responds to certain categories of prompts, how often the user dismisses (rejects) certain types of prompts, and/or other relevant data representing historical user behavior.
- the relationship concierge 1402 can use user adjustment data 1406 , which indicates changes in relationships over time. Users are provided with options to directly supply feedback and information on the nature of their relationships with others. For example, the user could indicate that the relationship for a contact is intimate/romantic in nature, and further that there has been a recent breakup in the relationship, and further that they either want to re-kindle the relationship or to ease it into a platonic relationship. In another example, the user could indicate that the contact is an old friend with whom they would simply like to stay in touch, without delving into deep conversations. Other forms of ongoing direct user input can be provided via in-line feedback options within the ongoing conversation, indicating that they liked or disliked a given prompt type, or that they would like to speed up or slow down the rate at which prompts are supplied.
- the relationship concierge 1402 can use one or more default programs of prompts 1408 based on the standard parameters of input (described above).
- specialized sets of prompts can be centered around a theme that can be chosen by the users. These special sets of prompts, if chosen, can be weighted above the standard parameters.
- the set of special prompts centered on a theme can have a discrete quantity and start/end date, not necessarily known to the users.
- Examples of the special themed programs that can be delivered by the Relationship Concierge can be a series of prompts that have an aim to, for example: reconcile political viewpoint differences, patch up a failing relationship, deeply explore the memories and life of an individual (e.g., a grandmother and granddaughter relationship), explore a specific subject such as philosophy, religious beliefs, and/or lighter subjects such as movies, art, music, sports, etc.
- FIG. 14B is a diagram of an example system 1450 to vary content that is selected for presentation to users.
- Content delivery to users can seek to balance user-reported desirability with natural variation to avoid both extremes of content irrelevance and overly predictable consistency.
- the example system 1450 can be implemented as part of the example platforms/systems 800 , 860 , 900 , 940 , 970 , 1000 , and/or 1400 described above.
- the system 1450 can effectively utilize a feedback loop to provide content and, based on user feedback, to refine the selection of future content that is selected for delivery to users.
- Relationship profiles and history can be used to select content ( 1454 ) for presentation to users.
- the relationship profiles can include data that describes relevant matching characteristics learned by various methods including self-reported data contained in individual profiles, data gathered from algorithms and analytics, and user-reported data about the specific relationship.
- a history of content delivered to the relationship can be stored in order to track, regulate, and plan the flow of content.
- the content can be, for example, taxonomically organized content stored in the system, which is then queried and retrieved based on relevance to the specific relationship.
- Topical interests can be used to refine the content selection (e.g., pare down a large set of content to a smaller subset of content).
- Topical interests are qualitative measures of both implicit relevance (e.g., some content is of more general relevance to a married couple than to friends or co-workers) and explicit relevance (e.g., user-supplied data indicating a shared interest in baseball or the Boston Red Sox). The more specifically the domain of interest is defined, the greater the value of the topical interest.
- the classification of content is stored as part of the topical taxonomy described in a later section.
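- One plausible way to combine implicit and explicit relevance into a single score is sketched below; the weights and the tag-set representation are assumptions, and specificity could additionally be rewarded by weighting deeper taxonomy nodes (e.g., “Boston Red Sox”) above broader ones (e.g., “sports”).

```python
def topical_relevance(content_tags: set, implicit_tags: set, explicit_tags: set,
                      implicit_weight: float = 0.4, explicit_weight: float = 1.0) -> float:
    """Score one piece of content for a relationship: explicit, user-supplied
    interests count more than implicit, relationship-type interests.
    The weights are illustrative defaults, not values from the disclosure."""
    score = implicit_weight * len(content_tags & implicit_tags)
    score += explicit_weight * len(content_tags & explicit_tags)
    return score

# topical_relevance({"sports", "boston red sox"}, {"sports"}, {"boston red sox"}) -> 1.4
```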
- Intensity ( 1458 ) and frequency ( 1460 ) parameters can be used to further refine the content selection.
- Intensity generically refers to where in the spectrum of casual to intimate (or personal) the nature of the content belongs.
- the intensity of any given piece of content is an attribute applied and stored in the metadata taxonomy.
- Frequency can be, for example, a quasi-mutually agreed upon value between two users regulating how often the system will deliver content. For example, if one user sets the initial desired frequency at daily and the other sets the desired frequency at weekly, then the system may set the starting point for the actual delivery frequency at every three days, making the delivery frequency a de facto negotiated value.
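- The three-day starting point in the example above suggests a midpoint between the two requested cadences; the sketch below uses a geometric midpoint, which is one reasonable choice but not necessarily the negotiation rule the system uses.

```python
def negotiated_frequency_days(desired_a_days: float, desired_b_days: float) -> float:
    """Start the delivery cadence roughly midway between two desired
    frequencies, expressed as days between deliveries."""
    midpoint = (desired_a_days * desired_b_days) ** 0.5
    return float(max(1, round(midpoint)))

# negotiated_frequency_days(1, 7) -> 3.0  (daily + weekly starts at every three days)
```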
- the selected content can then be delivered via one or more modules ( 1462 ) (e.g., FIGS. 11C-F ).
- Modules are how content is manifest in the user interface.
- Each module type is a specific combination of media formatting (e.g. text only, text with image, etc.), interactivity characteristics, and categorical purpose (e.g. a reminder, a question prompt, an element of a thematic sequence (see programs), or a promoted ad).
- User feedback can be obtained ( 1464 ) from the UI and used to further refine the relationship profile and history ( 1452 ). For example, as content is delivered, the opportunity for users to provide quick “one-click” feedback will be presented. Occasionally buttons that allow users to tap Less Often/More Often/No Change or More Like This/Less Like This will be attached to a piece of system-delivered content. This feedback is used to adjust the relationship profile data accordingly.
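- A rough sketch of folding that one-click feedback back into the relationship profile is shown below; the feedback labels, the per-tag weight representation, and the 0.2 step size are assumptions for illustration.

```python
# Assumed mapping from one-click feedback buttons to weight adjustments.
FEEDBACK_DELTAS = {"more_like_this": 0.2, "less_like_this": -0.2, "no_change": 0.0}

def apply_one_click_feedback(profile: dict, content_tags: set, feedback: str) -> None:
    """Nudge per-tag preference weights in the relationship profile, bounded
    to [0, 1]; 'more_often'/'less_often' feedback would instead adjust the
    frequency setting and is not handled here."""
    delta = FEEDBACK_DELTAS.get(feedback, 0.0)
    if delta == 0.0:
        return
    weights = profile.setdefault("tag_weights", {})
    for tag in content_tags:
        weights[tag] = min(1.0, max(0.0, weights.get(tag, 0.5) + delta))
```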
- FIGS. 15A-D are screenshots of a relationship concierge being applied to other social platforms providing predominantly open communication among broad groups of users, such as FACEBOOK, TWITTER, LINKEDIN, and/or other social platforms.
- the interface 1500 can be a news feed, for example.
- the interface 1500 includes a post field 1502 through which the user can create and submit a post for distribution across a broad group of users (e.g., friends, fans, followers, public).
- the interface 1500 includes a relationship concierge prompt 1504 that is presented to the user.
- the prompt 1504 is identified as being from the relationship concierge and as being presently visible only to the user ( 1506 ).
- the prompt 1504 identifies that it pertains to the User A (user of the interface 1500 ) not having interacted with User B in over one month and suggests a number of options ( 1508 ).
- the example options include a first option 1510 to like or comment on a recently popular post 1512 of User B. At least a portion of the post 1512 is presented in the interface along with interactive features 1514 and 1516 through which the user can directly interact within the feed. These interactions with the post 1512 , as facilitated by the relationship concierge, will be viewable by and potentially broadcast to a broader audience than just User A and User B.
- a second option 1518 is to publicly post a message on User B's wall. Again, this option includes an interactive feature 1520 to perform the action from within the feed. Also again, this interaction will be viewable by and potentially broadcast to a broader audience than just User A and User B.
- a third option 1522 is to answer a question for User B regarding User A's favorite movie over the past year. Again, this option includes an interactive feature 1524 to perform the action from within the feed. This option, however, provides a select box 1526 through which the User A can designate whether the answer to this question should be delivered as a private message (not initially viewable beyond User A and User B, unless forwarded or shared with other users) or posted to a broader audience. In this example, the User A enters an answer to the question in the third option 1522 and does not select the box 1526 .
- an interface 1528 for User B on the social platform presents a post 1532 for the relationship concierge prompt 1504 and answer 1524 from User A.
- the post 1532 includes information identifying that the User A answered a question posed by the relationship concierge for User B, the question and answer 1536 , and features 1538 - 1540 through which the User A, the User B, and other users can interact with the post 1532 .
- the news feed 1528 for the User B also includes a field for the User B to create a post 1530 and a post from another user 1542 .
- the interface 1500 for the User A on the social platform is again presented with the prompt 1504 from the relationship concierge. However, in this example the user selects the box 1526 to deliver the answer 1524 to the question 1522 privately to User B.
- a private messaging interface 1550 (e.g., FACEBOOK MESSENGER) for the User B on the social platform is presented.
- the interface 1550 depicts the private message 1552 from the User A as well as the question and answer 1554 to the prompt 1504 from the relationship concierge.
- the private message 1552 is presented among other private and group messages 1556 - 1566 for the User B on the social platform.
- FIG. 16 is a diagram depicting creation and use of a private group wall 1600 on a social platform to improve and enhance meaningful social interactions.
- the example group wall 1600 has multiple users 1602 who are members of the group and who are permitted to contribute to the wall 1600 .
- the group organizer is identified at the top of the list with the notation “organizer.”
- the organizing user can designate a variety of parameters for the group, including who is invited/permitted to be a member, permissions for other members to add new members (e.g., friends of original members are able to be added), time limits on the existence of the group wall (e.g., 2 month expiration date), roles for different group members to play within the group (e.g., rock band roles—band member, groupie, fan), and/or other features.
- the group wall can be initiated with a conversation starter, which can be facilitated by the relationship concierge.
- the conversation starter can include, for example, pictures, drawings, memes, videos, news stories, questions, etc.
- if the group organizer needs help finding a topic of common interest, they can use the relationship concierge to create a custom list 1604 of common interests (which can automatically be identified from user profile analysis) and can choose a topic 1606 that most of the participants have in common.
- the selected topic 1606 can be used to insert initial content 1608 into the wall 1600 that pertains to the selected topic 1606 .
- the initial content 1608 includes news articles relevant to the topic 1606 .
- FIGS. 17A-H are screenshots of an example user interface 1702 on a computing device 1700 for users to express and interact with others regarding their emotional well-being.
- the interface 1702 is a visual aid for users to better understand their feelings and improve their mental states.
- the three corners of the interface 1702 (a triangle) represent emotional extremes.
- the top corner (yellow/white) is selfless compassion, a bright ideal to strive for.
- the left corner stands for passion and the right corner represents depression.
- the center circle 1704 represents normality, the sphere of daily emotions.
- a movable pin icon 1706 can be placed at different positions throughout the interface 1702 by the user to represent his/her current emotional state.
- the user can adjust the positioning of the icon 1706 as frequently or infrequently as he/she wants (e.g., hourly, daily, weekly). Increased frequency of use can assist users in understanding and tracking the change in their emotional state over time, and can help them work to improve their moods.
- the three corners 1710 , 1714 , and 1718 of the interface 1702 represent calm, understanding, enlightened, generous, compassionate (top corner 1710 , which can be colored yellow-white); anger, agitated, irritated, passionate (left corner 1718 , which can be colored red); and sad, depressed, down, bored, dispassionate (right corner 1714 , which can be colored blue).
- the three sides 1708 , 1712 , and 1716 of the interface 1702 represent optimistic, enthusiastic, upbeat, joyful (left side 1708 , which can be colored orange); friendly, sociable, agreeable, cool (right side 1712 , which can be colored green); and anxious, upset, concerned, fearful (bottom 1716 , which can be colored purple).
- the top half of the interface 1702 can represent positive, healthy emotions, whereas the bottom half represents negative, less-healthy emotions
- Three different walls on the social platforms 802 and 902 can depict the current mood of the user of the device 1700 —a private wall that is only accessible to the user of the device 1700 and the relationship concierge (see FIG. 17B ), a shared private wall for a relationship between two users (see FIG. 17C ), and a private group wall for more than two users (see FIG. 17D ).
- the pin 1706 indicates the user's current mood.
- the pin 1706 can be positioned, for example, by the user with the three sliders 1720 - 1724 —Calm/Anxious, Friendly/Angry, Optimistic/Depressed.
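- One plausible mapping from the three slider values to a pin position inside the triangle is sketched below; the corner coordinates, slider ranges, and blending rule are assumptions rather than the disclosed interface logic.

```python
# Assumed triangle corners in normalized UI coordinates.
TOP = (0.5, 1.0)    # calm / compassionate ideal
LEFT = (0.0, 0.0)   # angry / passionate
RIGHT = (1.0, 0.0)  # sad / depressed

def pin_position(calm_anxious: float, friendly_angry: float,
                 optimistic_depressed: float) -> tuple:
    """Map slider values in [0, 1] (0 = positive pole, 1 = negative pole)
    to an (x, y) pin position inside the triangle."""
    w_left = friendly_angry
    w_right = optimistic_depressed
    w_top = max(0.0, 1.0 - 0.5 * (w_left + w_right))
    total = w_left + w_right + w_top  # always >= 1 with the weights above
    x = (w_left * LEFT[0] + w_right * RIGHT[0] + w_top * TOP[0]) / total
    y = (w_left * LEFT[1] + w_right * RIGHT[1] + w_top * TOP[1]) / total
    # Anxiety pulls the pin toward the bottom (anxious/fearful) edge.
    y *= 1.0 - 0.5 * calm_anxious
    return (x, y)
```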
- the user may also choose to include input from the relationship concierge and/or other users.
- the user has moved the pin 1706 in response to a sad event occurring (e.g., user's pet just died). The user does this by moving their Optimistic/Depressed slider 1724 to the right into the blue area.
- This action can update the user's interface in other shared walls and/or group walls for the user, for example, in response to the user providing permission for it to be shared in that manner. Sharing the interface 1702 can be a way for users to share their emotional state with others when it may otherwise be difficult to express their emotions. In this example, other users who see the user's current state in the interface may be prompted to respond by sending appropriate mediagrams to the user to help improve his/her mood.
- a shared wall is depicted in which the pin 1706 for the user of the device 1700 is superimposed on the same interface 1702 as another pin 1726 for the other user of the shared wall.
- a group wall is depicted in which the pin 1706 for the user of the device 1700 is superimposed on the same interface 1702 as other pins 1726 - 1730 for the other users who are members of the group wall.
- the current mood of every member in the group can be displayed on the interface 1702 .
- Different group walls can address feelings about different topics, for example.
- Members of the group, including the user of the device 1700 , may choose to use mediagrams or other interactive/social features to interact with other group members to improve their moods. Users who are able to successfully improve the mood of other users through various actions on the social platform can receive positive relationship points, which can factor into relationship ratings.
- different mood goals can be designated for the corners of the interface 1702 . If, for example, the corners represent Compassionate ( 1732 ), Impassionate ( 1736 ), and Dispassionate ( 1734 ), the user can decide to meditate, reflect on relationships, and/or reach out to other members in order to move their icon ( 1706 ) upwards toward the Compassionate ( 1732 ) corner.
- the interface 1702 can be used to represent different strategy vectors 1738 - 1742 .
- users can imagine altering their moods along three Strategy Vectors—Engaged/Detached ( 1740 ), Caring/Selfish ( 1738 ), and Calm/Agitated ( 1742 ).
- Activities for improving mental states with these strategy vectors can include, for example, interacting more with other users, helping to solve another user's problems, and self-help (e.g., meditation, exercise, listening to music, etc.).
- the interface 1702 can be used to represent conflict resolution goals 1744 - 1750 , such as Ultimatum ( 1746 ), Surrender ( 1748 ), Compromise ( 1750 ), and Contentment ( 1744 ). Users can use the interface 1702 to resolve conflict by first choosing an approach—Ultimatum, Surrender, or Compromise—and then adopting a strategy that will lead to Contentment.
- the interface 1702 can be used to assist users coping with grief. For example, a user can follow their progress through the 5 (suggested) stages of grief (Disbelief 1752 , Anger 1754 , Bargaining 1756 , Depression 1758 , and Acceptance 1760 ), eventually improving their moods through understanding 1762 .
- FIGS. 18A-B are flowcharts of example techniques 1800 and 1850 for determining and transmitting prompts to specific relationship private walls on a social platform.
- the example technique 1800 can be for determining and transmitting prompts to users who share a private wall corresponding to their relationship, as part of a relationship concierge.
- the user profiles for the users sharing the wall and the relationship profile between the users can be accessed ( 1802 ).
- Historical interactions between the users via the private wall can be analyzed ( 1804 ).
- the user profiles, the relationship profile, and/or the historical interactions between the users can be used to determine a current state for the relationship between the users ( 1806 ).
- a relationship state can be, for example, a relationship rating or score that is provided to quantify aspects of a user relationship, such as the quality, closeness, and/or other relationship aspects.
- the current relationship state can be compared with other relationship states for other relationships that one or both of the users have ( 1808 ). For example, a comparison can be made to determine whether the current relationship under evaluation is better, the same as, or worse than other relationships.
- the trend of the relationship over time can also be determined by evaluating time sequence relationship states for the users ( 1810 ). For example, an assessment can be performed to determine whether the relationship is improving (i.e., users are becoming closer), staying the same, or declining (i.e., users are becoming more distant). Evaluation of current and trending wellness states that the users have self-reported (e.g., via the interface 1702 ) can also be performed ( 1813 ). For example, the emotional state of each user may be affecting the relationship between the users and may provide insight into corrective actions via prompts that could be taken to improve both the user's wellness state and the relationship.
- a determination of the type of prompt that should be provided to the selected user can be made ( 1818 ). Extending the previous example, in the case of a depressed user, the prompt may be for the positive user to provide something more impactful to the depressed user, like a mediagram. Once the user to receive the prompt has been selected and the prompt type has been identified, the prompt can be transmitted ( 1820 ).
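- A compact sketch of the recipient and prompt-type selection step is below; the wellness scale, threshold, and prompt categories are illustrative assumptions rather than the decision logic of the disclosed technique.

```python
def choose_prompt(wellness: dict, relationship_trend: str) -> dict:
    """Pick which of the two users receives the prompt and what kind of
    prompt to send. 'wellness' maps user id -> score in [-1, 1], where
    negative values indicate a depressed self-reported state."""
    (user_a, score_a), (user_b, score_b) = wellness.items()
    recipient = user_a if score_a >= score_b else user_b  # prompt the more positive user
    if min(score_a, score_b) < -0.5:
        prompt_type = "mediagram"   # something more impactful for a depressed friend
    elif relationship_trend == "declining":
        prompt_type = "question"    # re-engage a fading relationship
    else:
        prompt_type = "reminder"
    return {"recipient": recipient, "type": prompt_type}
```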
- the example technique 1850 can be for determining and transmitting prompts to a personal wall for the user and the personal concierge alone (no other users permitted on the private wall).
- the user's profile can be accessed ( 1852 ) and can be used to determine whether any upcoming events exist for the user or the user's friends ( 1854 ). At appropriate times, reminders for such upcoming events can be provided on the personal wall for the user ( 1856 ). A determination can be made as to whether any user-set reminders are upcoming ( 1858 ). At appropriate times, reminders for such user-set reminders can be provided on the personal wall for the user ( 1860 ).
- FIG. 19 is a flowchart of an example technique 1900 for determining and transmitting delays between interactions on a social platform.
- the user profiles and the relationship profile for the users can be accessed ( 1902 ) and, along with historical data for the users and the relationship, can be used to determine a historical cadence of interactions between the users ( 1904 ).
- the current type of interaction that would be delayed can be identified ( 1906 ), a current status of the relationship between the users can be determined ( 1908 ), and the current relationship trend for the users can be determined ( 1910 ).
- a determination can be made as to whether or not a response to the current interactions between the users should be delayed ( 1912 ).
- For example, if the relationship is currently strong and the current type of interaction is a mediagram, then a delay in the response may be appropriate.
- In another example, if the relationship is currently weaker and is trending in decline, then either no delay or a minimal delay may be instructed.
- Other ways and outcomes for determining whether a delay is appropriate are also possible.
- instructions can be provided to permit the user of the client device to respond without a delay ( 1915 ). If a delay is determined to be needed, then the delay length can be determined based on one or more of the factors determined in 1902 - 1910 ( 1914 ). For example, if the user relationship is trending upward and the users typically have a lengthier cadence of interactions, then a longer delay can be determined. In another example, if the relationship is trending downward, then a shorter delay may be determined. Once the delay and the delay length have been determined, instructions for instituting the delay on a client device can be transmitted ( 1916 ).
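- The delay decision could be sketched as follows; the strength scale, the fraction of the historical cadence, and the mediagram floor are assumptions used only to make the factors from 1902 - 1910 concrete.

```python
from datetime import timedelta

def response_delay(cadence: timedelta, relationship_strength: float,
                   trend: str, interaction_type: str) -> timedelta:
    """Return how long to buffer a response. Strong, upward-trending
    relationships with a long natural cadence tolerate longer delays;
    weak or declining relationships get little or no delay."""
    if relationship_strength < 0.3 or trend == "declining":
        return timedelta(0)               # no delay for at-risk relationships
    delay = cadence * 0.25                # a fraction of the usual interaction gap
    if trend == "improving":
        delay *= 1.5
    if interaction_type == "mediagram":
        delay = max(delay, timedelta(hours=4))  # let anticipation build
    return delay
```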
- FIG. 20 is a flowchart of an example technique 2000 for determining relationship ratings on a social platform.
- the user profiles and the relationship profile for the users can be accessed ( 2002 ) and the historical interactions between the users can be accessed ( 2004 ).
- Relationship points, which can be at least one of the metrics by which relationships are rated, can be allocated for each of the interactions ( 2006 ).
- Allocated points can then be weighted more heavily for interactions that indicate relationship strength (e.g., smaller time gaps between interactions, improved wellness evaluations following interactions, more significant interactions (e.g., mediagrams sent frequently)), and weighted less for interactions that indicate relationship weakness (e.g., longer time gaps between interactions, decreased or flat wellness evaluations following interactions, less significant interactions). For example, allocated points can be weighted based on time intervals between interactions ( 2008 ) and allocated points can be weighted based on correlations between wellness ratings and interactions ( 2010 ). Other weighting schemes are also possible.
- the trend of weighted point allocations over time can be determined by evaluating a time series of weighted points for the relationship ( 2012 ). If the relationship is trending toward improvement—meaning that the weighted point allocations generally increase over time—then additional positive trend points can be awarded to the relationship ( 2014 ).
- Weighted points can be aggregated ( 2016 ) and used to determine a relationship rating ( 2018 ). For example, the aggregate weighted points can be evaluated over the time period within which they occur to determine one or more normalized statistics for the relationship (e.g., average weighted points per time unit (e.g., day, week, month), median point value, standard deviation of point values).
- the relationship rating can be output and used to infer the state of the relationship ( 2020 ).
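- A loose sketch of the weighting, trend bonus, and normalization steps of technique 2000 is shown below; the specific weights, the ten-points-per-week scale, and the star cap are assumptions for illustration.

```python
import statistics

def relationship_rating(interactions: list, max_stars: int = 5) -> int:
    """Aggregate (timestamp, base_points, wellness_delta) interactions into a
    star rating: weight points by interaction gaps and wellness correlation,
    add a positive-trend bonus, then normalize per week of history."""
    if not interactions:
        return 1
    interactions = sorted(interactions)
    weighted, prev_ts = [], None
    for ts, points, wellness_delta in interactions:
        weight = 1.0
        if prev_ts is not None and (ts - prev_ts).days <= 3:
            weight += 0.5                 # small gaps between interactions
        if wellness_delta > 0:
            weight += 0.5                 # interaction improved wellness
        weighted.append(points * weight)
        prev_ts = ts
    if len(weighted) >= 2:                # positive trend bonus
        half = len(weighted) // 2
        if statistics.mean(weighted[half:]) > statistics.mean(weighted[:half]):
            weighted.append(0.5 * statistics.mean(weighted))
    weeks = max(1.0, (interactions[-1][0] - interactions[0][0]).days / 7.0)
    return max(1, min(max_stars, round(sum(weighted) / weeks / 10)))
```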
- FIGS. 21A-B are flowcharts of example techniques 2100 and 2150 for creating and using private group walls on a social platform.
- the technique 2100 is one in which user-initiated group creation takes place.
- a user selects an option to create a group wall ( 2102 ) and the user (now the group creator) designates users to be a part of the group ( 2104 ).
- a prompt can be determined for the group based on the users in the group ( 2106 ). For example, a prompt to initialize social interactions on the group wall can be determined based on interests for users in the group.
- the prompt can be provided to one, some, or all of the users in the group ( 2108 ).
- users can transmit responses and other interactions that are inserted into the group wall as well ( 2110 ).
- the group wall can include self-policing features by which group members can flag inappropriate content and/or inappropriate members for the group. Flagged content can be provided to the creator and/or other users in the group for review and possible deletion ( 2112 ). Similarly, flagged users can be provided to the creator and/or other users in the group for review and possible removal from the group. Suggestions for additional or new users to be added to the group can also be provided to the creator and/or other users in the group for approval ( 2114 ). The steps 2106 - 2114 can repeat for a threshold period of time, after which the group wall can automatically end ( 2116 ).
- the technique 2150 is one in which automatic (non-user-initiated) group creation takes place.
- user profiles and relationship profiles can be analyzed to identify users to automatically include in the group ( 2152 ). For example, users who share common interests and who have one or more preexisting connections to one or more other people in the pool of candidates for the group can be added to the group (each member of the group does not need a preexisting connection with every other member of the group).
- a concierge-created group wall can be created with users who are fans of a sports team that recently won a big game or championship.
- the group can be automatically created and the members of the group can be notified ( 2154 ).
- the concierge organizing the automatic group can seed the automatically created group wall with a starting prompt (and subsequent follow-on prompts) ( 2156 ). Users can interact with each other on the group wall in response to the prompt ( 2158 ). One or more users of the group can be designated to moderate the group wall ( 2160 ). In some implementations, the group wall does not allow invitation of random or connected additional contacts. After a pre-set expiration time (e.g., 24 hours, 2 days, 7 days), the group can end automatically ( 2162 ).
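- Referring back to the member-selection step 2152 , a minimal sketch of choosing candidates who share an interest and have at least one preexisting connection inside the candidate pool might look like the following; the profile and connection representations are assumptions.

```python
def auto_group_candidates(profiles: dict, connections: set, topic: str,
                          min_size: int = 3) -> set:
    """Select users for a concierge-created group: every candidate shares the
    topic, and each selected member has at least one preexisting connection
    to some other selected member (not necessarily to all of them)."""
    interested = {u for u, p in profiles.items() if topic in p.get("interests", set())}
    group = {u for u in interested
             if any((u, v) in connections or (v, u) in connections
                    for v in interested if v != u)}
    return group if len(group) >= min_size else set()

# Example: fans of a team that just won a championship, linked by friendships.
# auto_group_candidates({"a": {"interests": {"red sox"}}, "b": {"interests": {"red sox"}}},
#                       connections={("a", "b")}, topic="red sox", min_size=2)
```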
- FIG. 22 is a block diagram of example computing devices 2200 , 2250 that may be used to implement the systems and methods described in this document, as either a client or as a server or plurality of servers.
- Computing device 2200 is intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers.
- Computing device 2200 is further intended to represent any other typically non-mobile devices, such as televisions or other electronic devices with one or more processors embedded therein or attached thereto.
- Computing device 2250 is intended to represent various forms of mobile devices, such as personal digital assistants, cellular telephones, smartphones, and other computing devices.
- the components shown here, their connections and relationships, and their functions, are meant to be examples only, and are not meant to limit implementations of the inventions described and/or claimed in this document.
- Computing device 2200 includes a processor 2202 , memory 2204 , a storage device 2206 , a high-speed controller 2208 connecting to memory 2204 and high-speed expansion ports 2210 , and a low-speed controller 2212 connecting to low-speed bus 2214 and storage device 2206 .
- Each of the components 2202 , 2204 , 2206 , 2208 , 2210 , and 2212 are interconnected using various busses, and may be mounted on a common motherboard or in other manners as appropriate.
- the processor 2202 can process instructions for execution within the computing device 2200 , including instructions stored in the memory 2204 or on the storage device 2206 to display graphical information for a GUI on an external input/output device, such as display 2216 coupled to high-speed controller 2208 .
- multiple processors and/or multiple buses may be used, as appropriate, along with multiple memories and types of memory.
- multiple computing devices 2200 may be connected, with each device providing portions of the necessary operations (e.g., as a server bank, a group of blade servers, or a multi-processor system).
- the memory 2204 stores information within the computing device 2200 .
- the memory 2204 is a computer-readable medium.
- the memory 2204 is a volatile memory unit or units.
- the memory 2204 is a non-volatile memory unit or units.
- the storage device 2206 is capable of providing mass storage for the computing device 2200 .
- the storage device 2206 is a computer-readable medium.
- the storage device 2206 may be a floppy disk device, a hard disk device, an optical disk device, or a tape device, a flash memory or other similar solid state memory device, or an array of devices, including devices in a storage area network or other configurations.
- a computer program product is tangibly embodied in an information carrier.
- the computer program product contains instructions that, when executed, perform one or more methods, such as those described above.
- the information carrier is a computer- or machine-readable medium, such as the memory 2204 , the storage device 2206 , or memory on processor 2202 .
- the high-speed controller 2208 manages bandwidth-intensive operations for the computing device 2200 , while the low-speed controller 2212 manages lower bandwidth-intensive operations. Such allocation of duties is an example only.
- the high-speed controller 2208 is coupled to memory 2204 , display 2216 (e.g., through a graphics processor or accelerator), and to high-speed expansion ports 2210 , which may accept various expansion cards (not shown).
- low-speed controller 2212 is coupled to storage device 2206 and low-speed bus 2214 .
- the low-speed bus 2214 (e.g., a low-speed expansion port), which may include various communication ports (e.g., USB, Bluetooth, Ethernet, wireless Ethernet), may be coupled to one or more input/output devices, such as a keyboard, a pointing device, a scanner, or a networking device such as a switch or router, e.g., through a network adapter.
- the computing device 2200 may be implemented in a number of different forms, as shown in the figure. For example, it may be implemented as a standard server 2220 , or multiple times in a group of such servers. It may also be implemented as part of a rack server system 2224 . In addition, it may be implemented in a personal computer such as a laptop computer 2222 . Alternatively, components from computing device 2200 may be combined with other components in a mobile device (not shown), such as computing device 2250 . Each of such devices may contain one or more of computing devices 2200 , 2250 , and an entire system may be made up of multiple computing devices 2200 , 2250 communicating with each other.
- Computing device 2250 includes a processor 2252 , memory 2264 , an input/output device such as a display 2254 , a communication interface 2266 , and a transceiver 2268 , among other components.
- the computing device 2250 may also be provided with a storage device, such as a micro-drive or other device, to provide additional storage.
- Each of the components 2250 , 2252 , 2264 , 2254 , 2266 , and 2268 are interconnected using various buses, and several of the components may be mounted on a common motherboard or in other manners as appropriate.
- the processor 2252 can process instructions for execution within the computing device 2250 , including instructions stored in the memory 2264 .
- the processor may also include separate analog and digital processors.
- the processor may provide, for example, for coordination of the other components of the computing device 2250 , such as control of user interfaces, applications run by computing device 2250 , and wireless communication by computing device 2250 .
- Processor 2252 may communicate with a user through control interface 2258 and display interface 2256 coupled to a display 2254 .
- the display 2254 may be, for example, a TFT LCD display or an OLED display, or other appropriate display technology.
- the display interface 2256 may comprise appropriate circuitry for driving the display 2254 to present graphical and other information to a user.
- the control interface 2258 may receive commands from a user and convert them for submission to the processor 2252 .
- an external interface 2262 may be provided in communication with processor 2252 , so as to enable near area communication of computing device 2250 with other devices. External interface 2262 may provide, for example, for wired communication (e.g., via a docking procedure) or for wireless communication (e.g., via Bluetooth® or other such technologies).
- the memory 2264 stores information within the computing device 2250 .
- the memory 2264 is a computer-readable medium.
- the memory 2264 is a volatile memory unit or units.
- the memory 2264 is a non-volatile memory unit or units.
- Expansion memory 2274 may also be provided and connected to computing device 2250 through expansion interface 2272 , which may include, for example, a subscriber identification module (SIM) card interface.
- expansion memory 2274 may provide extra storage space for computing device 2250 , or may also store applications or other information for computing device 2250 .
- expansion memory 2274 may include instructions to carry out or supplement the processes described above, and may include secure information also.
- expansion memory 2274 may be provided as a security module for computing device 2250 , and may be programmed with instructions that permit secure use of computing device 2250 .
- secure applications may be provided via the SIM cards, along with additional information, such as placing identifying information on the SIM card in a non-hackable manner.
- the memory may include, for example, flash memory and/or MRAM memory, as discussed below.
- a computer program product is tangibly embodied in an information carrier.
- the computer program product contains instructions that, when executed, perform one or more methods, such as those described above.
- the information carrier is a computer- or machine-readable medium, such as the memory 2264 , expansion memory 2274 , or memory on processor 2252 .
- Computing device 2250 may communicate wirelessly through communication interface 2266 , which may include digital signal processing circuitry where necessary. Communication interface 2266 may provide for communications under various modes or protocols, such as GSM voice calls, SMS, EMS, or MMS messaging, CDMA, TDMA, PDC, WCDMA, CDMA2000, or GPRS, among others. Such communication may occur, for example, through transceiver 2268 (e.g., a radio-frequency transceiver). In addition, short-range communication may occur, such as using a Bluetooth, WiFi, or other such transceiver (not shown). In addition, GPS receiver module 2270 may provide additional wireless data to computing device 2250 , which may be used as appropriate by applications running on computing device 2250 .
- Computing device 2250 may also communicate audibly using audio codec 2260 , which may receive spoken information from a user and convert it to usable digital information. Audio codec 2260 may likewise generate audible sound for a user, such as through a speaker, e.g., in a handset of computing device 2250 . Such sound may include sound from voice telephone calls, may include recorded sound (e.g., voice messages, music files, etc.) and may also include sound generated by applications operating on computing device 2250 .
- the computing device 2250 may be implemented in a number of different forms, as shown in the figure. For example, it may be implemented as a cellular telephone 2280 . It may also be implemented as part of a smartphone 2282 , personal digital assistant, or other mobile device.
- implementations of the systems and techniques described here can be realized in digital electronic circuitry, integrated circuitry, specially designed ASICs (application specific integrated circuits), computer hardware, firmware, software, and/or combinations thereof.
- These various implementations can include implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, coupled to receive data and instructions from, and to transmit data and instructions to, a storage system, at least one input device, and at least one output device.
- the systems and techniques described here can be implemented on a computer having a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to the user and a keyboard and a pointing device (e.g., a mouse or a trackball) by which the user can provide input to the computer.
- Other kinds of devices can be used to provide for interaction with a user as well; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user can be received in any form, including acoustic, speech, or tactile input.
- the systems and techniques described here can be implemented in a computing system that includes a back end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front end component (e.g., a client computer having a graphical user interface or a Web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back end, middleware, or front end components.
- the components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include a local area network (“LAN”), a wide area network (“WAN”), and the Internet.
- the computing system can include clients and servers.
- a client and server are generally remote from each other and typically interact through a communication network.
- the relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
Description
- This application is a continuation of U.S. application Ser. No. 15/860,545, filed Jan. 2, 2018, which claims the benefit of priority to U.S. Provisional Application Ser. No. 62/441,081, filed Dec. 30, 2016, entitled MEDIA PERSONALIZATION AND SOCIAL MEDIA PLATFORM, the entire contents of which are hereby incorporated by reference.
- This specification relates to technology for efficiently generating digital video files to include personalized content.
- Media personalization software, such as video and audio editing software, provides users with features that can be used to combine media content (e.g., videos, audio, images, text) in various ways. For example, video editing software can allow a user to trim video clips, combine video clips, add audio tracks, and add graphics, images, and text. Video editing software can rely on users to retrieve and identify video clips for editing, to determine the manner and timing with which video clips are combined, and to determine the ultimate composition of the final video.
- While social media platforms (e.g., FACEBOOK, TWITTER, LINKEDIN, INSTAGRAM) vary in their approach to online interactions between users, social media platforms generally provide features through which users can share information and interact with a broader collection of users on the platform. For example, users on social media platforms can post content that is then distributed to other users on the social media platform, such as friends, followers, or fans of the user posting the content. Such distribution of content among users can be non-private in that it is broadcast among a broad group of users, which can sometimes be people without any sort of social connection to the posting user.
- This document generally describes improved technology for personalizing media content to more consistently and efficiently generate emotionally impactful personalized media content. Computer systems, techniques, and devices are described for automating the personalization of media content to ensure that personalized content is presented at appropriate times and places on the underlying media content that is being personalized, and to ensure that the quality of the underlying media content is undisturbed by the personalization. For example, music and videos often have “chill” moments that are emotionally impactful for listeners/viewers, such as the chorus in the song “Let It Go” from the movie Frozen. The technology described in this document can automate generation of a personalized “mediagram” (personalized media content conveying a message) based on an excerpt of the song “Let It Go” with personalization (e.g., text, images, video, audio) at appropriate times and locations around the song's chorus to provide an emotionally impactful message that leverages the chill moment from the song.
- This document also generally describes an improved social platform to enhance the quality of social interactions and relationships among users. Such a social platform can include a variety of features, such as private communication channels between users centered around the user relationships, relationship concierge features to facilitate and improve the quality of social interactions, time delays between social interactions to alleviate the pressure and stress on users of needing to respond quickly, temporary social interactions that are inaccessible to users involved in the interactions after a threshold period of time, private group communication channels and group relationship concierge features, relationship scoring features, personalized media content creation and distribution features, interactive and social emotional well-being meters through which users can identify their own emotional state and the states of other users, and/or combinations thereof. Such features can assist users in building and maintaining strong relationships with other users.
- In one implementation, a method for automatically generating personalized videos includes outputting, in a user interface on a client computing device, information identifying a plurality of preselected videos, wherein each of the plurality of preselected videos (i) is an excerpt of a longer video and (ii) includes at least one emotionally impactful moment; receiving, through an input subsystem on the client computing device, selection of a particular video from the plurality of preselected videos; retrieving, by the client computing device and in response to receiving the selection, the particular video and a personalization template for the particular video, wherein the personalization template designates particular types of media content to be combined with the video at particular locations to maximize the at least one emotionally impactful moment for an intended recipient; outputting, by the client computing device, a plurality of input fields prompting the user to select the particular types of media content from one or more personal repositories of media content; automatically retrieving, in response to user selections through the plurality of input fields, personal media content from the one or more personal repositories of media content; automatically assembling, by the client computing device and without user involvement beyond the user selections, a personalized video that includes the particular video and the personal media content at the particular locations in the video; and outputting, by the client computing device, the personalized video.
- Such an implementation can optionally include one or more of the following features. The longer videos can be full-length music videos that include audio tracks containing full-length songs. The plurality of preselected videos can include audio tracks containing excerpts of the full-length songs. An audio track for the particular video, in its entirety, can be an excerpt of the audio track for a particular longer video. A video track for the particular video can include (i) a first portion that is an excerpt of a video track for the particular longer video and (ii) one or more second portions that are filler video not from the particular longer video. The one or more second portions of the particular video can be locations in the particular video where personalized video tracks derived from the personal media content are automatically inserted. The video tracks derived from the personal media content are not inserted at or over the first portion. The first portion of the video track can correspond to an emotionally impactful moment in the particular video. Personal media content that is designated as being the most emotionally impactful from among the personal media content can be automatically positioned immediately following the first portion. The longer videos can be full-length movies that include audio tracks containing full-length movie sound tracks. The plurality of preselected videos can include audio tracks containing excerpts of the full-length movie sound tracks.
- The personal media content can include one or more of: digital photos, digital videos, and personalized text. The method can further include automatically analyzing, by the client computing device, waveforms for another longer video to automatically identify an emotionally impactful moment; determining, by the client computing device, starting and end points within the other longer video based on intro and outro transition points within a threshold time of the emotionally impactful moment in the other longer video; automatically generating, by the client computing device, a video excerpt from the other longer video using the starting and end points; and adding the video excerpt from the other longer video to the plurality of preselected videos. The method can further include generating a personalization template for the video excerpt from the other longer video based, at least in part, on the location of the emotionally impactful moment within the video excerpt. The automatic waveform analysis can be performed based on one or more of the following waveform characteristics: mode, volume, tempo, mood, tone, and pitch. The personalized video can include a mediagram that is intended to provide an emotionally impactful message that is specifically tailored to a relationship between a sender and recipient.
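- The waveform-based excerpt generation described above can be illustrated with the following sketch, which treats a rise in windowed audio energy as a crude proxy for an emotionally impactful moment and picks intro/outro transition points within a threshold distance of that moment. The energy heuristic, the threshold, and all function names are assumptions made for illustration; a production analysis would also weigh mode, tempo, mood, tone, and pitch.

```python
from typing import List, Tuple

def windowed_energy(samples: List[float], window: int) -> List[float]:
    """Mean absolute amplitude per fixed-size window of the audio waveform."""
    return [sum(abs(s) for s in samples[i:i + window]) / window
            for i in range(0, len(samples) - window + 1, window)]

def find_impactful_window(energy: List[float]) -> int:
    """Use the loudest window as a crude proxy for the impactful moment."""
    return max(range(len(energy)), key=energy.__getitem__)

def find_transitions(energy: List[float], jump: float = 0.3) -> List[int]:
    """Windows where energy changes sharply, treated as intro/outro transitions."""
    peak_level = max(energy)
    return [i for i in range(1, len(energy))
            if abs(energy[i] - energy[i - 1]) > jump * peak_level]

def excerpt_bounds(energy: List[float], threshold_windows: int) -> Tuple[int, int]:
    """Pick the nearest transitions within a threshold before/after the peak."""
    peak = find_impactful_window(energy)
    transitions = find_transitions(energy)
    before = [t for t in transitions if peak - threshold_windows <= t <= peak]
    after = [t for t in transitions if peak <= t <= peak + threshold_windows]
    start = min(before) if before else max(peak - threshold_windows, 0)
    end = max(after) if after else min(peak + threshold_windows, len(energy) - 1)
    return start, end

if __name__ == "__main__":
    import math
    # Synthetic waveform: quiet verse, loud chorus, quiet outro.
    wave = [0.1 * math.sin(i / 5.0) for i in range(4000)]
    wave += [0.9 * math.sin(i / 5.0) for i in range(2000)]
    wave += [0.1 * math.sin(i / 5.0) for i in range(4000)]
    energy = windowed_energy(wave, window=500)
    print(excerpt_bounds(energy, threshold_windows=4))
```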
- In another implementation, a method for providing a social media platform for enhancing and improving social interactions among users includes retrieving, by a relationship concierge running on a social media system, (i) user profiles for a first user and a second user, and (ii) a relationship profile for a relationship between the first user and the second user; retrieving, by the relationship concierge, historical interactions among the first user and the second user on the social media platform; determining, by the relationship concierge, whether to provide a social interaction prompt to one or more of the first user and the second user based on the user profiles, the relationship profile, and the historical interactions, wherein the social interaction prompt provides a call to action for interaction within the relationship between the first user and the second user; identifying, in response to determining that the social interaction prompt is to be provided, the first user from among the first and second users as the recipient of the social interaction prompt; automatically transmitting, by the relationship concierge and without a request from either the first or second user, the social interaction prompt to a first computing device for the first user, wherein the social interaction prompt is only visible to the first user until the first user responds to the prompt; receiving, at the relationship concierge, a response to the social interaction prompt; automatically transmitting, by the relationship concierge and without a request from either the first or second user, the response with the social interaction prompt to a second computing device for the second user, wherein the social interaction prompt and the response are presented to the second user by the second computing device based on the first user having provided the response.
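- As a rough, hypothetical sketch of the concierge's determination step, the following code decides whether a prompt is due for a relationship and which user should receive it, based only on recency of interaction. The cadence field, the tie-breaking rule, and all names are illustrative; an actual concierge could weigh the user profiles and the relationship profile far more richly.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import List, Optional

@dataclass
class Interaction:
    sender: str
    timestamp: datetime

@dataclass
class RelationshipProfile:
    user_a: str
    user_b: str
    preferred_cadence_days: int = 3   # illustrative tuning value

def choose_prompt_recipient(profile: RelationshipProfile,
                            history: List[Interaction],
                            now: Optional[datetime] = None) -> Optional[str]:
    """Return the user who should receive a prompt, or None if no prompt is due.

    A prompt is due when the relationship has been quiet longer than its
    preferred cadence; it is addressed to whichever user spoke less recently.
    """
    now = now or datetime.now()
    if history:
        last = max(history, key=lambda i: i.timestamp)
        if now - last.timestamp < timedelta(days=profile.preferred_cadence_days):
            return None  # the relationship is already active
        # Prompt the user who has been silent the longest.
        last_by_user = {u: datetime.min for u in (profile.user_a, profile.user_b)}
        for i in history:
            last_by_user[i.sender] = max(last_by_user[i.sender], i.timestamp)
        return min(last_by_user, key=last_by_user.get)
    return profile.user_a  # no history yet: prompt the relationship initiator

if __name__ == "__main__":
    history = [Interaction("alice", datetime(2023, 1, 1)),
               Interaction("bob", datetime(2023, 1, 4))]
    profile = RelationshipProfile("alice", "bob")
    print(choose_prompt_recipient(profile, history, now=datetime(2023, 1, 10)))
```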
- Such an implementation can optionally include one or more of the following features. The social interaction prompt can include a question that is posed to the first user. The social interaction prompt can include the first user being directed to create a mediagram for the second user. The mediagram can include a personalized video segment that is automatically personalized to provide an emotionally impactful message that is particularly tailored to the relationship between the first and second users. The social interaction prompt can include an interactive game to be played by the first and second users. The first and second users can interact on the social platform via a private wall that is exclusive to the first and second users. The social interaction prompt can be initially only visible on the private wall to the first user. The social interaction prompt can become visible on the private wall to the second user only after and in combination with the response to the social interaction prompt by the first user. The second user can be delayed from replying to the response for at least a threshold period of time following the response and the social interaction prompt appearing to the second user on the private wall. The response and the social interaction prompt can be automatically deleted from the private wall after a threshold amount of time or interactions have elapsed since they appeared on the private wall.
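- The visibility, delay, and deletion rules listed above can be modeled with a small state sketch such as the following, in which an unanswered prompt is visible only to its recipient, an answered prompt/response pair becomes visible to both users, and the pair expires after a retention window. The class, the retention default, and the method names are hypothetical.

```python
from datetime import datetime, timedelta
from typing import Dict, List

class PrivateWall:
    """Toy model of the prompt-visibility rules on a two-person private wall."""

    def __init__(self, first_user: str, second_user: str,
                 retention: timedelta = timedelta(days=7)):
        self.users = (first_user, second_user)
        self.retention = retention
        self.posts: List[Dict] = []

    def add_prompt(self, recipient: str, text: str, now: datetime) -> None:
        self.posts.append({"prompt": text, "recipient": recipient,
                           "response": None, "answered_at": None, "created_at": now})

    def respond(self, post_index: int, response: str, now: datetime) -> None:
        post = self.posts[post_index]
        post["response"] = response
        post["answered_at"] = now

    def visible_to(self, user: str, now: datetime) -> List[Dict]:
        """Unanswered prompts are visible only to their recipient; answered
        prompt/response pairs are visible to both until the retention window ends."""
        visible = []
        for post in self.posts:
            answered = post["response"] is not None
            expired = answered and now - post["answered_at"] > self.retention
            if expired:
                continue
            if answered or post["recipient"] == user:
                visible.append(post)
        return visible

if __name__ == "__main__":
    wall = PrivateWall("alice", "bob")
    t0 = datetime(2023, 1, 1)
    wall.add_prompt("alice", "What made you smile today?", t0)
    print(len(wall.visible_to("bob", t0)))                        # 0: not yet visible to bob
    wall.respond(0, "Coffee with Bob.", t0 + timedelta(hours=2))
    print(len(wall.visible_to("bob", t0 + timedelta(hours=3))))   # 1: prompt plus response
    print(len(wall.visible_to("bob", t0 + timedelta(days=9))))    # 0: expired
```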
- In another implementation, a computer-implemented method includes receiving from a first user a selection of a sub-portion of a music video that includes audio from a sub-portion of a song and video that corresponds to the audio; receiving from the first user personalization content entered into a template that designates particular types of media content to be combined with the music video at particular locations of the music video; providing, to a second user who was designated by the first user, an indication that the content is available for review by the second user; and providing, to the second user and in response to a second user confirmation of the provided indication, the sub-portion of the music video in combination with the personalization content.
- Such an implementation can optionally include one or more of the following features. The method can further include previously determining, for each of a plurality of music videos, sub-portions that will have an increased impact on a viewer as compared to other sub-portions of the music videos. Determining the portions can include manually reviewing the videos with trained human classifiers. Determining the portions can include identifying which portions of particular videos are played the most by visitors to one or more on-line video sites. Determining the portions can include performing automatic music analysis of a plurality of different music videos to identify musical patterns previously determined to have an emotional effect on a typical listener. The personalization content can include a textual message entered by the first user. The second user can be provided with one or more bumpers created by the first user and appended to the front, back, or both of the video sub-portion.
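- One of the determination approaches mentioned above, identifying which portions of a video are played the most, can be sketched as follows using simulated play logs. The bin size, window length, and function names are illustrative assumptions.

```python
from typing import Iterable, Tuple

def most_replayed_window(play_ranges: Iterable[Tuple[float, float]],
                         duration: float,
                         window_seconds: float = 30.0,
                         bin_seconds: float = 1.0) -> Tuple[float, float]:
    """Given (start, end) second ranges that viewers actually played, return the
    window_seconds-long sub-portion with the highest total watch count."""
    bins = [0] * int(duration / bin_seconds)
    for start, end in play_ranges:
        for b in range(int(start / bin_seconds),
                       min(int(end / bin_seconds), len(bins))):
            bins[b] += 1
    window_bins = int(window_seconds / bin_seconds)
    best_start, best_score = 0, -1
    for i in range(0, len(bins) - window_bins + 1):
        score = sum(bins[i:i + window_bins])
        if score > best_score:
            best_start, best_score = i, score
    return best_start * bin_seconds, best_start * bin_seconds + window_seconds

if __name__ == "__main__":
    # Simulated logs: most viewers replay 60 s-95 s (e.g., the chorus).
    logs = [(0, 120)] * 10 + [(60, 95)] * 40
    print(most_replayed_window(logs, duration=240.0))
```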
- In another implementation, a system for generating digital media files includes a digital media file repository, a frontend system, a backend system, and a digital media distribution system. The digital media file repository stores a plurality of preselected digital video files that are excerpts of longer digital video files. The plurality of preselected digital video files are encoded in a common digital video codec and are stored with metadata that identifies times within the plurality of preselected digital video files at which emotionally impactful moments occur.
- The frontend system is in communication with client computing devices. The frontend system receives digital media file content generation requests from the client computing devices that include parameters identifying particular preselected digital video files to be combined with personal digital media files to generate personalized digital video files. The personal digital media files include personal digital video files, personal digital audio files, personal text, and personal digital image files that are uploaded to the frontend system by the client computing devices. The personal digital video files are encoded across a plurality of digital video codecs.
- The backend system generates the personalized digital video files using the particular preselected digital video files and the personal digital media files. The backend system is programmed to: convert the personal digital video files from the plurality of digital video codecs to the common video codec; retrieve personalization digital media templates that designate (i) particular types of media content to be combined with the particular preselected digital video files and (ii) particular times within particular preselected digital video files at which the particular types of media content are to be combined with the particular preselected digital video files, the particular times being relative to the times within the plurality of preselected digital video files at which the emotionally impactful moments occur; assemble digital media content for the personalized digital video files using particular preselected digital video files, the digital media templates, and the personal digital media files, the personal digital media files being (i) positioned at the particular times relative to the times at which the emotionally impactful moments occur in the particular preselected digital video files, (ii) visually combined with video tracks of the particular preselected digital video files so that digital images and videos from the personal digital media files replace the video tracks at the particular times, and (iii) audibly combined with audio tracks for the particular preselected digital video files so that audio from the personal digital media files is automatically mixed with the audio tracks at the particular times, wherein the video tracks and the audio tracks for the particular preselected digital video files at the times at which the emotionally impactful moments occur remain unmodified in the assembled digital media content; encode the assembled digital media content using the common video codec to generate the personalized digital video files; and store the personalized digital video files. The digital media distribution system is configured to transmit the personalized digital video files to the client computing devices.
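- A highly simplified sketch of the backend assembly step is shown below: personal files are normalized to a common codec, matched against template entries, and positioned at times computed relative to the emotionally impactful moment, which itself is never overwritten. The placeholder `convert_codec` function and all names are assumptions; real transcoding and muxing tooling is not specified by this document.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class TemplateEntry:
    media_type: str           # "photo", "video", "text", ...
    offset_seconds: float     # relative to the emotionally impactful moment

@dataclass
class PersonalItem:
    media_type: str
    path: str
    codec: str

COMMON_CODEC = "h264"  # assumption: the platform's common codec

def convert_codec(item: PersonalItem, target: str = COMMON_CODEC) -> PersonalItem:
    """Placeholder for transcoding; a real system would invoke a media pipeline."""
    return PersonalItem(item.media_type, item.path, target)

def assemble_manifest(impact_time: float,
                      template: List[TemplateEntry],
                      personal: List[PersonalItem]) -> List[dict]:
    """Pair each template entry with a matching personal item and compute the
    absolute insertion time; the impactful moment itself is never overwritten."""
    manifest = []
    pool = [convert_codec(p) for p in personal]
    for entry in template:
        if entry.offset_seconds == 0:
            raise ValueError("template must not place content on the impactful moment")
        match = next((p for p in pool if p.media_type == entry.media_type), None)
        if match is None:
            continue
        pool.remove(match)
        manifest.append({"source": match.path, "type": match.media_type,
                         "insert_at_seconds": impact_time + entry.offset_seconds})
    return manifest

if __name__ == "__main__":
    template = [TemplateEntry("text", -20.0), TemplateEntry("video", -12.0),
                TemplateEntry("photo", +8.0)]
    personal = [PersonalItem("video", "greeting.mov", "hevc"),
                PersonalItem("photo", "us.jpg", "n/a"),
                PersonalItem("text", "Happy birthday!", "n/a")]
    print(assemble_manifest(impact_time=42.0, template=template, personal=personal))
```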
- Such an implementation can optionally include one or more of the following features. The longer digital video files can be full-length music videos containing full-length songs and the plurality of preselected digital video files can be excerpts of the full-length music videos that include the emotionally impactful moments. The longer digital video files can be full-length movies and the plurality of preselected digital video files can be excerpts of the full-length movies that include the emotionally impactful moments. The personalized digital video files can comprise mediagrams that include a personalized message centered around the emotionally impactful moments in the particular preselected digital video files and the mediagrams can be configured to be digitally sent from one client computing device to another client computing device. The personal digital media files can have variable lengths of time. Assembling the digital media content can include adding one or more portions of digital filler content so as (i) to fit the personal digital media files with variable lengths of time at the particular times according to the digital media templates and (ii) to ensure that the video tracks and the audio tracks for the particular preselected digital video files at the times at which the emotionally impactful moments occur remain unmodified in the assembled digital media content. The one or more portions of digital filler content can be loops of digital content derived from the particular preselected digital video files. The one or more portions of digital filler content can be preselected loops of digital content.
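- The filler behavior described above can be illustrated with the following sketch, which computes how many loops of filler content are needed so that a variable-length personal clip exactly fills its allotted slot without disturbing the surrounding excerpt. The numbers and names are illustrative only.

```python
import math

def plan_filler(slot_seconds: float, personal_seconds: float,
                loop_seconds: float) -> dict:
    """Decide how much looped filler is needed so a variable-length personal clip
    exactly fills its allotted slot, keeping the surrounding excerpt untouched."""
    if personal_seconds > slot_seconds:
        # Overlong clips are trimmed rather than spilling onto the impactful moment.
        return {"personal_used": slot_seconds, "loops": 0, "filler_trim": 0.0}
    gap = slot_seconds - personal_seconds
    loops = math.ceil(gap / loop_seconds) if gap > 0 else 0
    # The final loop is trimmed so the total matches the slot exactly.
    filler_trim = loops * loop_seconds - gap
    return {"personal_used": personal_seconds, "loops": loops,
            "filler_trim": round(filler_trim, 3)}

if __name__ == "__main__":
    # A 7.2 s personal video in a 15 s slot, padded with a 4 s filler loop.
    print(plan_filler(slot_seconds=15.0, personal_seconds=7.2, loop_seconds=4.0))
    # -> {'personal_used': 7.2, 'loops': 2, 'filler_trim': 0.2}
```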
- In another implementation, a computer-implemented method includes receiving digital media file content generation requests from client computing devices that include parameters identifying particular preselected digital video files to be combined with personal digital media files to generate personalized digital video files, the personal digital media files including personal digital video files, personal digital audio files, personal text, and personal digital image files, the personal digital video files being encoded across a plurality of digital video codecs, the preselected digital video files being excerpts of longer digital video files, the preselected digital video files being encoded in the common digital video codec and being stored with metadata that identifies times within the preselected digital video files at which emotionally impactful moments occur. The method further includes converting the personal digital video files from the plurality of digital video codecs to a common video codec. The method further includes retrieving personalization digital media templates that designate (i) particular types of media content to be combined with the particular preselected digital video files and (ii) particular times within particular preselected digital video files at which the particular types of media content are to be combined with the particular preselected digital video files, the particular times being relative to the times within the plurality of preselected digital video files at which the emotionally impactful moments occur. The method further includes assembling digital media content for the personalized digital video files using particular preselected digital video files, the digital media templates, and the personal digital media files, the personal digital media files being (i) positioned at the particular times relative to the times at which the emotionally impactful moments occur in the particular preselected digital video files, (ii) visually combined with video tracks of the particular preselected digital video files so that digital images and videos from the personal digital media files replace the video tracks at the particular times, and (iii) audibly combined with audio tracks for the particular preselected digital video files so that audio from the personal digital media files are automatically mixed with the audio tracks at the particular times, wherein the video tracks and the audio tracks for the particular preselected digital video files at the times at which the emotionally impactful moments occur remain unmodified in the assembled digital media content. The method further includes encoding the assembled digital media content using the common video codec to generate the personalized digital video files. The method further includes storing the personalized digital video files. The method further includes transmitting the personalized digital video files to the client computing devices.
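- For illustration, the stored metadata and personalization templates referenced above might be represented as in the following sketch, where template entries carry offsets relative to the recorded impactful-moment time. The JSON layout and field names are assumptions, not a specification of the described system.

```python
import json

# Hypothetical repository record for one preselected excerpt: the excerpt is
# stored in the common codec along with the times of its impactful moments.
EXCERPT_RECORD = {
    "excerpt_id": "chorus_excerpt_v1",
    "source": "full_length_music_video_1234",
    "codec": "h264",
    "duration_seconds": 55.0,
    "impactful_moments_seconds": [31.5],
}

# Hypothetical personalization template keyed to the same excerpt: each entry
# names a media type and a time relative to the impactful moment.
TEMPLATE_RECORD = {
    "excerpt_id": "chorus_excerpt_v1",
    "entries": [
        {"media_type": "text",  "offset_seconds": -25.0, "max_seconds": 5.0},
        {"media_type": "video", "offset_seconds": -20.0, "max_seconds": 10.0},
        {"media_type": "photo", "offset_seconds": 6.0,   "max_seconds": 6.0},
    ],
}

def absolute_insert_times(excerpt: dict, template: dict) -> list:
    """Resolve template offsets to absolute times within the excerpt."""
    moment = excerpt["impactful_moments_seconds"][0]
    return [round(moment + e["offset_seconds"], 3) for e in template["entries"]]

if __name__ == "__main__":
    print(json.dumps(absolute_insert_times(EXCERPT_RECORD, TEMPLATE_RECORD)))
    # -> [6.5, 11.5, 37.5]
```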
- Such an implementation can optionally include one or more of the following features. The longer digital video files can be full-length music videos containing full-length songs and the plurality of preselected digital video files can be excerpts of the full-length music videos that include the emotionally impactful moments. The longer digital video files can be full-length movies and the plurality of preselected digital video files can be excerpts of the full-length movies that include the emotionally impactful moments. The personalized digital video files can comprise mediagrams that include a personalized message centered around the emotionally impactful moments in the particular preselected digital video files and the mediagrams can be configured to be digitally sent from one client computing device to another client computing device. The personal digital media files can have variable lengths of time. Assembling the digital media content can include adding one or more portions of digital filler content so as (i) to fit the personal digital media files with variable lengths of time at the particular times according to the digital media templates and (ii) to ensure that the video tracks and the audio tracks for the particular preselected digital video files at the times at which the emotionally impactful moments occur remain unmodified in the assembled digital media content. The one or more portions of digital filler content can be loops of digital content derived from the particular preselected digital video files. The one or more portions of digital filler content can be preselected loops of digital content.
- In another implementation, a computer program product encoded on a non-transitory storage medium comprises non-transitory, computer readable instructions for causing one or more processors to perform operations. The operations include receiving digital media file content generation requests from client computing devices that include parameters identifying particular preselected digital video files to be combined with personal digital media files to generate personalized digital video files, the personal digital media files including personal digital video files, personal digital audio files, personal text, and personal digital image files, the personal digital video files being encoded across a plurality of digital video codecs, the preselected digital video files being excerpts of longer digital video files, the preselected digital video files being encoded in the common digital video codec and being stored with metadata that identifies times within the preselected digital video files at which emotionally impactful moments occur. The operations further include converting the personal digital video files from the plurality of digital video codecs to a common video codec. The operations further include retrieving personalization digital media templates that designate (i) particular types of media content to be combined with the particular preselected digital video files and (ii) particular times within particular preselected digital video files at which the particular types of media content are to be combined with the particular preselected digital video files, the particular times being relative to the times within the plurality of preselected digital video files at which the emotionally impactful moments occur. The operations further include assembling digital media content for the personalized digital video files using particular preselected digital video files, the digital media templates, and the personal digital media files, the personal digital media files being (i) positioned at the particular times relative to the times at which the emotionally impactful moments occur in the particular preselected digital video files, (ii) visually combined with video tracks of the particular preselected digital video files so that digital images and videos from the personal digital media files replace the video tracks at the particular times, and (iii) audibly combined with audio tracks for the particular preselected digital video files so that audio from the personal digital media files are automatically mixed with the audio tracks at the particular times, wherein the video tracks and the audio tracks for the particular preselected digital video files at the times at which the emotionally impactful moments occur remain unmodified in the assembled digital media content. The operations further include encoding the assembled digital media content using the common video codec to generate the personalized digital video files. The operations further include storing the personalized digital video files. The operations further include transmitting the personalized digital video files to the client computing devices.
- Such an implementation can optionally include one or more of the following features. The longer digital video files can be full-length music videos containing full-length songs and the plurality of preselected digital video files can be excerpts of the full-length music videos that include the emotionally impactful moments. The longer digital video files can be full-length movies and the plurality of preselected digital video files can be excerpts of the full-length movies that include the emotionally impactful moments. The personalized digital video files can comprise mediagrams that include a personalized message centered around the emotionally impactful moments in the particular preselected digital video files and the mediagrams can be configured to be digitally sent from one client computing device to another client computing device. The personal digital media files can have variable lengths of time. Assembling the digital media content can include adding one or more portions of digital filler content so as (i) to fit the personal digital media files with variable lengths of time at the particular times according to the digital media templates and (ii) to ensure that the video tracks and the audio tracks for the particular preselected digital video files at the times at which the emotionally impactful moments occur remain unmodified in the assembled digital media content. The one or more portions of digital filler content can be loops of digital content derived from the particular preselected digital video files. The one or more portions of digital filler content can be preselected loops of digital content.
- In another implementation, a system for providing a social media platform to enhance the quality of online social interactions among users, the system including: first and second client computing devices that are running social media applications for the social media platform, each of the social media applications being programmed to provide a graphical user interface (GUI) that presents digital content retrieved over the internet from the social media platform and to receive user inputs via one or more graphical input elements in the GUI, the first client computing device being associated with a first user and the second client computing device being associated with a second user; a digital profile repository storing (i) user profiles for a first user and a second user, and (ii) a relationship profile for a relationship between the first user and the second user; a relationship history database storing historical interactions among the first user and the second user on the social media platform; and a relationship concierge to facilitate meaningful social interactions among the first and second client computing devices, the relationship concierge being programmed to: retrieve the user profiles for the first user and the second user, and the relationship profile for the relationship between the first user and the second user from the digital profile repository, retrieve the historical interactions among the first user and the second user on the social media platform from the relationship history database, determine whether to provide a social interaction prompt to one or more of the first client computing device and the second client computing device based on the user profiles, the relationship profile, and the historical interactions, wherein the social interaction prompt provides a call to action for interaction within the relationship between the first user and the second user, identify, in response to determining that the social interaction prompt is to be provided, the first client computing device from among the first and second client computing devices as the recipient of the social interaction prompt, automatically transmit, without a request from either the first client computing device or the second client computing device, the social interaction prompt to the first client computing device, wherein the social interaction prompt is only presented in the GUI on the first client computing device for the first user until the first user responds to the prompt, and the social interaction prompt is not transmitted to the second client computing device or presented in the GUI on the second client computing device, receive a response to the social interaction prompt from the first client computing device, and automatically transmit, without a request from either the first client computing device or the second client computing device, the response with the social interaction prompt to the second client computing device for presentation in the GUI on the second client computing device, wherein the social interaction prompt and the response are presented in the GUI on the second client computing device based on the first user having provided the response.
- Such an implementation can optionally include one or more of the following features. The social interaction prompt can include a question that is posed in the GUI on the first client computing device to the first user. The social interaction prompt can include the first user being directed to create a mediagram for the second user, wherein the mediagram comprises a personalized digital video segment that is automatically personalized to provide an emotionally impactful message that is particularly tailored to the relationship between the first and second users. The social interaction prompt can include an interactive game to be played by the first and second users. The GUI on the first client computing device and the GUI on the second client computing device can provide a private wall that is exclusive to the relationship between the first and second users, the social interaction prompt can initially be only visible on the private wall presented by the first client computing device to the first user, and the social interaction prompt can become visible on the private wall presented by the second client computing device to the second user only after and in combination with the response to the social interaction prompt by the first user. The GUI in the second client computing device can delay the second user from replying to the response for at least a threshold period of time following the response and the social interaction prompt being presented on the private wall of the second client computing device. The GUI in the second client computing device (i) can inactivate the graphical input elements to receive a reply from the second user until after a delayed response period has elapsed, and (ii) can present timing information identifying an amount of time remaining until the delayed response period has elapsed, and the GUI in the first client computing device can also present the timing information identifying an amount of time remaining until the delayed response period has elapsed for the second user to respond. The GUI in the second client computing device (i) can activate the graphical input elements to receive a reply from the second user during a delayed response period and (ii) can present timing information identifying an amount of time remaining until the delayed response period has elapsed and the reply from the second user will be transmitted to the first client computing device, and the GUI in the first client computing device can also present the timing information identifying an amount of time remaining until the delayed response period has elapsed for the second user's reply to be transmitted to the first client computing device. The response and the social interaction prompt can be automatically deleted from the private wall after a threshold amount of time or interactions have elapsed since they appeared on the private wall.
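- The delayed response period described above can be modeled with a sketch such as the following, in which the reply input remains inactive until the period elapses and either GUI can display the remaining time. The delay value and class name are illustrative assumptions.

```python
from datetime import datetime, timedelta

class DelayedReplyGate:
    """Toy model of the delayed-response period applied to a reply box."""

    def __init__(self, response_seen_at: datetime, delay: timedelta):
        self.response_seen_at = response_seen_at
        self.delay = delay

    def reply_enabled(self, now: datetime) -> bool:
        """The reply input stays inactive until the delay period has elapsed."""
        return now - self.response_seen_at >= self.delay

    def remaining(self, now: datetime) -> timedelta:
        """Countdown that either user's GUI could display."""
        left = self.delay - (now - self.response_seen_at)
        return max(left, timedelta(0))

if __name__ == "__main__":
    seen = datetime(2023, 1, 1, 12, 0, 0)
    gate = DelayedReplyGate(seen, delay=timedelta(hours=12))
    now = datetime(2023, 1, 1, 20, 0, 0)
    print(gate.reply_enabled(now), gate.remaining(now))  # False 4:00:00
```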
- In another implementation, a computer-implemented method for providing a social media platform to enhance the quality of online social interactions among users, the computer-implemented method comprising: retrieving, from a digital profile repository storing (i) user profiles for a first user and a second user, and (ii) a relationship profile for a relationship between the first user and the second user, user profiles for the first user and the second user, and the relationship profile for the relationship between the first user and the second user, retrieving, from a relationship history database storing historical interactions among the first user and the second user on the social media platform, historical interactions among the first user and the second user on the social media platform, and facilitating meaningful social interactions among the first and second client computing devices that are running social media applications for the social media platform, each of the social media applications being programmed to provide a graphical user interface (GUI) that presents digital content retrieved over the internet from the social media platform and to receive user inputs via one or more graphical input elements in the GUI, the first client computing device being associated with the first user and the second client computing device being associated with the second user, the facilitating including: determining whether to provide a social interaction prompt to one or more of the first client computing device and the second client computing device based on the user profiles, the relationship profile, and the historical interactions, wherein the social interaction prompt provides a call to action for interaction within the relationship between the first user and the second user, identifying, in response to determining that the social interaction prompt is to be provided, the first client computing device from among the first and second client computing devices as the recipient of the social interaction prompt, automatically transmitting, without a request from either the first client computing device or the second client computing device, the social interaction prompt to the first client computing device, wherein the social interaction prompt is only presented in the GUI on the first client computing device for the first user until the first user responds to the prompt, and the social interaction prompt is not transmitted to the second client computing device or presented in the GUI on the second client computing device, receiving a response to the social interaction prompt from the first client computing device, and automatically transmitting, without a request from either the first client computing device or the second client computing device, the response with the social interaction prompt to the second client computing device for presentation in the GUI on the second client computing device, wherein the social interaction prompt and the response are presented in the GUI on the second client computing device based on the first user having provided the response.
- Such an implementation can optionally include one or more of the following features. The social interaction prompt can include a question that is posed in the GUI on the first client computing device to the first user. The social interaction prompt can include the first user being directed to create a mediagram for the second user, wherein the mediagram comprises a personalized digital video segment that is automatically personalized to provide an emotionally impactful message that is particularly tailored to the relationship between the first and second users. The social interaction prompt can include an interactive game to be played by the first and second users. The GUI on the first client computing device and the GUI on the second client computing device can provide a private wall that is exclusive to the relationship between the first and second users, the social interaction prompt can initially be only visible on the private wall presented by the first client computing device to the first user, and the social interaction prompt can become visible on the private wall presented by the second client computing device to the second user only after and in combination with the response to the social interaction prompt by the first user. The GUI in the second client computing device can delay the second user from replying to the response for at least a threshold period of time following the response and the social interaction prompt being presented on the private wall of the second client computing device. The GUI in the second client computing device (i) can inactivate the graphical input elements to receive a reply from the second user until after a delayed response period has elapsed, and (ii) can present timing information identifying an amount of time remaining until the delayed response period has elapsed, and the GUI in the first client computing device can also present the timing information identifying an amount of time remaining until the delayed response period has elapsed for the second user to respond. The GUI in the second client computing device (i) can activate the graphical input elements to receive a reply from the second user during a delayed response period and (ii) can present timing information identifying an amount of time remaining until the delayed response period has elapsed and the reply from the second user will be transmitted to the first client computing device, and the GUI in the first client computing device can also present the timing information identifying an amount of time remaining until the delayed response period has elapsed for the second user's reply to be transmitted to the first client computing device. The response and the social interaction prompt can be automatically deleted from the private wall after a threshold amount of time or interactions have elapsed since they appeared on the private wall.
- In another implementation, a non-transitory computer-readable medium for providing a social media platform to enhance the quality of online social interactions among users and storing instructions that, when executed, cause one or more processors to perform operations including: retrieving, from a digital profile repository storing (i) user profiles for a first user and a second user, and (ii) a relationship profile for a relationship between the first user and the second user, user profiles for the first user and the second user, and the relationship profile for the relationship between the first user and the second user, retrieving, from a relationship history database storing historical interactions among the first user and the second user on the social media platform, historical interactions among the first user and the second user on the social media platform, and facilitating meaningful social interactions among the first and second client computing devices that are running social media applications for the social media platform, each of the social media applications being programmed to provide a graphical user interface (GUI) that presents digital content retrieved over the internet from the social media platform and to receive user inputs via one or more graphical input elements in the GUI, the first client computing device being associated with the first user and the second client computing device being associated with the second user, the facilitating including: determining whether to provide a social interaction prompt to one or more of the first client computing device and the second client computing device based on the user profiles, the relationship profile, and the historical interactions, wherein the social interaction prompt provides a call to action for interaction within the relationship between the first user and the second user, identifying, in response to determining that the social interaction prompt is to be provided, the first client computing device from among the first and second client computing devices as the recipient of the social interaction prompt, automatically transmitting, without a request from either the first client computing device or the second client computing device, the social interaction prompt to the first client computing device, wherein the social interaction prompt is only presented in the GUI on the first client computing device for the first user until the first user responds to the prompt, and the social interaction prompt is not transmitted to the second client computing device or presented in the GUI on the second client computing device, receiving a response to the social interaction prompt from the first client computing device, and automatically transmitting, without a request from either the first client computing device or the second client computing device, the response with the social interaction prompt to the second client computing device for presentation in the GUI on the second client computing device, wherein the social interaction prompt and the response are presented in the GUI on the second client computing device based on the first user having provided the response.
- Such an implementation can optionally include one or more of the following features. The social interaction prompt can include a question that is posed in the GUI on the first client computing device to the first user.
- Particular implementations may realize none, one, or more of the following advantages. For example, media content can be personalized in ways that ensure that synchronization between audio and video portions of the underlying media content is not disrupted. When editing media content, particularly when overlaying video and/or audio tracks across different devices, video and audio can get out of sync.
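- As a hypothetical illustration of how such synchronization can be preserved, the following sketch keys every overlay to frame indices derived from the unmodified audio timeline rather than re-cutting the underlying tracks; the frame rate and names are assumptions made for illustration.

```python
def overlay_plan(audio_duration_s: float, fps: float, insertions: list) -> dict:
    """Frame-accurate overlay plan keyed to the unmodified audio timeline.

    Personal media is mapped onto frame indices derived from the audio clock,
    so inserting or trimming overlays cannot shift the underlying audio."""
    total_frames = round(audio_duration_s * fps)
    plan = {"total_frames": total_frames, "overlays": []}
    for start_s, end_s, source in insertions:
        plan["overlays"].append({
            "source": source,
            "first_frame": round(start_s * fps),
            "last_frame": min(round(end_s * fps), total_frames) - 1,
        })
    return plan

if __name__ == "__main__":
    print(overlay_plan(55.0, 30.0, [(0.0, 10.0, "intro.mov"), (41.0, 50.0, "photos")]))
```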
- In another example, media content personalization can be streamlined to provide novice users with the ability to readily create impactful personalized content. In particular, a user interface can be presented to users that narrows the field of options for personalization down to a limited number through the use of preselected media content excerpts, personalization templates, guided personalization steps, and other features to ensure emotionally impactful personalized content is created.
- In another example, social platforms can facilitate improved and more meaningful social interactions and relationships between users through a variety of features, such as private walls, relationship concierges, time delays for interactions, personalized media content distribution, time-limited social content, and/or combinations thereof. For instance, given the open nature of many social platforms (meaning that posts are viewable to a broad audience of users), social interactions can be guarded and reserved. By providing a social platform in which a primary mechanism for interacting with other users is either private walls or private group walls, user interactions can be with smaller and more intimate sets of users, which can help users drop their guard and interact more naturally/honestly. In another instance, relationship concierges can automate and assist users in building and maintaining strong relationships by prompting users with ways to interact with each other. In a further instance, personalized media content creation and distribution on a social platform can assist users in conveying emotionally impactful messages that may otherwise be difficult to express through traditional social media interactions (e.g., posts, images, text). In another instance, mandatory time delays for interactions between users can alleviate the pressure, stress, and burden that users feel to promptly respond to interactions in order to avoid expressing disinterest with a late response or no response at all. In a further instance, time-limited social content can additionally promote more natural/honest social interactions (help users drop their guard) by ensuring that social interactions on the platform will not persist in perpetuity, but instead will be inaccessible to both users after a period of time or after a series of interactions. The platform can also encourage both senders and recipients to be reflective by imposing a delay before a message is sent or received. The time delays introduce a component of "scarcity," which encourages reflection, anticipation, and attention to detail, fostering better relationships.
- The details of one or more implementations of the subject matter described in this specification are set forth in the accompanying drawings and the description below. Other features, aspects, and advantages of the subject matter will become apparent from the description, the drawings, and the claims.
-
FIG. 1 is a block diagram of an example system for generating personalized media content. -
FIG. 2 is a block diagram showing an example technique for creating and delivering a personalized mediagram to a recipient. -
FIG. 3 is a block diagram of an example system for generating and consuming mediagrams. -
FIGS. 4A-F are screenshots that collectively show an example sequence of steps for creating and distributing a mediagram. -
FIGS. 5A-M are block diagrams showing example assemblies of mediagrams. -
FIG. 6 is a conceptual diagram of an example system for generating personalized media content. -
FIGS. 7A-B are a flowchart of an example technique for generating personalized videos. -
FIG. 8A is a conceptual diagram of an example social media platform for providing improved and more meaningful social interactions among users. -
FIG. 8B is a conceptual diagram of another example social media platform for providing improved and more meaningful social interactions among users. -
FIG. 9A is an example system for providing an improved social media platform with more meaningful social interactions among users. -
FIG. 9B is a diagram of an example system for providing an improved social media platform with more meaningful social interactions among users. -
FIG. 9C depicts an example system for providing an improved social media platform with more meaningful social interactions among users. -
FIG. 10 is a flow chart with user interfaces for establishing an initial connection between users on a social media platform. -
FIGS. 11A-B are screenshots of example user interfaces on an example mobile computing device for interacting with other users via private walls on a social platform. -
FIGS. 11C-F present example specific user interface features that can be selected for presentation to users. -
FIGS. 12A-H are screenshots of an example process flow for a relationship concierge facilitating and improving social interactions among users via private walls on a social platform. -
FIGS. 13A-C are screenshots of an example user interface on a mobile computing device for viewing a user's friends and the corresponding interaction delays until another relationship concierge prompt is expected. -
FIG. 14A is a conceptual diagram of an example personal concierge system and algorithm for facilitating and improving user relationships on a social network. -
FIG. 14B is a diagram of an example system to vary content that is selected for presentation to users. -
FIG. 14C is a screenshot of an example “one-click” feedback interface in which content is presented with selectable graphical elements that the user can select with a single click/selection action to provide feedback related to the content. -
FIGS. 15A-D are screenshots of a relationship concierge being applied to other social platforms providing predominantly open communication among broad groups of users. -
FIG. 16 is a diagram depicting creation and use of a private group wall on a social platform to improve and enhance meaningful social interactions. -
FIGS. 17A-H are screenshots of an example user interface on a computing device for users to express and interact with others regarding their emotional well-being. -
FIGS. 18A-B are flowcharts of example techniques for determining and transmitting prompts to specific relationship private walls on a social platform. -
FIG. 19 is a flowchart of an example technique for determining and transmitting delays between interactions on a social platform. -
FIG. 20 is a flowchart of an example technique for determining relationship ratings on a social platform. -
FIGS. 21A-B are flowcharts of example techniques for creating and using private group walls on a social platform. -
FIG. 22 is a block diagram of example computing devices. - Like reference numbers and designations in the various drawings indicate like elements.
-
FIG. 1 is a block diagram of an example system 100 for generating personalized media content, such as mediagrams. A mediagram can be personalized media content that is configured in a particular manner to convey an emotionally impactful message between users. Mediagrams can include, for example, underlying media content that is combined with other, personal media content to provide personalization to the underlying media content. Media content can include, for instance, music and/or video excerpts, movie excerpts, music files, images, television clips (e.g., SNL skits), viral videos (e.g., home videos that have been popularized), concert videos, and/or other types of media. For example, a mediagram can be an excerpt from a music video that is personalized with text, images, audio, and video. - A mediagram that is produced by the
system 100 can be a ready-to-play presentation of media that is prepared by a sender 102 and sent to at least one recipient 104. The mediagram can include, for example, music files and other media that are combined into the mediagram in a way that is personalized by the sender 102. Personalization can include adding personalized messages and/or other elements to the media, including text, audio, images, video, etc., which can overlay and/or adjoin segments of the media. For example, personalization can be a caption that precedes or accompanies an image, a video, or some other media segment. - As an example, the
system 100 can provide, for presentation to thesender 102, different media segments that can be selected by thesender 102 for personalization. For example, thesender 102 can select from among media content 106 a-106 d (e.g., music videos, music, movies, videos). The media content 106 a-106 b may be presented to thesender 102, for example, upon execution of a search query (e.g., to find songs for specific artists, titles, subjects, genres, etc.). Selection of media content can be made, for example, from the sender's library of downloaded and/or owned media content, generated from a subscribed list of available songs, and/or in some other way. Presentation of the media content 106 a-106 d, as well as other aspects of a user interface for creating mediagrams, can be presented on auser device 107 of thesender 102. - In some implementations, media content and/or excerpts of media content (e.g., portions of a song, portions of a movie, portions of a music video) can be pre-selected to provide a chill moment, which can be a point in a song or other media that has been shown to provide a chill to a viewer of the media (e.g., moment that provides tingles, a chill running down one's spine, a significant emotional and/or physical response, or some other reaction by the recipient 104). Example chill moments in songs include a particular note or passage in a song, particular lyrics, and/or other features that are otherwise emotionally impactful upon users. Example chill moments in a movie or video can include a chase scene or an important scene, such as a celebratory moment, the death of a character, or some other significantly impactful segment.
- In some implementations, media content with chill moments can be identified in a catalog of chill moments, which may be stored in (and available from) a proprietary catalog. Each chill moment, identified for a particular song or movie, for example, can identify a point in the particular song or movie that produces a “chill” reaction (in an audience of the media) that results in a sensation that is similar to feeling cold, getting goosebumps, having one's hair stand on end, or some other physiological reaction. Such reactions can include, for example, an increased heart rate, an increased respiration rate, in increased skin conductance (or “galvanic skin response”), or some other physiological response. Sources for chills can include, for example, music (e.g., most potent), visual arts, speeches (e.g., notable speechmakers), beauty (or other breath-taking appearance), or physical contact. The intensity and/or effect of chills can be affected by factors such as mode, volume, tempo, mood, tone, and pitch, or other factors that may help to convey or amplify emotion. Generally, it can be assumed that chill moments can only exist for music or video that have already been experienced before, such as by at least one user.
- In the depicted example, the
sender 102 select themedia content 106 c from among the media content 106 a-106 d. While the selectedmedia content 106 c can refer to the entire media content (e.g., the entire song, the entire movie, the entire music video), thesender 102 can select from amongmedia content excerpts 106 c′-106 c′″ (segments of the media content 106), each of which may include the chill moment that thesender 102 wishes to share with therecipient 104. Theexcerpts 106 c′-106 c′″ can be pre-designated for themedia content 106 c (e.g., manually designated, crowd sourced) and/or automatically identified (e.g., waveform analysis). Each of the other media content 106 a-b and 106 d can additionally include one or more excerpts that are proposed to thesender 102 for selection. - In the depicted example, the
sender 102 selects theexcerpt 106 c′ for personalization. However, at this point themedia content excerpt 106 c′ has yet to be personalized (e.g., just a segment of a music video without personalization). Thesender 102 can be prompted to identify personal media content to add to the selectedexcerpt 106 c′, such as overlaying a portion of theexcerpt 106 c′ (e.g., photo overlaying a portion of a music video, text overlaying a portion of a movie), being presented adjacent to the selectedexcerpt 106 c′ (e.g., video that is played before or after theexcerpt 106 c′), and/or other combinations with theexcerpt 106 c′. Thesender 102 can be guided through selection of media content for theexcerpt 106 c′ by the sender'sdevice 107, which can be programmed, for example, to use one or more personalization templates to assist thesender 102 in the selection of personalized media content to provide maximize the emotional impact of the personalized media content. For example, thesender 102 can be prompted to enter a textual message for therecipient 104, then to provide up to 10 seconds of a personalized video message to therecipient 104, and then to provide up to 3 photos that include both thesender 102 and therecipient 104. In the depicted example, thesender 102 selects the example personal media content 108 (e.g., photos, videos, audio, text) to be used to personalize theexcerpt 106 c′ for the mediagram. - A server system 112 (e.g., cloud computer system) can receive the selection of the
media content excerpt 106 c′ along with thepersonal media content 108 and can generate a mediagram to be delivered to therecipient 104. Such generation can include, for example, referencing one or more personalization templates to determine how to combine theexcerpt 106 c′ with the personal media content 108 (as well as referencing particular instructions/designations for the mediagram made by the sender 102). The generation can also be completed using digital rights management code, which can be encoded into the resulting mediagram to manage aspects of copyrights and payment of royalties and/or other fees. The mediagram can be output, for example, in the form of deliverable media 110 (e.g., video file, audio file) that results from audio, video, images, text, and/or other media content items being assembled by aserver 112. For example, thedeliverable media 110 can be a single video file and that is transmitted to acomputing device 114 for therecipient 104. - In some implementations, the component parts of the mediagram, including the associated media segments, can be sent individually (e.g., not in the form of deliverable media 110) and assembled by the
computing device 114 for presentation to therecipient 104. - The
deliverable media 110 can be provided to thedevice 114 using one or more features to protect against piracy. For example, thedeliverable media 110 can be provided in a “lock box” to protect media and avoid piracy. The lock box may include a feature that prevents consumption of the mediagram unless therecipient 104 provides credentials or some other form of authentication. In another example, the mediagram can include features that control the number of times and/or a timeframe over which the mediagram is presented, such as a single time or a limited number of times, or an expiration time that limits presentation of the mediagram to a time-limited viewing. In another example, the mediagram deliverable 110 can include digital rights management (DRM) features and/or other techniques for copyright protection of digital media, including restricting copying of media and preventing unauthorized redistribution. In another example, thedevice 114 of therecipient 104 may be required to install/run/load a specialized/authorized media player (or an application providing similar functionality) to view content of the mediagram. - The mediagram deliverable 110 can be distributed to the
device 114 in any of a variety of ways, such through an account that therecipient 104 may have on the server system 112 (e.g., push notification provided to mobile app on thedevice 114 that is hosted by the server system 112), by transmitting a link to the deliverable 110 (e.g., sending an email including a uniform resource locator (URL) for the deliverable 110, sending a text message including the URL for the deliverable). Other ways of providing notification to therecipient 104 that the deliverable 110 is available and ready for him/her to access it are also possible. -
FIG. 2 is a block diagram 200 showing anexample technique 202 for creating and delivering apersonalized mediagram 204 to a recipient. In the depicted example, themediagram 204 is created using a music video as the underlying media content that is being personalized. - To create the
mediagram 204, a user can starts by selecting a song that will be personalized (206). For example, referring toFIG. 1 , thesender 102 selects a song from pre-made categories or uses a search feature to identify songs by artist, song title, or occasion/subject (e.g., Christmas songs, love songs, etc.), or in some other way. The user can then create a personal message through personal media content that the user selects (208). For example, referring toFIG. 1 , thesender 102 can designate text, audio, photos, videos, and/or other personal media content to be added and/or otherwise combined with the selected song to generate themediagram 204. As described above with regard to FIG.1, the user can be guided through the selection and designation of the personal media content for the mediagram, such as through the use of personalization templates that can identify specific types of media content that should be added to particular locations of the song excerpt to maximize the emotional impact of the mediagram. Additionally and/or alternatively, for senders and receiver pairs who have relationships modeled by a relationship concierge (e.g., users of the social platforms described below with regard toFIGS. 8-21 that use relationship concierges to facilitate and improve social interactions), the relationship concierge can be used to identify and select personal media content for the mediagram. The relationship concierge can be used alone and/or in combination with other features guiding personal media content selection for the mediagram, such as the personalization temples, with the systems, techniques, and devices described throughout this document. - For instance, the
example mediagram 204 can include personalization that is added to anoriginal music video 222 for the song selected by the user (206). Themediagram 204 may be for theentire music video 222 or just a portion of the music video 22, such as an excerpt of themusic video 222 that has a chill moment in the song as its focal point. Themediagram 204 includes audio and video tracks, one or both of which can be personalized by the user at various points in themediagram 204. In this example, the original audio track 220 (not personalized) from the music video run the entire length of themediagram 204, but the original video track (not personalized) for themusic video 222 runs for only themiddle portion 216 of themediagram 204. The video tracks for the beginning 214 and end 218 of themediagram 204 in this example are personalized with personal media content designated by the user. For example, the video track for the beginningportion 214 of themediagram 204 includes a written message and avideo 214 a, and the video track for theend portion 218 of themediagram 204 includesphotos 218. Although the original audio 220 for themusic video 222 runs the entire length of themediagram 204, it is combined (blended) withpersonalized audio 214 b that corresponds to thepersonalized video 214 a at the beginning of themediagram 204. The user can be guided through the process of selecting personal media content (214 a-b, 218) for personalizing themusic video 222, such as with a personalization template to assist the user in identifying the best type of media content to select to make themediagram 204 emotionally impactful upon the recipient. - Although not made explicit in the
technique 202, once the personal message (214 a-b, 218) has been obtained from the user, the mediagram 204 can be automatically assembled so as to generate a high-quality mediagram deliverable that combines the personal media content with the original music video 222. These steps can be performed automatically by a computing device (e.g., client computing device, server system) without the user having to designate how the original or personal media content should be assembled, let alone go through the process of laying out audio and video tracks for the mediagram 204. Additionally, the personal media content (214 a-b, 218) can be automatically positioned at or around the chill moment in the music video 222 so that the mediagram 204 will be emotionally impactful for the recipient with regard to the sender and the relationship between the sender and recipient. - The assembly of the
mediagram 204 with original and personal media content is one example. Other configurations and arrangements of personal media content with regard to original media content are also possible. - Once the
mediagram 204 has been created, it can be sent to the recipient (210), such as by specifying the recipient's contact information (phone number, email, social network identifier, etc.). Once specified, the mediagram 204 can be delivered either directly (e.g., file transmission) or indirectly (e.g., link transmission, notification) to the recipient along one or more communication channels (212), such as in-app communications, email, or text message. Depending on the delivery method, the recipient can be prompted to send a response (e.g., via a social platform), download an application (to render the mediagram), or subscribe to a mediagram service, some or all of which may be free or have a cost to the recipient and/or sender. -
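As a minimal illustration of the delivery step (212), the following Python sketch chooses between direct file transmission and indirect link/notification delivery based on the selected channel; the channel names, URL layout, and prompt wording are assumptions made for the example rather than details taken from the figures.

from dataclasses import dataclass

DIRECT_CHANNELS = {"in_app"}                      # recipient receives the rendered file itself
INDIRECT_CHANNELS = {"email", "text", "social"}   # recipient receives a link or notification

@dataclass
class DeliveryRequest:
    recipient_contact: str   # phone number, email address, social network identifier, etc.
    channel: str
    mediagram_id: str

def plan_delivery(request: DeliveryRequest, base_url: str) -> dict:
    # Direct delivery: transmit the mediagram file.
    if request.channel in DIRECT_CHANNELS:
        return {"mode": "direct", "to": request.recipient_contact,
                "payload": f"{request.mediagram_id}.mp4"}
    # Indirect delivery: transmit a link, with a prompt to respond,
    # download an application, or subscribe.
    if request.channel in INDIRECT_CHANNELS:
        return {"mode": "indirect", "to": request.recipient_contact,
                "payload": f"{base_url}/{request.mediagram_id}",
                "prompt": "respond, download the app, or subscribe"}
    raise ValueError(f"unsupported channel: {request.channel}")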
FIG. 3 is a block diagram of an example system 300 for generating and consuming mediagrams. The system 300 can host a mediagram creation and delivery service that can provide services to users, such as the generation, storage, and distribution of mediagrams that are created by the users. The system 300 can use license agreements 302 that are held with licensors 304, such as music, movie, and other media content copyright owners. When used for generating and consuming mediagrams, for example, the system 300 can refer as needed to the license agreements 302 in order to comply with copyright and other restrictions and laws associated with the owners who license content to be used in mediagrams. - License agreements with copyright holders can dictate what media content is provided by the
mediagram server 308 for users to incorporate into their mediagrams. A media management system 306 can be dedicated to ensuring that all of the original media content in a library used by the mediagram server 308 is currently licensed with the licensors A-C (304 a-c). The media management system 306 can maintain the license agreements 302 and a library of licensed original media content that has been downloaded from the licensors 304 a-c. The media management system 306 can additionally generate licensed media content excerpts that include chill moments and provide them to the mediagram server 308, which can store them in a repository 312 a of licensed media content. Over time, as some media content falls out of license, the media management system 306 can purge unlicensed media content from the media file storage 312 a that is used by the mediagram server 308. - A
mediagram server 308 can store and manage mediagrams generated byusers 310, who may subscribe to or otherwise be enrolled with the mediagram service. Themediagram server 308 can provide a server system through which users can create and distribute personalized mediagrams using preselected and licensed content from themedia management system 306. For example, themediagram server 308 can store excerpts of songs, videos and other content obtained from themedia management system 306. Using themediagram server 308, for example,users 310 can request and/or access new music and other content from themedia management system 306, and use the content to create mediagrams. Theusers 310 can also stream and/or distribute their mediagrams at any time. Themediagram server 308 can include tools, including templates, that allow a user who is not proficient in media editing to easily create mediagrams - The process of generating a mediagram can include identifying chill moments, “hooks,” and/or other features in the media content (e.g., punchline of a joke) that elicit one or more target emotional responses in a user, and then generating excerpts that include the chill moments, hooks, or other features eliciting emotional responses in users. A hook is a musical idea, often a short riff, passage, or phrase, that is used in popular music to make a song appealing and to “catch the ear of the listener.” The term “hook” generally applies to popular music, especially rock, R&B, hip hop, dance, and pop. In some implementations, chill moment identification can be done automatically using an algorithm based on a variety of different factors, such as changes in tempo, mode, volume, mood, tone, pitch, etc. Chill moment identification can also be done using crowd-sourced identification of popular moments in songs, such as using information from previous user selections of (or identification of) favorite song parts. Identification can be done manually by trained professionals. Hook identification and other emotion-eliciting feature identification can be performed in similar ways and can additionally and/or alternatively be used to generate excerpts in the systems, techniques, and devices described throughout this document.
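As one hedged sketch of the algorithmic identification described above, the Python function below scans a per-frame loudness envelope (which could be produced by any audio analysis library) for sharp soft-to-loud contrasts, one of the factor changes associated with chill moments; the window size and threshold are illustrative assumptions, and tempo, mode, or pitch changes could be scored in the same way and combined with crowd-sourced or manual identification.

import numpy as np

def chill_moment_candidates(loudness_db, hop_seconds, window=8, threshold_db=6.0):
    """Return timestamps (seconds) where the average loudness jumps sharply."""
    candidates = []
    i = window
    while i < len(loudness_db) - window:
        before = float(loudness_db[i - window:i].mean())
        after = float(loudness_db[i:i + window].mean())
        if after - before >= threshold_db:        # soft-to-loud contrast
            candidates.append(round(i * hop_seconds, 2))
            i += window                           # one contrast yields one candidate
        else:
            i += 1
    return candidates

# Example: an envelope sampled every 0.5 s with a quiet verse and a loud chorus;
# the function flags the transition into the loud section (around 19-20 s).
envelope = np.concatenate([np.full(40, -30.0), np.full(40, -18.0)])
print(chill_moment_candidates(envelope, hop_seconds=0.5, threshold_db=8.0))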
- As shown in an expanded view of
media management system 306, storage containers 312 can store media and associated files for use in generating mediagrams. For example, media file storage 312 a can store the actual media used in the mediagrams (e.g., media content excerpts with chill moments), including media that is obtained from the media management system 306. Custom user content storage 312 b can store personalizations that users have added to their mediagrams and/or the finalized mediagrams themselves. -
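A minimal sketch of the license-driven purge described above (removing content from the media file storage 312 a once its agreement lapses) could look like the following Python; the dictionary-based storage and the expiration map are stand-ins for whatever stores the real system uses.

from datetime import datetime, timezone

def purge_expired_media(media_file_storage, license_expirations):
    """Remove excerpts whose license has lapsed and report which ids were purged."""
    now = datetime.now(timezone.utc)
    purged = []
    for media_id, expires_at in list(license_expirations.items()):
        if expires_at <= now:
            media_file_storage.pop(media_id, None)   # drop the now-unlicensed excerpt
            del license_expirations[media_id]
            purged.append(media_id)
    return purged

Run periodically (e.g., by the media management system 306), this keeps only currently licensed excerpts available for new mediagrams.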
Template storage 312 c can store templates that can be used by users creating mediagrams, which can simplify the process of creating mediagrams and allow users having little or no experience in combining media to nonetheless generate mediagrams. Templates can identify and/or suggest (to the user) specific types of personalization to insert at various points in media, and can offer suggestions on what to insert for different media content categories. Templates can be specific to media files, meaning that each media file (e.g., song, video, etc.) can have one or more templates that help coordinate a user-identified personalization to the specific chill moment in the media so that the chill moment is the most impactful/powerful. Additional templates can exist for different categories, such as romance, celebration, birthday, or other categories. -
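The template idea can be pictured as a small data model. The Python sketch below is an assumption about how such records might be organized (slot types, prompts, and offsets positioned relative to the excerpt and its chill moment), not a description of the actual contents of template storage 312 c.

from dataclasses import dataclass, field

@dataclass
class TemplateSlot:
    media_type: str        # "text", "photo", "video", or "audio"
    prompt: str            # guidance shown to the user
    offset_seconds: float  # position relative to the start of the excerpt

@dataclass
class PersonalizationTemplate:
    media_id: str                # the song/video excerpt this template applies to
    category: str                # e.g., "romance", "celebration", "birthday"
    chill_moment_seconds: float  # where the chill moment falls within the excerpt
    slots: list = field(default_factory=list)

# Illustrative template: a written greeting at the start of the excerpt and a
# photo placed just ahead of the chill moment.
romance_template = PersonalizationTemplate(
    media_id="excerpt-123",
    category="romance",
    chill_moment_seconds=21.0,
    slots=[
        TemplateSlot("text", "Type a hello message for the recipient", 0.0),
        TemplateSlot("photo", "Add your most meaningful photo together", 18.0),
    ],
)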
Filler media storage 312 d can include snippets or other lengths of audio/visual media that can be inserted during some or all of the personalization to accommodate variable-length personalized content. For example, the media content excerpts that are used to generate mediagrams can be centered around chill moments in the media content excerpts, which can be designed to occur in a middle to middle-end portion of the excerpt. Accordingly, changing the length of a mediagram to accommodate variation in the length of the personal content to be added to the excerpt can throw off the positioning of the chill moment in the mediagram. Fillers can be added to the beginning and/or the end of the excerpt to accommodate variation in the length of the mediagram due to variable personal content without disrupting the positioning of the chill moment within the mediagram. Filler content can be looped portions (e.g., one or two bars) of instrumentals in musical content, musical content with lyrics, or a few seconds of video content. Filler content can include copyright-free content that matches up well with the media content. - The
mediagram server 308 can include web servers, various data stores, and services for mediagram instances 314, which can be extensible and scalable (e.g., a cloud computing system). For example, web servers 315 can provide access to external entities that access mediagrams. A media metadata data store 314 a can include metadata associated with media stored in the media file storage 312 a. A user account database 314 b can identify users to the mediagram system and the users' account information. A mediagram detail database 314 c can include definitions of mediagrams that have been generated by users. - The
mediagram server 308 can include at least one cloud-based server 316, such as one implemented or provided using Amazon Elastic Load Balancers (ELB), that distributes mediagrams (or provides access) to user computing devices 324. For example, user computing devices 324 that are mobile devices can be used by recipients 104 to receive distributed mediagrams, such as through one or more applications that reside on a mobile device. Desktop implementations of the user computing devices 324, for example, can access mediagrams through a front end 318, implemented using Amazon Web Services (AWS). User computing devices 324 of users 310 can be used by both the sender and/or a recipient of a mediagram. -
FIGS. 4A-F are screenshots that collectively show an example sequence of steps for creating and distributing a mediagram. For example, FIGS. 4A-E show screenshots captured during use of a mobile application for creating a mediagram, which can also be done through a web application in a web browser, and FIG. 4F shows an example mediagram presented in a social network. -
FIG. 4A shows an initial display of an example interface 400 used to identify a media excerpt to use for a mediagram. For example, using a search control 402, a user can search by song, album, artist, or in other ways. Further, controls 404 can be used to browse songs of various categories, such as a romance category 406 that includes songs related to romance. The songs (or other media) that are searchable in the interface 400 can include songs (or other media) stored in the media management system 306. In the depicted example, the user selects the romance category 406. -
FIG. 4B shows a display of the example interface 400 that presents media excerpts 410 that are in the romance category 406. Each of the media excerpts 410 can include a chill moment that it is intended to leverage to be emotionally impactful as part of a mediagram. In the depicted example, the media excerpt 408 for a Taylor Swift song is selected from a list 410. Similar or different types of lists can be presented when the search control 402 is used. -
FIG. 4C shows a display of the example interface 400 in which the user can preview the song. For example, using a preview control 411, the user can preview the media excerpt 408. If the user then decides to use the media excerpt 408 (e.g., Taylor Swift song) in a mediagram, then a create control 412 can be used to initially populate a new mediagram with the selected song. -
FIG. 4D shows a display of the example interface 400 in which the user can personalize the mediagram. For example, using various controls 414, the user can add text, photos, videos, and/or other types of personal media to the mediagram. Selecting a particular one of the controls 414, for example, can result in the user being guided through selection, using a template that is specific to the media excerpt 408 (e.g., Taylor Swift song) and/or the romance category 406. As an example, the template may suggest that the user obtain a photo of the user with the recipient, then include a ten-second personalized message or text expressing the user's feelings. The system can then automatically insert the photo and personalized message in the right locations relative to the media excerpt 408, generating a personalized mediagram without requiring the user to have media editing knowledge or skills. For example, there is no need for the user to figure out where the photo and personalized message should go (e.g., relative to the chill moment), how to edit video/audio, or how to perform other tasks. As such, automatically inserting the photo and personalized message can create a professional-looking video compilation, with the complicated details of video editing being handled automatically for the user. Additional controls in the interface 400 can allow the user to preview and view the mediagram once completed. -
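The automatic insertion described here amounts to filling template slots with whatever the user supplied. The sketch below assumes simple slot records (a media type plus an offset chosen by the template relative to the excerpt), along the lines of the template sketch above, and returns a time-ordered edit list in which all timing decisions come from the template rather than the user.

from collections import namedtuple

Slot = namedtuple("Slot", "media_type offset_seconds")

def place_personal_media(slots, user_media):
    """Drop user-provided items into template slots and order them by time."""
    edit_list = []
    for slot in slots:
        item = user_media.get(slot.media_type)
        if item is None:
            continue                          # empty slot; filler media could be used instead
        edit_list.append({
            "start": slot.offset_seconds,     # timing comes from the template, not the user
            "media_type": slot.media_type,
            "source": item,
        })
    return sorted(edit_list, key=lambda entry: entry["start"])

# Greeting text at the start, photo just before the (assumed) chill moment.
print(place_personal_media([Slot("text", 0.0), Slot("photo", 18.0)],
                           {"text": "Happy anniversary!", "photo": "us.jpg"}))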
FIG. 4E shows a display of the example interface 400 in which the user is distributing the mediagram. In this example, the user is sending the mediagram by email, but other distribution channels, including sending via social media, are available through controls 414. The user can select the recipients of the user's mediagram from a contact list 416. Contacts in the contact list 416 can be annotated, such as to differentiate between contacts who have mediagram accounts (e.g., designated with mobile phone/app icons 420) and other contacts who do not have mediagram accounts (e.g., designated by a grayed-out user icon). If a particular recipient has a mediagram account, then the mediagram can be delivered via their account. Otherwise, if a particular recipient does not have a mediagram account, then the mediagram can be delivered via available contact options (e.g., email, text, social network, etc.). - In some implementations, the user can optionally elect to send a gift with the mediagram. For example, selection of a respective control from
controls 422 can lead to additional user interface elements that allow the user to designate a monetary amount for, or a selection of, a gift that is included (e.g., using a link or an attachment) with the mediagram. Gifts can also be integrated into the mediagram, such as with a link that can navigate the recipient to a web page or other resource from which the gift can be redeemed. The user is also given the ability to download and/or purchase the song or video for their personal use. -
FIG. 4F shows a display of the example mediagram 424 displayed on a mobile device of a recipient. For example, the mobile device can present a mediagram entry 426 on a social network page of a User B for the mediagram 424 created by a User A, as indicated by a mediagram social network header entry 428. The mediagram entry 426 can be generated, for example, if the user selected a social network control from the controls 414 in order to share the mediagram with one or more recipients who are friends of the user in the social network. A mediagram song title 430 can identify the media excerpt 408 (e.g., Taylor Swift song) selected by the user. A mediagram description 432 can include, for example, a number of underlined portions that are links to the content, which can help drive cross-user promotion based on the mediagram content. The underlined portions can include, for example, an artist link 432 a, a song link 432 b for the media excerpt 408, an album link 432 c, a gift link 432 d (e.g., if a gift was selected using controls 422), and an app link 432 e by which a mediagram application can be downloaded. -
FIGS. 5A-M are block diagrams showing example assemblies of mediagrams. The block diagrams depicted in FIGS. 5A-M present solutions to various technical problems for mediagram creation. First, a variable amount of user-supplied personalized content (photos, video, text) changes the length of the audio portion of the licensed music video that plays prior to and after the primary licensed video clip, which can present a problem for presenting a chill moment at the right time within a mediagram. Audio pre-rolls can be used to account for and solve this. The goal is to provide an audio overlay for the user-supplied personalized content that transitions seamlessly into and out of the licensed video clip. Several of the block diagrams depicted in FIGS. 5A-M use pre-rolls to solve for variable personalized content. - Second, video and audio tracks for original content can be licensed together or separately, depending on contractual agreements with various licensors, which can create technical hurdles in generating a mediagram file that is compliant with licensing agreements. For example, some agreements may permit the audio track from a video (e.g., movie, music video) to be licensed separately (and at a lower price point) than the price for licensing the audio and video together. However, some agreements may not permit such bifurcation of audio and video licensing rights. Additionally, some agreements may grant licenses to master audio loops from a song that could be used for pre-roll fillers, but some agreements may not. Some agreements may also grant licenses to lead in or out of the licensed media content with other content (e.g., third party content, in-house generated content), but others may not. The block diagrams depicted in
FIGS. 5A-M provide a variety of approaches and file formats for generating mediagrams to accommodate and comply with a wide variety of licensing restrictions imposed by agreements with licensors. - Third, audio and video track synchronization, particularly when they are not licensed together throughout the entirety of the mediagram, can be problematic. To solve for this, several of the block diagrams depicted in
FIGS. 5A-M insert blank video on the audio-only licensed portions so that a single video file can be generated and used for personalization. For example, if an excerpt for a music video includes a first portion that is licensed for audio only—meaning that the mediagram system has an audio file for the first portion—and a second portion that is licensed audio and video—meaning that the system has a video file for the second portion—there may be potential issues with synchronization if the first and second portions are adjoined to each other with various personalized user content. To solve for this, a blank video track can be combined with the audio file for the first portion of the excerpt to generate a video file for the first portion of the excerpt. Then, the video file for the first portion and the video file for the second portion can be assembled together to ensure proper synchronization between the first and second portions. After generating this singular video file, the personalization can be added and combined to generate the personalized mediagram. - For example, in
FIG. 5A , the mediagram includes a licensed video andaudio excerpt 504 that is combined withpersonalization sections FIG. 5B , a licensed video andaudio excerpt 512 is combined withpersonalization sections excerpt 512 and the audio 510 can be from the same movie). - In
FIG. 5C , licensedaudio 518 is combined with licensed visuals, which are then combined with user-designated audio clips used in thepersonalization sections licensed audio 518. The video excerpts can consist of label approved visuals 518 (e.g., album art, concert photos/pictures, concert video, photo shoots, etc.). Label approvedvisuals 518 can be used, for example, when there is not an available music video, or when label approved visuals are more appropriate than the music video. - In
FIG. 5D, personalization section 520 consists of a complete audio section, and personalization section 522 consists of looping segments of a song, e.g., associated with the licensed video and audio excerpt 524. - In
FIG. 5E, there is a solid piece of licensed or user-generated audio 526 over personalization 528. In this example, there is no video excerpt, and voice audio can take precedence over the audio file. - In
FIG. 5F, a licensed video and audio excerpt 530 is preceded by personalizations, each associated with a preset amount of time 536 and each generated using a template filter 538 that can automatically fit personalization content around a video excerpt. For example, the timing of the personalized pictures and text can be tied to a set phrase from the song, but depending on the length of the phrase, this time can vary. For instance, if the phrase is three seconds long, the app will use three-second increments to determine the length of each personalized option. This can seamlessly lead into the excerpt, and can create a standard for the timing of the personalization sections. For instance, a picture can be three seconds and a text box can be three seconds long, which can require a video to fit into a multiple of three seconds. For personalized videos, the app can put the "extra" seconds at the beginning of the video with a template filler, which can solve a technological issue with fitting the personalization section. The loops can be either a continuous video loop or singular audio loops. For single audio loops, they can be programmatically assembled with personalized media content on the fly. For the video loop, the audio can be, for example, a looping phrase of the song and the visual can be a black screen. The loops can be pre-combined into a single file at the beginning and end of the excerpt, with a black screen over which personalization can be added. The program can be designed to start at the beginning of a loop, with a maximum number of permitted loops for a mediagram. Technological aspects of such a configuration include, at least, a portion of the audio from the licensed music video being chosen for use as a looping audio clip. This clip can be pre-chosen and stored as a supplementary file to the licensed music video. The audio clip can be played over the user-supplied content in a loop and then stopped during the playback of the music video, for example. -
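The three-second increments and loop counting described in this paragraph can be sketched as follows; the phrase length, the item types, and the rule of covering leftover seconds with template filler ahead of a personal video come from the description above, while the function and field names are assumptions.

import math

def plan_personalization_timing(items, phrase_seconds=3.0):
    """Fit personalization items to whole repetitions of a looping audio phrase."""
    total = 0.0
    filler_before_video = 0.0
    for item in items:
        if item["type"] in ("picture", "text"):
            total += phrase_seconds                      # pictures and text take one phrase each
        elif item["type"] == "video":
            loops = math.ceil(item["duration"] / phrase_seconds)
            filler_before_video += loops * phrase_seconds - item["duration"]
            total += loops * phrase_seconds              # video rounded up to whole phrases
    return {
        "total_seconds": total,
        "loop_count": int(total / phrase_seconds),
        "filler_before_video_seconds": round(filler_before_video, 3),
    }

# One picture plus a 7.5 s video -> 3 s + 9 s = 12 s (four loops), with 1.5 s
# of template filler placed ahead of the video.
print(plan_personalization_timing([{"type": "picture"}, {"type": "video", "duration": 7.5}]))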
FIG. 5G shows an example mediagram in which the components of the media content being personalized in FIG. 5F (separate loops and video excerpt with audio) are combined into a single, seamless file 540. Creating a seamless file (instead of using the separate component files (loops, video excerpt, video audio) that will be used for personalization), for example, can resolve technological issues associated with presenting individual files, such as audio and video files getting out of sync, and can create a seamless transition that improves the user experience. -
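One way to build such a single seamless file, and to apply the blank-video treatment to audio-only licensed portions described earlier, is sketched below using ffmpeg invoked from Python. ffmpeg is assumed to be installed; the resolution, frame rate, and codec choices are illustrative, and the copy-based concatenation assumes both parts end up with matching codec parameters (otherwise the final step would re-encode).

import pathlib
import subprocess
import tempfile

def build_seamless_file(audio_only_part, licensed_video_part, output):
    """Pair an audio-only portion with black video, then join it with the licensed video portion."""
    workdir = pathlib.Path(tempfile.mkdtemp())
    first = workdir / "part1.mp4"

    # 1) Lay a blank (black) video track under the audio-only portion.
    subprocess.run([
        "ffmpeg", "-y",
        "-f", "lavfi", "-i", "color=c=black:s=1280x720:r=30",
        "-i", audio_only_part,
        "-shortest", "-c:v", "libx264", "-c:a", "aac",
        str(first),
    ], check=True)

    # 2) Concatenate the generated part and the licensed video part into one file.
    concat_list = workdir / "list.txt"
    concat_list.write_text(f"file '{first}'\nfile '{licensed_video_part}'\n")
    subprocess.run([
        "ffmpeg", "-y", "-f", "concat", "-safe", "0",
        "-i", str(concat_list), "-c", "copy", output,
    ], check=True)

Because the result is a single file, the audio and video tracks cannot drift apart during playback, which is the synchronization benefit described for the seamless file 540.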
FIG. 5H shows an example mediagram that uses a solid stream of audio 542 that then transitions into a video excerpt 544. For example, in this scenario the music and video tracks are licensed for the entire mediagram, even though the video is only presented during the excerpt 544 portion. The personalization in this example is placed on the video track before or after the excerpt 544. The mediagram system can determine the start and end times for the beginning and end of the video, not the excerpt in the middle, which can provide users with the ability to extend and contract the audio of the clip depending on how much personalization is added before or after the excerpt without having to use fillers or loops. Thus a mediagram can have varying beginning and end times within the actual song. Personal images and text (and other personal media) can have a preset duration while personal videos can vary in length. In this example, the entire music video (and accompanying audio layer) can be available within the app. The cue point for beginning to play the audio layer of the music video would be determined programmatically on the fly by counting the time-values (e.g., in seconds) associated with each piece of user-supplied personalization. The licensed audio layer (no video) portion of the video file can be played starting from the calculated cue point. The video portion of the licensed music video can be masked during that time. At the end of the user personalization portion of the playback, the audio layer of the licensed music video can then continue to play, and the video layer can become visible for the licensed video playback portion of the presentation. -
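A hedged sketch of that on-the-fly cue point calculation follows: the audio layer is started early enough that it reaches the licensed video excerpt exactly when the personalization finishes, with the video layer masked until then. The clamping at zero for very long personalization is an assumption.

def audio_cue_point(excerpt_video_start, personalization_durations):
    """Return the time (seconds into the song) at which to start the audio layer.

    excerpt_video_start is where the licensed video portion begins within the
    full music video; personalization_durations holds the per-item time values
    (preset for images and text, measured for personal videos).
    """
    lead_in = sum(personalization_durations)
    return max(0.0, excerpt_video_start - lead_in)

# Example: the excerpt's video begins 45 s into the song and the sender added
# 12 s of photos and text, so audio playback is cued at 33 s.
print(audio_cue_point(45.0, [3.0, 3.0, 6.0]))   # -> 33.0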
FIG. 5I shows an example mediagram in which a user has uploaded a voice recording for apersonalization 546, and an audio clip is created and licensed by the system for apersonalization 548. -
FIG. 5J shows an example mediagram in which a video stream of audio 550 overlays a variety of personalization options, and in which a video excerpt 552 is optionally included. - In
FIG. 5K , the audio and video portions of amusic video excerpt 556 are licensed along with the audio tracks (not video) for thepersonalization sections excerpt 556. This scenario can involve the two audio only files and a video file being licensed and combined into the mediagram, and can be used, for example, when sync rights will not permit personalization on top of the video portion. Similar toFIG. 5H , personalization before and after theexcerpt 556 can be variable in length. -
FIG. 5L depicts an example scenario in which a template for the video excerpt guides the user in how to best personalize the mediagram. In this example, the template guides the user to designate the most impactful, sentimental picture, which the mediagram system can automatically place immediately after the video excerpt. Alternatively, the mediagram system can permit the user to place personal media content at various locations and can identify the location where the most impactful picture should be placed, which can correspond with the chill moment. -
FIG. 5M depicts an example mediagram 562 that has anintroduction 560 appended to the start of the mediagram 562 and, in some instances, anend 564 appended to the end of the mediagram 562. Theintroduction 560 can include any of a variety of different combinations of visual content and audio content, such as the examples 566 a-c (other combinations are also possible). For example, theintroduction 560 can include a combination of preselected introductory visual content and preselectedintroductory audio content 566 a, a combination of personalized visual content and preselectedintroductory audio content 566 b, a combination of preselected introductory visual content and personalizedaudio content 566 c, and/or other combinations (e.g., combination of preselected and personalized visual content, combination of preselected and personalized audio content). Preselected introductory audio content can be, for example, an audio mark (e.g., music or other audio files that identify a good or service). Preselected introductory video content can be, for example, a visual mark (e.g., logo, animation, name, or other visual content that identifies a good or service). Personalized visual content can be, for example, videos and/or images selected by a user (e.g. user-generated photos and/or videos). Personalized audio content can be, for example, audio recordings selected by a user (e.g., user-recorded audio message). The personalized visual and/or audio content may extend into and be part of the mediagram 562. - Additionally and/or alternatively, the mediagram 562 can include the
end 564, which can be similar to the intro 560 in that it can include preselected and/or personalized audio and/or visual content. For example, the end 564 can include a combination of preselected visual and audio content 568 a, a combination of personalized visual content and preselected audio content 568 b, a combination of preselected visual content and personalized audio content 568 c, and/or other combinations. -
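Expressed as data, these bookend combinations could be modeled as in the short sketch below; it simply enumerates the end combinations 568 a-c named above and is not an actual format used by the mediagram 562.

from dataclasses import dataclass

@dataclass
class Bookend:
    """Composition of an introduction 560 or end 564 section."""
    visual: str   # "preselected" (e.g., a visual mark) or "personalized" (user photos/videos)
    audio: str    # "preselected" (e.g., an audio mark) or "personalized" (user recording)

END_OPTIONS = [
    Bookend(visual="preselected", audio="preselected"),    # 568 a
    Bookend(visual="personalized", audio="preselected"),   # 568 b
    Bookend(visual="preselected", audio="personalized"),   # 568 c
]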
FIG. 6 is a conceptual diagram of anexample system 600 for generating personalized media content, such as mediagrams. Theexample system 600 can facilitate a user search of a video (e.g., a music video), and can present a custom list of videos to the user based on various specified search parameters. In response to receiving a user selection of one of the videos from the custom list of videos, for example, thesystem 600 can integrate various user-provided media items (e.g., audio, video, text, and/or images) with an excerpt of the selected video, based on a personalization template associated with the selected video. - For example, the
system 600 can present user interfaces, such as theuser interface 602, to assist the user in selecting a video for a particular recipient. A variety of different features can be used to guide selection for the user's self-expression and for the best fit video for the recipient. For example, theinterface 602 can provide a set of questions to guide the user with selecting the best video to express themselves (e.g., provide engagement announcement), not particularly to a specified person. Libraries can be provided for the user on each of the personalized content types: text, photos, videos/animations. The style and tone of the library content presented to a user can be pre-filtered based on prior personalization choices presented within the app (interests, age, sex, etc.). The specific library content presented can also be based on the category of the chosen music video and/or specific tagged keywords applied to it by administrators of the app. - In another example, the
interface 602 can provide a set of questions to guide the user with picking the best video to tell a specific user a specified message they want to get across (e.g., provide well wishes to a friend who lost a loved one). In addition to a user being able to provide explicit search parameters, the system can use data associated with the user profile and/or supplied at the time of song selection, such as occasion, age, and the nature of the relationship, to guide song selection and suggest personalized content. The application can programmatically filter the song list based on tags applied to the song list at the database level. These tags can be similar to the allowed profile choices (age, relationship type, etc.). - In a further example, the
interface 602 can prompt a user to answer questions about the user and/or recipient to pick which song to use. This can create a personal list of songs to choose from. Such questions can be based on, for example, demographics of the recipient (age, gender, occasion, favorite genre of music, relationship, message the user wants to get across). The answers and contact information (e.g., email address or other unique identifier) can be stored to create a profile for the user if they sign up with the app. If the recipient responds to the mediagram and signs up, the profile that was saved can populate the recipient's profile. - In another example, cached profiles for recipients generated by other users can be leveraged in song selection. For example, recipient/user profiles can be built based on information other users have provided. When finding what song to pick, stored answers and links to accounts/email addresses can be used to identify songs.
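A hedged sketch of the tag-based filtering described above might look like the following; the tag vocabulary, field names, and example songs are assumptions used only to show the shape of the lookup.

def filter_songs_by_profile(songs, profile):
    """Keep songs whose database tags allow every answered profile field."""
    def matches(song):
        tags = song.get("tags", {})
        return all(answer in tags.get(field, {answer})   # untagged fields never exclude a song
                   for field, answer in profile.items())
    return [song for song in songs if matches(song)]

songs = [
    {"title": "What I Like About You",
     "tags": {"relationship": {"romantic", "friend"}, "occasion": {"anniversary"}}},
    {"title": "Firework",
     "tags": {"relationship": {"friend", "family"}, "occasion": {"encouragement"}}},
]
print(filter_songs_by_profile(songs, {"relationship": "friend",
                                      "occasion": "encouragement"}))   # keeps "Firework" only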
- As shown in
FIG. 6 , theexample system 600 can include a personalizedvideo creation system 620. The personalizedvideo creation system 620, for example, can be implemented using one or more computer servers. In some examples, the computing server(s) can include various forms of servers, including, but not limited to a network server, a web server, an application server, or a server farm. The computing server(s) may be configured to execute application code associated with a variety of software components (e.g., modules, objects, libraries, services, etc.) and/or hardware components. Two or more software components, for example, may be implemented on the same computing device, or on different devices, such as devices included in a computer network, a peer-to-peer network, or on a special purpose computer or special purpose processor. Operations performed by each of the components may be performed by a single computing device, or may be distributed to multiple devices. - The personalized
video creation system 620, for example, can provide various user interfaces (e.g., web interfaces, client/server interfaces, etc.) for presenting information to users through various types of user devices (e.g., laptop or desktop computers, tablet computers, smartphones, personal digital assistants, or other stationary or portable devices), and for receiving input from the user devices in regard to generating personalized videos. The user devices can communicate with the personalizedvideo creation system 620, for example, over one or more networks, which may include a local area network (LAN), a WiFi network, a mobile telecommunications network, an intranet, the Internet, or any other suitable network or any appropriate combination thereof. - Operations performed by the
example system 600 and the personalizedvideo creation system 620 will be described in further detail below, for example, with reference toFIGS. 7A and 7B . -
FIG. 7A is a flowchart of anexample technique 700 for generating personalized videos. Theexample technique 700 can be performed by any of a variety of video generating systems, such as the personalized video creation system 620 (shown inFIG. 6 ). - User search parameters can optionally be received (702). Referring to
FIG. 6, for example, a video search interface 602 can be presented at a user device, the interface including a set of controls (e.g., text input controls, option selection controls, etc.) through which a user can specify values for one or more parameters to facilitate a search and selection of a video. In the present example, the video search interface 602 includes a control for specifying an age, a control for specifying a gender, a control for specifying a relationship, a control for specifying a message to be sent, a control for specifying a preferred genre of music, a control for specifying a favorite artist, a control for specifying a favorite song, and a control for specifying an emotion to be expressed. A user of the personalized video creation system 620 may want to send a personalized video that incorporates a friendly/romantic/upbeat message, for example. The user can select one or more appropriate values using one or more corresponding controls in the video search interface 602, for example, and can submit the selected values to the personalized video creation system 620 as user input 604. - Video options can optionally be displayed (704). In response to the user input 604 (shown in
FIG. 6 ), for example, the personalizedvideo creation system 620 can identify one or more videos that match the user selected values of the various search parameters. A corpus of videos (not shown), for example, may be indexed per the search parameters to facilitate subsequent searches. In response to the user input 604, for example, a custom list ofvideo options 606 can be presented at the user device. In the present example, the custom list ofvideo options 606 includes Katy Perry's “Firework,” Outkast's “Hey Ya,” and the Romantics' “What I Like About You.” - A user selection of a video can be received (706). For example, a user can select one of the videos presented in the custom list of video options 606 (shown in
FIG. 6 ) presented at the user device. As another example, a user can submit a video title, a video title and an artist name, or another sort of video identifier. In the present example, the user selects the Romantics' “What I Like About You,” as indicated by user selection 608. - A determination is made of whether a preselected excerpt of video is available (708). For example, each of the videos included in an indexed corpus of videos may be associated with a corresponding preselected excerpt of the video. A video excerpt, for example, can be a portion of the video, and can be of a duration that is less than a duration of the video itself. The video excerpt can include one or more impactful moments, such as moments for which the video and/or associated music are generally recognized, such as a chorus of a song, a popular scene of a video, or another sort of impactful moment. Video excerpts can be manually and/or automatically generated.
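The decision at steps 708, 710, and 712 can be summarized in a few lines of Python; the dictionary of stored excerpt definitions and the callable implementing the generation technique of FIG. 7B are placeholders for the real components.

def get_excerpt(video_id, preselected_excerpts, generate_excerpt):
    """Use a preselected excerpt when one exists, otherwise generate and cache one."""
    excerpt = preselected_excerpts.get(video_id)     # step 708: is one available?
    if excerpt is not None:
        return excerpt                               # step 712: retrieve it
    excerpt = generate_excerpt(video_id)             # step 710: generate it (FIG. 7B)
    preselected_excerpts[video_id] = excerpt         # add to the corpus for future requests
    return excerpt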
- If the preselected excerpt of video is unavailable, an excerpt of the selected video can be generated (710), as will be discussed in further detail with regard to
FIG. 7B . - If the preselected excerpt of video is available, the preselected excerpt can be retrieved (712). For example, in response to the user selection 608 (e.g., a selection of the Romantics' “What I Like About You”), a video excerpt 610 (shown in
FIG. 6 ) can be retrieved. Thevideo excerpt 610, for example, can be of a duration that is less than that of the video, such as fifteen seconds, thirty seconds, a minute, or another suitable length of time. In some implementations, a duration of a video excerpt may be based at least in part on video and/or musical elements of the video. For example, beginning and/or end points of a video excerpt may occur during scene transitions of a corresponding video, musical transitions (e.g., transitions between a chorus and a verse, transitions to and from solo portions) of the corresponding video, or other appropriate transition points. In general, video excerpts can include a continuous audio track from an original video, and can include a segmented video track which includes one or more portions of the original video, and one or more personalization locations for user provided media. The portion(s) of the original video and the personalization location(s) can occur at any position within a video excerpt, such as at the beginning, middle, or end of the excerpt. In the present example, thevideo excerpt 610 includes afirst personalization location 612 at the beginning of the excerpt, aportion 614 of the original video in the middle of the excerpt, and asecond personalization location 616 at the end of the excerpt. Thevideo excerpt 610 of the present example also includes acontinuous audio track 618 from the original video, such that the audio track is synchronized with theportion 614 of the original video. - A prompt can be provided for the user to provide personalized media (714). For example, each video excerpt can be associated with one or more corresponding personalization templates, which can be retrieved by the personalized
video creation system 620 from a data store of personalization templates 624 (shown inFIG. 6 ). In general, personalization templates can be used by the personalizedvideo creation system 620 to place user provided media in appropriate personalization locations of a video excerpt. For example, a personalization template for the video excerpt can include locations for user provided text, user provided video (e.g., including audio), and a user provided image. In the present example, the user can be prompted to provide each media item in accordance with the personalization template, such as through a prompt to “type a hello message for the recipient,” a prompt to “upload a short video telling the recipient what you like about them,” a prompt to “upload a funny picture,” and a prompt to “type a goodbye message.” As another example, the user can provide one or more media items of the user's choice, and the personalizedvideo creation system 620 can match the provided media items by media type to a suitable personalization template for the video excerpt. For example, in response to receiving a series of images from the user, the personalizedvideo creation system 620 can select a suitable personalized template for the video excerpt that is configured to accept media items of the received type (e.g., images). Templates can suggest stored media content that is appropriate to include with a particular video, such as famous quotes, canned “helper” text/templates for different types of mediagrams, and/or libraries of artwork, graphics, and pre-generated text. - User-provided media can be automatically placed in one or more designated personalization locations in the excerpt of video (716). For example, the personalized
video creation system 620 can place the user providedmedia 622 in designated personalization locations in the video excerpt 610 (shown inFIG. 6 ). In the present example, text provided by the user (e.g., in response to the prompt to “type a hello message for the recipient”) is placed in a designatedpersonalization location 622 a. A video provided by the user (e.g., in response to the prompt to “upload a short video telling the recipient what you like about them”), for example, is placed in a designatedpersonalization location 622 b. Audio associated with the provided video is integrated with (e.g., overlaid on) thecontinuous audio track 618 from the original video, for example, at a designated personalization location 622 e. The image provided by the user (e.g., in response to the prompt to “upload a funny picture”), for example, is placed in a designated personalization location 622 c. Additional text provided by the user (e.g., in response to the prompt to “type a goodbye message”), for example, is placed in a designated personalization location 622 d. - A preview can be provided to the user (718). For example, in response to receiving the one or more media items from the user, the personalized video creation system 620 (shown in
FIG. 6 ) can generate a preview of the video excerpt integrated with the user provided media item(s). The preview can be provided to the user at the user's device, and the user can be given an option to modify and resubmit the media items. - A personalized video (e.g., mediagram) can be finalized (720). For example, the personalized video creation system 620 (shown in
FIG. 6 ) can finalize the personalized video, including generating a file of a selected type (e.g., .AVI, .FLV, .GIF, .MOV, .WMV, .MP4, etc.). After finalizing the personalized video, for example, the file can be sent to one or more selected recipients and/or posted to one or more social media platforms. - For example, a proprietary file format for the
video 610 can be used for personalizing the video: to allow for the smooth and accurate playback of the various elements involved (photos, looping audio, licensed video, text, etc.), a single video file should be constructed in a special fashion. The video file can include three segments. First, a lead segment (612) can include a blank/black video track and looping audio repeated in the audio track. This lead section can be standardized to be at least of a certain length capable of playing during the time the personalized content would be displayed. Second, the middle segment (614) of the proprietary file can include the licensed music video and the accompanying licensed track. Third, the last segment (616) of the proprietary file format can again include blank/black video with looping audio played on the audio track. - This file configuration can be used to programmatically determine the length of time the personalized content needs to be displayed and to cue the video file (programmatically) at the proper point within the blank video and looping audio segment, over which the photos and text are displayed. When the personalized content display period has completed, the looping audio track can seamlessly transition (within the underlying video file format) to play the licensed music and audio segment of the file. Finally, a smooth transition can occur to the looping audio once again (located in the final segment of the proprietary file) as additional personalized content is displayed over the blank video portion of the file. Note that the looping audio could alternately be replaced by a seamless audio track that transitions into and out of the licensed video segment. In this way the format is flexible enough to allow for both kinds of audio content (looping or seamless) to play during the personalized content. A benefit of this approach is that it reduces the complexity of synchronizing content programmatically and pushes the organization work into the content editing and preparation process, while also circumventing potential issues that sequencing the audio together as individual files could introduce, such as small gaps or glitches in the audio playback moving from one file to the next. -
FIG. 7B is a flowchart of an example technique 750 for generating an excerpt of a selected video. The example technique 750 can be performed by any of a variety of video generating systems, such as the personalized video creation system 620 (shown in FIG. 6). - A video can be retrieved (752). For example, the
video creation system 620 can retrieve a video from a corpus of videos. Retrieving the video, for example, can be performed in response to a user selection of a video, when a corresponding preselected excerpt of the video is unavailable. As another example, one or more videos can be selected for automatic generation of corresponding excerpts, and the selected video(s) can be retrieved for further processing. - The video can be automatically analyzed to identify one or more emotionally impactful moments (754). For example, the
video creation system 620 can automatically analyze the retrieved video to identify emotionally impactful moment(s) in the video, such as portions of the video and/or associated music which are generally recognized as causing an emotional impact. - In some implementations, automatically analyzing the retrieved video can include performing an automatic analysis of the video content and/or associated audio content. For example, video analysis of the video content can be performed to identify portions of the video which include a close up of a performer in the video of a particular duration (e.g., several seconds), which may be associated with an emotionally impactful moment. As another example, audio analysis of audio content associated with the video can be performed to identify portions of the video which include various musical transitions (e.g., significant volume level changes, key changes, transitions between solo instrumentation and singing, etc.), which may be associated with an emotionally impactful moment. As another example, text analysis of time indexed lyrics associated with the video can be performed to identify portions of the video which include lyrics that correspond with a song title associated with the video, a particular topic (e.g., love, happiness, etc.), or another sort of lyric that may indicate an emotionally impactful moment.
- In some implementations, automatically analyzing the retrieved video can include performing an automatic analysis of user interaction data associated with the retrieved video. For example, user interaction data may include video play data for the retrieved video from various users. Identifying emotionally impactful moments, for example, can include identifying portions of the video which are frequently replayed by users. As another example, a video presentation platform may provide users with an option for generating video clips, and identifying emotionally impactful moments can include identifying portions of the video which have frequently been included in user generated video clips. As another example, a video presentation platform may provide users with an option for indicating a point in time in the video to commence playback, and identifying emotionally impactful moments can include identifying the point in time that has frequently been selected.
- For example, emotionally impactful moments can be identified by using one or more of the following features:
-
- referencing song portion/hook start and stop times in third-party online library,
- truncating song based on lyric analysis and corresponding lyric timestamps in the song,
- waveform analysis to identify optimal ‘chill’ moment based on dramatic waveform changes, such as changes in verse/chorus, pitch, tempo, loudness, verse/chorus transitions based on changes in amplitude/frequency, determine hook from repeated waveform sections, and/or determine instrumentation change moment, e.g. small ensemble to full orchestra,
- Tagging the hook, chorus, intro, outro, and other song features,
- Analyzing beats-per-minute (BPM) data to determine, for example, whether a chill moment exists within the song (e.g., ballads are slower and often do not include chill moments, whereas dance-able tunes are faster and more often include chill moments),
- Identifying the first iteration of optimal excerpt, e.g. the first verse/chorus transition, as the primary ‘chill’ moment sets the stage for the rest of the song,
- Assigning weights to selection criteria, such as verse/chorus transitions being given more weight than repetitious outros, additional weights being given to input reflecting user preferences, profile, relationship to recipient, and additional weights being assigned to music visuals referencing iconic sections, and
- Modeling the classification criteria around human utterances, such as lower frequencies being associated with more somber mood, whereas higher frequencies indicate alertness, excitement, and slower tempos trigger reflection, faster tempos inspire motion (entrainment)—a melody's pitch generally rises and then falls, which can allow for following the ‘melodic arc’ to identify phrases.
- Chill moments within media content can be based on a variety of factors, such as mode, mood, tone, pitch, tempo, and/or volume. Many of these factors couple together and are used in tandem, and a combination of these factors (2 or more factors) can provide chill moments. To fully capture a chill moment in a way that emotionally impactful, an excerpts can be shorter in length, such as from 10 seconds up to over a minute for the music video portion of the song.
- As discussed above, chill moments, also known as goose bumps and shivers down the spine, are physiological reactions to visual art, speech, beauty, physical contact and music. The autonomic responses can include increased heart rate, increased respiration, forearm muscle activity, increased skin conductance and forearm pilo-erection (hair-raising). Chills induced by music are evoked by temporal phenomena, such as expectations, delay, tension, resolution, prediction, surprise and anticipation. Chills are evidence of the human brain's ability to extract specific kinds of emotional meaning from music.
- Neural mechanisms of chills involve increased blood flow in the brain regions responsible for movement planning and reward, specifically the nucleus accumbens, left ventral striatum, dorsomedial midbrain, insula, thalamus, anterior cingulate, supplementary motor area and bilateral cerebellum. Decreased blood flow in the brain during chill moments has been observed in areas known to process emotions and visual stimuli, namely the amygdala, left hippocampus and posterior cortex.
- Chill moments are most often generated by stark musical contrasts, e.g. dramatic changes in mode (minor to major), loudness (soft to loud), tempo (slow to fast), mood (sad to happy), tone (dull to bright) and pitch (low to high). Lyrical passages can trigger chill moments; however, the effect is secondary to the musical effect.
- People are most likely to experience chills when listening tunes with which they are familiar and have learned to appreciate, comparing what they are hearing with their recalled musical model. The music builds tension up to the chill moment (the build-up is longer for romantic, mood-changing chills), at which point there is a resolving release and a concomitant emotional/physiological response.
- Chill moment identification and use in mediagrams to provide emotionally impactful messages to recipients can be specifically accomplished using the techniques and systems described throughout this document.
- Referring back to
FIG. 7B , a starting point for a video excerpt can be identified, based at least on part on a target emotionally impactful moment (756). For example, the target emotionally impactful moment can be identified based at least in part using automatic analysis. The starting point for the video excerpt can be designated as occurring at the beginning of the target emotionally impactful moment, or can be designated as occurring at a point in time before the beginning of the moment. In some implementations, the point in time before the beginning of the moment can be a predetermined amount of time (e.g., 15 seconds, 30 seconds, etc.). In some implementations, the point of time before the beginning of the moment can be automatically selected based at least in part on video and/or musical elements of the video. For example, the beginning point of a video excerpt may occur during a scene transition, a musical transition, or at another suitable transition point. - An ending point for the video excerpt can be identified, based at least on part on the target emotionally impactful moment (758). The ending point for the video excerpt can be designated as occurring at the end of the target emotionally impactful moment, or can be designated as occurring at a point in time after the ending of the moment. In some implementations, the point in time after the ending of the moment can be a predetermined amount of time (e.g., 15 seconds, 30 seconds, etc.). In some implementations, the point of time after the ending of the moment can be automatically selected based at least in part on video and/or musical elements of the video. For example, the ending point of a video excerpt may occur during a scene transition, a musical transition, or at another suitable transition point.
- For example, designating the starting and ending point can include automatically identifying natural and seamless entrances and exits of the excerpt. In particular, the automatic identification can avoid jarring, altering pitch, dead air, off beat, unnatural entrances and exits to the excerpt. Additionally, the automatic identification can establish complete messages and sentiments, thoughts, ideas, phrases, etc. for the excerpt (not truncating messages).
- One or more portions of the video excerpt can be designated for personalization (760). For example, the target emotionally impactful moment can be associated with a duration (e.g., based on automatic analysis of the retrieved video and/or user interaction data associated with the video), and portions of the video excerpt that occur outside of the duration of the moment can be designated for personalization.
- The video excerpt can be finalized for personalization (762). For example, portions of the video excerpt that are designated for personalization can be removed, transition effects can be applied such that video and/or audio appropriately fades in and out, and other suitable finalization techniques can be applied to the video excerpt. After finalizing the video excerpt, for example, it can be added to a corpus of preselected video excerpts.
- The
techniques - In another example, streaming media services (e.g., SPOTIFY, PANDORA) can be incorporated into the system to permit near limitless selection from existing databases and instantaneous access. Such services may present various disadvantages, such as copyright issues, song version variation, spoofed titles, and improper truncation that would have to happen on the fly and decrease mediagram quality.
- In another example, a user's device song library could be used as a source of new media content for incorporation into mediagrams and, possibly, into the system library.
-
FIG. 8A is a conceptual diagram of an example social media platform 800 for providing improved and more meaningful social interactions among users. The platform 800 can include a variety of features that provide a variety of benefits over conventional social platforms. In particular, features of the platform 800 aim to build better relationships among users by promoting sincere social interactions (no shallow interactions), to put emotion and meaning back into social media, to provide both sender and receiver with insights (whether realized or not), to offer private person-person communication (as opposed to communication in front of a broader audience), to provide a fun/playful tone that makes relationships easier to maintain and more rewarding, to assist users in conveying and sending sentiments more accurate to actual feelings (versus free form text), and to provide social media that can be useful and uplifting to all users, including introverts. - The platform 800 improves upon social platforms in a number of ways. For example, the platform 800 uses gamification to create scarcity within the platform, such as scarcity with the number of mediagrams that can be generated and distributed on the platform 800, and scarcity with regard to the frequency of interactions among users. A relationship concierge can also employ scarcity in providing prompts at a deliberate pace (e.g., one prompt per day), which can cause users to wait for the chance to use “high value” prompts (i.e., scarcity can cause users to use these high value prompts less frequently). Scarcity on the platform 800 can also mimic real-world interactions (e.g., receiving cards/gifts/sentiments) among users that are less frequent than on social platforms and, in general, more meaningful. Scarcity on the platform 800 can also reduce burn out among users and can promote regular schedule of usages.
- The platform 800 can also draw on game theory to improve social interactions and relationships. For example, subtle visual and audio cues (e.g., a message being concealed and unwrapped like a gift) can be used when viewing/responding to delivered prompts to enhance the emotional state of the user when viewing/receiving the delivered items. Rewards can be used to encourage desired behaviors, such as rewards for improvements in a relationship (e.g., more frequent interactions, more meaningful interactions) and/or establishing new relationships. Such rewards can include, for example, points, ratings, icons, symbols, branding methods, emojis, and/or other features to represent relationship states.
- The platform 800 can also incorporate a relationship concierge that can help users improve variety, depth and frequency of communication within relationships. Such a relationship concierge can use artificial intelligence (AI) algorithms and systems to predict interests, supply content and guide the user towards more meaningful relationships. For example, the relationship concierge can understand who the people involved in a relationship are, and can create a smart wall that prompts the users to interact with each other on the wall in particular ways to improve and maintain their relationship. The relationship concierge can be fed information about users (e.g., interests, demographic information) and their relationship (e.g., common interests, type of relationship), and can churn on that information with its AI techniques to determine, for example, prompts to insert directly into the users' shared private wall to facilitate continued and improved communication. To avoid annoying some users and to allow for varied interest in the relationship concierge, its involvement can be adapted to match user preference (e.g., increase or restrict involvement in the relationship). Such interest in the relationship concierge can be explicit (e.g., user-designated concierge settings) and/or implicit (e.g., user liking or disliking certain prompts from the concierge).
- The platform 800 can decrease the anxiety associated with social network interactions being in front of a broader audience, which can cause users' interactions to be more guarded and less authentic, through the use of private walls that are one-on-one between users. With private walls, only the participants in the wall are able to view/contribute to the conversation. Prompts from the relationship concierge can be presented on private walls. For example, prompts can include questions (e.g., individual questions, instructions, ideas for topics, joint questions that are asked to both users with the answers only being presented if both users answer), drawing pictures (e.g., draw pictures and send to each other), games (e.g., creating a story line by line, hang man, 20 questions), challenges (e.g., take a snap of yourself doing something fun/unique, user-designated challenges), articles, pictures (e.g., creating memes and commenting on pictures, Rorschach test, photo hunt), other creative options (making pictures, memes, art, jokes), and/or other options. Private walls can use extrinsic stimulation (e.g., using colors, movement and sound to keep users' attention) and intrinsic stimulation (e.g., creating an environment that fosters an intimate connection) to engage users. Such private walls can, for example, create environments that foster communication among both extroverted and introverted individuals who are looking for social media that is more protective and thoughtful than traditional social media, that includes more intimate communication using media, and that offers protection, reassurance, and control over messages (e.g., knowledge of who sees the messages, who can see the messages, time-limited duration).
- As noted in the previous paragraph, the platform 800 can use temporal aspects to reduce anxiety and uncertainty. In one example, messages can have a lifespan and will be inaccessible to users/deleted from the server once they expire. In another example, messages can only be viewed a limited number of times, can only be viewed after a specified period has elapsed, etc. Such features (time limits, view limits) can be controlled and designated by the user. Similarly, the platform 800 can permit users to create messages that are sent at a predetermined time (e.g., sent next Thursday at 10 am) and/or after an event has occurred (e.g., user returns from vacation).
- Additionally, the platform 800 can put security measures in place to provide assurances and protection for user privacy, such as restricting private walls to the participants and/or providing controls restricting, and notifications regarding, screenshots taken of content. For example, the platform 800 and mobile apps running on client devices can prohibit forwarding messages outside of the app, restrict access to a shared wall to its participants, block the taking of screenshots (the recipient is notified if a screenshot is attempted), disable the device's ability to copy and paste text and/or images, and prevent downloading pictures and messages and/or forwarding content to other users.
- In addition to private one-on-one walls, the platform 800 can provide group walls that, similarly, are restricted to only the participants within the group. Group walls can be shared by more than two members and can create a venue to share thoughts, ideas, and commentary on topics, as well as a place to share pictures, videos, and other media content with specific people. Each user who is part of a group can view all comments/postings in the group. Each group can have an organizer who controls the group through group membership, topics, lifespan, moderation, and/or other group parameters. Members of a group can contribute to conversations, but are not permitted to control group parameters. The organizer can be identified to the group. As with private one-on-one walls, group walls can also have relationship concierges that help supply and insert content into the group, such as topics of common interest (either explicitly identified by the group or implicitly determined from user preferences). For example, the relationship concierge can prompt a group wall with different media types, such as pictures, questions, games, current news articles, memes, "good news" stories (e.g., stories that are relevant and positive, aimed at creating thought-provoking and inspirational dialogue), pop culture questions, themes, and/or other features. Also similar to private one-on-one walls, group walls can include temporally limited content as well as having a time-limited existence. For example, the organizer and/or system can set a lifespan for the group, which can be noted to the group, after which the group will automatically dissolve and all of the content from the group wall will be deleted. Group walls can foster an environment for "self-regulated" discussion/sharing groups, which can permit the organizer and/or group members to remove users from the group, either through organizer admin approval, a vote of the users, and/or other features. Content within the group wall can be automatically analyzed, flagged, and deleted if deemed inappropriate (e.g., trolling, hate speech).
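- A minimal sketch of how a group wall's organizer-set lifespan and automatic dissolution might be handled is shown below. The GroupWall class, its fields, and the dissolve_if_expired method are illustrative assumptions rather than the platform's actual implementation.

```python
# Hypothetical sketch of group-wall lifespan handling; class and method names
# are illustrative assumptions rather than the platform's actual API.
from datetime import datetime, timedelta
from typing import List


class GroupWall:
    def __init__(self, organizer: str, members: List[str], lifespan_days: int):
        self.organizer = organizer
        self.members = list(members)
        self.created_at = datetime.utcnow()
        self.expires_at = self.created_at + timedelta(days=lifespan_days)
        self.posts: List[str] = []

    def is_expired(self, now: datetime) -> bool:
        return now >= self.expires_at

    def dissolve_if_expired(self, now: datetime) -> bool:
        """Delete all group content once the organizer-set lifespan has passed."""
        if self.is_expired(now):
            self.posts.clear()
            self.members.clear()
            return True
        return False


if __name__ == "__main__":
    wall = GroupWall("anne", ["anne", "david", "kim"], lifespan_days=30)
    wall.posts.append("Welcome to the group!")
    print(wall.dissolve_if_expired(datetime.utcnow()))                       # False
    print(wall.dissolve_if_expired(wall.expires_at + timedelta(seconds=1)))  # True, content removed
```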
- Referring back to
FIG. 8A, the system 800 includes a social and media platform 802 that provides a social platform, as described in the preceding paragraphs, as well as a media personalization platform, as described above with regard to FIGS. 1-7. The platform 802 operates using a variety of different data sources, including video excerpts 804, personalized videos 806, personalization templates 808, user profiles 810, relationship profiles 812, and social data 814. The user profiles 810 can include user information (e.g., demographics, interests, location) and can model user behavior. The relationship profiles 812 can include relationship information (e.g., users involved in the relationship, type of relationship, duration of relationship) and can model the relationship (e.g., state of the relationship). The social data 814 can include the data on social interactions between users (e.g., messages, posts, prompts, responses to prompts, content views) and other data for the social platform 802. - In the depicted example, user A (associated with computing device 816) and user B (associated with computing device 846) have a private wall for their relationship on the
platform 802. A relationship concierge running on the platform 802 can periodically determine whether and when a prompt should be provided to one or more of the users A and B to help facilitate their relationship. As part of the relationship concierge process on the platform 802, the relationship between the users A and B can be analyzed (step A, 826). Such analysis can include evaluation of a variety of factors and data, including the profiles for the users A and B, the profile for the relationship between users A and B, analysis of historical interactions between the users A and B (e.g., determining a rating for the relationship), and/or other factors. Based on the analysis, the platform 802 determines that a prompt should be provided to user A (step B, 828). - In response to that determination, the prompt is provided to the device 816 for user A (step C, 830) and is presented (822) on the
private wall 818 in sequential order with other interactions 820. The private wall 818 includes an interface 824 for the user to respond; user input is received and provided to the platform (step D, 831-832). - The
platform 802 can receive and store the response (step E, 834) and can determine a minimum time delay for user B to respond (step F, 836). The time delay can vary depending on a variety of factors, such as the state of the relationship, a current trend of the relationship (e.g., becoming closer, becoming more distant), and/or other factors. Once the time delay has been determined, the response 840 and the time delay 842 can be transmitted (step G, 838).
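- The following sketch illustrates one way the minimum reply delay of step F might be derived from the relationship state and its current trend. The state labels, trend labels, and base delay values are assumptions made for illustration only, not values defined by the platform 802.

```python
# Hypothetical sketch of the minimum reply-delay determination (step F).
# The base delays and adjustment factors are illustrative assumptions.
from datetime import timedelta

BASE_DELAY_HOURS = {        # assumed mapping from relationship state to a base delay
    "new": 12,
    "growing": 6,
    "strong": 3,
    "stalled": 24,
}


def minimum_reply_delay(state: str, trend: str) -> timedelta:
    """Return the minimum delay before the recipient may reply to a response."""
    hours = BASE_DELAY_HOURS.get(state, 12)
    if trend == "becoming_closer":
        hours *= 0.5            # closer relationships get shorter enforced delays
    elif trend == "becoming_distant":
        hours *= 1.5            # distant relationships slow the exchange further
    return timedelta(hours=hours)


if __name__ == "__main__":
    print(minimum_reply_delay("growing", "becoming_closer"))   # 3:00:00
    print(minimum_reply_delay("stalled", "becoming_distant"))  # 1 day, 12:00:00
```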
- The device 846 for user B can receive and present the response in the private wall, which includes the earlier message 820, the relationship concierge prompt 848, and the response 850. Based on the delay instructions 842, the device 846 can automatically restrict input being provided (via the input interface 852) to reply to the response 850 until after the delay has expired. - The platform 800 and the
devices 816 and 846 can repeatedly perform these operations A-I in the back-and-forth communication between users A and B, which is configured in such a way by the platform 800 so as to enhance the quality of the social interactions. -
FIG. 8B is a conceptual diagram of another example social media platform 860 for providing improved and more meaningful social interactions among users. The platform 860 can include a variety of features that provide a variety of benefits over conventional social platforms, and can be similar to the platform 800. In particular, like the platform 800, features of the platform 860 aim to build better relationships among users by promoting sincere social interactions, putting emotion and meaning back into social media, providing both sender and receiver with insights, offering private person-to-person communication, providing a fun and playful tone that makes relationships easier to maintain and more rewarding, helping users convey sentiments that more accurately reflect their actual feelings, and providing social media that can be useful and uplifting to all users, including introverts. - For example, the
platform 860 includes a text messaging system 868 that permits free-form, direct messaging between users 862-864 (one-to-one messaging and group messaging). The users 862-864 can generate and distribute the content between each other using the text messaging system 868, such as through entering text (e.g., SMS message), providing multimedia content (e.g., mediagram, videos, photos, MMS message), and/or other content. The platform 860 can also include algorithms and an artificial intelligence (AI) system 870 that can use AI and algorithmic logic to dynamically infuse the text-messaging interface 868 between two users 862-864 with specially selected and curated content 866 that can help facilitate more meaningful interactions and relationships. For example, the algorithms and AI system 870 can model a relationship between the two users 862-864 and use that model to select particular content from the curated content 866, and can present that selected content at particular points in time to facilitate the relationship between the users 862-864. The selected content can be injected into the text messaging system 868 in any of a variety of ways, such as by presenting the content to one of the users 862-864 to prompt that user's interaction with the other user, presenting the content to both of the users 862-864 to facilitate interactions among the users, and/or other mechanisms. The algorithms and AI system 870 can continually improve upon and refine its relationship model for the users 862-864 based on interactions between the users on the text messaging system 868 and their response to content injected into the text messaging system 868 by the algorithms and AI system 870, which can allow the platform to deliver content and experiences that enrich the relationship. - As discussed above, the content that is distributed by the users 862-864 and/or selected from the
curated content 866 can be any of a variety of content, including content excerpts (e.g., 2-6 second clips from mediagrams, 2-6 second clips from videos and/or music). Such content excerpts can be extracted from any of a variety of different content sources, such as music videos, television shows, live television shows (award shows), media (news), movies, sound bites from various media, mediagrams, and/or other content. Content excerpts can be sent as quick self-contained messages and/or in conjunction with other messages, for example, to enhance the overall impact of the messages. Similar to a mediagram, content excerpts can be sent without attaching the underlying content from which the excerpt is extracted and/or without associated user messages. In some instances, words and lyrics of the content excerpts can be included and/or transmitted with the content excerpts. -
FIG. 9A is an example system 900 for providing an improved social media platform with more meaningful social interactions among users. The example system 900 includes a social and media platform 902 (similar to the social and media platform 802), the databases 804-814 described above with regard to FIG. 8A, and user computing devices 924. The platform 902 can include one or more computer servers, such as cloud computing systems. The platform 902 includes a media personalization system 904 with a media analyzer 908 that analyzes media content to identify excerpts for personalization, a personalization assistant 910 that guides users through the personalization processes described above with regard to FIGS. 1-7, and a media finalizer 912 that assembles personalized media content (e.g., mediagrams). - The
platform 902 also includes a social media system 906, which includes a relationship analyzer 914 to determine the state and rating for relationships (see example technique 2000 in FIG. 20, which can be performed by the relationship analyzer 914), a relationship concierge 916 to prompt and facilitate meaningful social interactions among users (see FIGS. 11A-F, 12A-H, 15A-D, and 18A-B, and corresponding description below), a group relationship manager 918 that regulates group walls and group interactions (see FIGS. 16 and 21A-B, and corresponding description below), interactive games 920 that are used on private and group walls to allow users to play games alone or together on the platform (see FIG. 11B and corresponding description of option 1132), and an interactive wellness app manager 922 that provides features for users to self-rate their wellness state and to allow for wellness rating-related interactions among users (see FIGS. 17A-H and corresponding description). - The
relationship concierge 916 evaluates whether and how to prompt users using a variety of relationship data representation and analysis techniques. For example, various values attached to questions (or other promptings in the database 814) can be stored, updated, and evaluated in a hierarchy to determine the timing and nature of prompting delivered to the user. Promptings can have various database values (a simplified code sketch of this scheme follows the list below), such as one or more of the following: -
- a "seriousness rating" value in the database (e.g., 1-100) that indicates how light or heavy the nature of the subject or question is,
- a “nature of relationship” value that indicates the type of relationship between two individuals (e.g., father, brother, coworker, etc.),
- a “topic” value that indicates the category of content (e.g., cars, politics, personal history, food, etc.),
- a “current relevance” value that indicates whether the subject is uniquely topical to current news or events,
- a sine-wave-like pattern that is applied around the seriousness range that is determined to be appropriate as a current starting point for a prompt (e.g., if 40 is the target number, prompting may be delivered in a pattern such as "30, 35, 40, 45, 40, 35, 30", etc.),
- topics can be favored or rejected (e.g., no politics) and stored as various values depending on user responses (e.g., stored as "never use" if rejected strongly, stored as a modifier value if favored),
- the center point of the sine wave pattern of delivery can be "nudged" little by little by the user responding to "More Serious" or "Less Serious" preference promptings,
- frequency of promptings can default to a standard value (e.g., 1 prompting/day), but promptings may be delivered at different times throughout the day using, for example, a sine wave (or random) pattern centered on a given starting point (e.g., to provide an appearance of unpredictability),
- additional variation can be added by introducing randomization that may lead to skipping entire day(s) and/or possibly delivering more than one prompting in a day,
- response times (time of day) can be recorded in the database for the user and analyzed for the periods of most activity to form a favored usage time, which can be applied to questions or contacts that a user is "stuck" on (i.e., the user's relationship with the other user is not progressing or has stalled out), and
- a general "level" value that can be applied to each prompting in the database and used to tally an overall and cumulative "score" for an individual user if they choose to answer a given question. The cumulative score can be used to mark milestones crossed that are either hidden from the user or exposed to the user (i.e., displaying a tiered "level" the user has achieved in the app with a particular friend).
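- A simplified sketch of the seriousness delivery pattern and the cumulative scoring described in the list above is shown below. The rise-and-fall pattern reproduces the example sequence "30, 35, 40, 45, 40, 35, 30" for a target of 40; the function names and default step sizes are illustrative assumptions rather than the database values used by the relationship concierge 916.

```python
# Hypothetical sketch of the seriousness delivery pattern and cumulative scoring.
# The swing, step, and level values are illustrative assumptions.
from typing import List


def seriousness_pattern(center: int, swing: int = 10, step: int = 5) -> List[int]:
    """Rise from center-swing past the center and fall back, e.g. 30,35,40,45,40,35,30 for center=40."""
    rising = list(range(center - swing, center + step + 1, step))   # 30, 35, 40, 45
    return rising + rising[-2::-1]                                   # ... 40, 35, 30


def cumulative_score(answered_levels: List[int]) -> int:
    """Tally an overall and cumulative score from the 'level' values of answered promptings."""
    return sum(answered_levels)


if __name__ == "__main__":
    print(seriousness_pattern(40))          # [30, 35, 40, 45, 40, 35, 30]
    print(cumulative_score([10, 20, 15]))   # 45
```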
- The
user computing devices 924 can each include a mobile app, native app, and/or web application 926 running on the devices 924 that provides interfaces and features for the social platform (e.g., private walls, group walls, personalized media content) directly to users. The application 926 includes a media personalizer client 928 that presents media personalization features in a user interface and that communicates with the platform 902 to create media personalization. The application 926 also includes a media player (e.g., generic media player, special/secure media player) and a social media client 932 that implements the client side of the features for the platform 902. The application 926 also includes an interactive games client 934 to provide interfaces for interactive games and an interactive wellness client 936 to provide an interface for a wellness rating and interaction service provided by the social media system 906. -
FIG. 9B is a diagram of an example system 940 for providing an improved social media platform with more meaningful social interactions among users. The example system 940 is similar to the system 900 and includes components that can be used to implement the social and media platform 902 (similar to the social and media platform 802) and the databases 804-814 described above with regard to FIG. 8A. For example, the system 940 includes a concept model 942, a data model 946, and a content model 950 that are used to model processes, user relationships, content, and other details that are used to provide the improved social media platform. - The
concept model 942 can include data, rules, coding, logic, algorithms, computer systems, and/or other features to implement and provide enhanced social interactions among the users, which can be provided by, for example, the algorithms and AI system 870 described above with regard to FIG. 8B. The concept model 942 can be programmed to implement various psychological principles that are incorporated into the underlying rationale of feature and mechanism design and the AI and algorithmic frameworks that support them. For example, the concept model 942 can provide a feature set that addresses the emotive opportunities and issues, or affective benefits and costs, associated with remote communications, with the end goal being to maximize benefit metrics and minimize or mitigate cost metrics. Benefit metrics can include an emotional expressiveness metric (e.g., a metric assessing the ease with which the platform permits users to express emotional states to others and/or to perceive feelings expressed by others), an engagement and playfulness metric (e.g., a metric assessing whether the platform facilitates communication that is fun and exciting to participants), a presence-in-absence metric (e.g., a metric assessing whether the platform fosters a feeling of closeness and/or connectedness to others even though separated by time or space), an opportunity for social support metric (e.g., a metric assessing the platform's ability to facilitate social support without being physically present, such as providing a general sense of the other person "being there" for you, reducing negative affect (such as soothing anxiety), and increasing positive affect (such as feeling "special" or loved)), and/or other metrics. Cost metrics can include a feeling obligated metric (e.g., a metric assessing to what extent a platform creates an unwanted obligation to connect, such as creating unwanted feelings of obligation or guilt to communicate), an unmet expectations metric (e.g., a metric assessing the platform's propensity to create expectations for communication with others that will not be met and, as a result, have a negative impact on participants), a threat to privacy metric (e.g., a metric assessing the platform's propensity to unexpectedly expose private information to others, concerns that others are eavesdropping on private communication, and concerns that actions may be invading the privacy of others), and/or other metrics.
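- Assuming the benefit and cost metrics above are expressed on a common numeric scale, a minimal sketch of combining them into a single net score to be maximized could look like the following; the 0-1 scale, equal weighting, and function name are assumptions made for illustration only.

```python
# Hypothetical sketch of combining the affective benefit and cost metrics into a
# single net score; the 0-1 scale and equal weighting are illustrative assumptions.
from typing import Dict

BENEFIT_METRICS = {"emotional_expressiveness", "engagement_playfulness",
                   "presence_in_absence", "opportunity_for_social_support"}
COST_METRICS = {"feeling_obligated", "unmet_expectations", "threat_to_privacy"}


def net_affective_score(scores: Dict[str, float]) -> float:
    """Maximize benefits and minimize costs: sum of benefit scores minus sum of cost scores."""
    benefit = sum(v for k, v in scores.items() if k in BENEFIT_METRICS)
    cost = sum(v for k, v in scores.items() if k in COST_METRICS)
    return benefit - cost


if __name__ == "__main__":
    print(round(net_affective_score({
        "emotional_expressiveness": 0.8,
        "presence_in_absence": 0.7,
        "feeling_obligated": 0.2,
        "threat_to_privacy": 0.1,
    }), 2))   # 1.2
```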
- The concept model 942 includes algorithms 944 a and AI 944 b (e.g., algorithms and AI 870), interaction mechanisms 944 c (e.g., routines and/or subsystems to permit and facilitate user interactions), a relationship psychology rulebase 944 d (e.g., rules outlining different relationship models that can be used to categorize relationships among users), and process and data flows 944 e (e.g., processes and data flows to facilitate improved social interactions among users, including obtaining implicit relationship feedback from user interactions, refining relationship modeling, and identifying content and timing to deliver the content to the users). Examples of these content selections and prompts provided to users are described below with regard to FIGS. 11A-F and 12A-H. One of the primary goals of the concept model 942 is to help users stay connected over the long term. The periodicity of prompts provides a cadence (commensurate with the user's wishes for a given relationship) of interactions that otherwise might languish due to ordinary circumstances of life and the typical dynamics of psychological tendencies. Mechanisms which support this goal can include: -
- user control over periodicity and level of content delivery,
- user control over ongoing improvement of content relevance,
- user anticipation via timing indications of prompting content visible in the UI, and
- AI/Algorithms providing relationship-context-aware enhancements.
These and other features can improve social platforms by facilitating long-term use based on users gaining a perceptible recognition of some appreciable enrichment to their relationships.
- The
data model 946 corresponds to the structure and storage of data for the system 940, including the structure and storage of content, user profile data, relationship profile data, histories, and/or other data. The data model 946 includes database schemas 948 a (e.g., table definitions, cloud storage database schemas and distribution), data storage, maintenance, and management procedures 948 b (e.g., data storage policies, cloud based storage policies), and application programming interfaces (APIs) 948 c (e.g., APIs to handle server and user device requests). - The
content model 950 corresponds to the content that is delivered to users on the social platform. The content model includes content sourcing and creation 952 a (e.g., user-generated content, preselected content, content models that can be adapted to personalize prompts to users), a content psychology rulebase 952 b (e.g., rules defining different types of content and their appropriateness to different users), and/or content management procedures 952 c (e.g., processes for curating content over time). The content model 950 and the data model 946 can be used to provide content classifications 958, which can be used to identify relevant content to deliver to users at various points in time depending on any of a variety of factors, such as relationship profiles, user profiles, and/or other relevant details. The content classifications 958 can include content definitions 960 a (e.g., definitions for different types of content) and content taxonomies 960 b (e.g., hierarchical organization of relationships of different types of content). For example, the classification of content into different types of content can include different configurations of media and data, and can rely on multiple different taxonomies across different data dimensions that function together to more accurately classify content for selection and delivery to users. Taxonomies can include, for instance, a modal taxonomy (e.g., a classification of content delivery which considers the combination of the structural mechanism of delivery and the general purpose behind the delivery), a topical taxonomy (e.g., a hierarchical classification of the content itself, i.e., topics and subject matter such as arts, sports, history, music, baseball, Mozart, Babe Ruth), topical metadata (e.g., a non-hierarchical meta-data grouping method to allow retrieval and sorting by criteria such as descriptive criteria (e.g., fun, serious, cultural, academic, controversial) and quantitative criteria (e.g., locality, time sensitivity, age appropriateness, complexity level within topic)), and/or others.
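- A minimal sketch of a content record that carries a modal classification, a hierarchical topical taxonomy path, and non-hierarchical topical metadata is shown below; the field names and the prefix-matching relevance check are illustrative assumptions, not the actual taxonomy implementation.

```python
# Hypothetical sketch of a content classification record combining the modal,
# topical, and metadata taxonomies described above; field names are assumptions.
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class ContentRecord:
    content_id: str
    mode: str                     # modal taxonomy, e.g. "shared_interest", "concierge"
    topic_path: List[str]         # hierarchical topical taxonomy, e.g. sports > baseball > Babe Ruth
    metadata: Dict[str, str] = field(default_factory=dict)   # descriptive/quantitative tags


def matches_interest(record: ContentRecord, interest_path: List[str]) -> bool:
    """A record is relevant if the shared interest is a prefix of its topic path."""
    return record.topic_path[:len(interest_path)] == interest_path


if __name__ == "__main__":
    rec = ContentRecord(
        content_id="c-123",
        mode="shared_interest",
        topic_path=["sports", "baseball", "Babe Ruth"],
        metadata={"descriptive": "fun", "time_sensitivity": "low"},
    )
    print(matches_interest(rec, ["sports", "baseball"]))   # True
    print(matches_interest(rec, ["arts", "music"]))        # False
```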
- Referring to FIG. 9C, an example system 970 for providing an improved social media platform with more meaningful social interactions among users is depicted. The example system 970 is similar to the systems 900 and 940, and to the platform 800 described above with regard to FIG. 8A. For example, the system 970 includes a content store 972 (similar to the curated content 866) from which content is selected and served to users 990. - In the depicted example, the
content store 972 illustrates an example modal classification (an example of content classifications 958), as described above. This diagram represents a simplified example of modal classification of content, but other different and/or more complex classifications are also possible, such as different modes consisting of different data/media configurations that can each have different handling in terms of the data modeling and client-side presentation. - The example modal classification includes example data elements 974 a-f. A first element is a shared subjects of
interest data element 974 a, which represents content that is specifically relevant to a relationship based on a shared interest in a given area of subject matter. This data element can be provided at varying levels of specificity, with more specific categorization aiding in selecting more relevant content for the users in a relationship. The shared subjects of interest element 974 a can include fields corresponding to, for example, facts, news, articles, media, and/or other fields. - A second element is a
concierge data element 974 b that stores data values designed to promote or assist the users' interactions in a pragmatic way, which are used by AI logic 976 to select relevant content for users. Example fields for the concierge data element 974 b include reminders (e.g., personal dates such as birthdays, graduations, and anniversaries, and holidays such as Christmas, Mother's Day, Father's Day, and Valentine's Day), activity suggestions (e.g., an enumerated data field including designations such as simple, involved, random, fun, or context-specific suggestions of activities for the users), emergent (e.g., prompts resulting from AI/algorithmic analysis, such as identifying a keyword from natural language processing such as "dinner" prompting links to local restaurants of shared favorite food types), and/or other data fields. - A third element is a data gather
element 974 c that directs in-line prompts that ask about a user, about other users, about relationships, and/or offer the ability to provide quick feedback about the content being delivered. For example, a normalized data prompts field that is part of the element 974 c can include prompts that can be stored as key-value pairs, such as "How long have you known each other?"—(Number of Years); "What is the relationship—Brother, sister, mother, father, uncle, niece, good friend, new friend, co-worker, wife, significant other?"—(Options List); "What interests do you share?"—(Options List); "Would you like more or less prompts with this connection?"—(More/Less); "Did you like the last mediagram message?"—(Y/N). - A fourth element is a
programs data element 974 d that stores values representing thematic sequences of content and/or prompts. Such a data element 974 d can include any of a variety of fields, such as interpersonal fields that identify current positions along or settings for a sequence of prompts that are more aggressively designed to help learn about the other person, for example a series of prompts on politics, or a helpful series of prompts to help in troubled relationships. Such a field may be configurable by users to have varying levels of controversial or difficult questions/content, such as being configured to have a higher likelihood of generating controversial or difficult questions or content. User interface features can be output across multiple different user computing devices so that setting configurations are purposefully, voluntarily and mutually requested/agreed upon by the users, as opposed to, for example, being independently instantiated by AI or algorithms. - The
data element 974 d can additionally include informative fields that correspond to sequences of content and/or prompts that are presented to users. A variety of different sequences are possible, such as sequences of content and/or prompts pertaining to specific subject matter. For example, different sequences of content and/or prompts can pertain to the history of the French Revolution, basic car maintenance facts, a biography of Steven Spielberg, and/or others. The data element 974 d can also include entertainment fields that correspond to sequences of media content that are presented to users. For example, content sequences can include sequences of short stories, sequences of illustrated series, short comic novels, and/or others. - A fifth element can be an
interactive data element 974 e that stores programmatic elements (e.g., applications, programs, routines) that can be run to promote interactions between users at particular points in time. The interactive data elements 974 e can include, for example, drawing programmatic elements (e.g., interactive drawing programs, such as collaborative drawing programs), game programmatic elements (e.g., interactive games, such as chess or other strategy games), touch points (e.g., features promoting simple user interactions, such as interactive images), entertainment programmatic elements (e.g., videos, music), and/or other programmatic elements. - A sixth element can be a promoted
data element 974 f, which can include promoted content, such as paid advertising content that can be targeted to users based on relationship profiles, user profiles, and/or other information/factors. Promoted data elements 974 f can include, for example, text, links, images, videos, interactive media elements, and/or other types of content containing one or more promotional messages.
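- The sketch below illustrates one way selection logic might choose among the modal data elements 974 a-f for a given relationship; the simple priority rules are illustrative assumptions that stand in for the AI logic 976 rather than reproducing it.

```python
# Hypothetical sketch of choosing a modal data element (974a-974f) for a relationship.
# The priority rules are illustrative assumptions, not the AI logic 976 itself.
from datetime import date
from typing import Dict, List


def choose_mode(relationship: Dict, today: date) -> str:
    """Pick which kind of content element to serve next for a relationship."""
    reminders: List[date] = relationship.get("reminder_dates", [])
    if any(d == today for d in reminders):
        return "concierge"                      # reminders (birthdays, holidays) take priority
    if relationship.get("profile_gaps"):
        return "data_gather"                    # fill missing profile data with quick prompts
    if relationship.get("active_program"):
        return "programs"                       # continue a thematic sequence already in progress
    if relationship.get("shared_interests"):
        return "shared_subjects_of_interest"    # otherwise serve shared-interest content
    return "interactive"                        # fall back to a light interactive element


if __name__ == "__main__":
    rel = {"reminder_dates": [date(2019, 5, 12)], "shared_interests": ["baseball"]}
    print(choose_mode(rel, date(2019, 5, 12)))  # concierge
    print(choose_mode(rel, date(2019, 5, 13)))  # shared_subjects_of_interest
```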
- Referring back to FIG. 9B, the concept model 942 and the data model 946 can be used to provide profiles 954, such as profiles modeling individual users (e.g., user profiles) and profiles modeling relationships between multiple users (e.g., relationship profiles for relationships between two users and/or relationships between groups of more than two users). Profiles 954 can include any of a variety of different types, such as user profiles 956 a, relationship profiles 956 b, relationship histories 956 c, relationship fingerprints 956 d, and/or other profiles. Profiles 954 can be used to identify content that is relevant for presentation to users based on any of a variety of factors, such as user preferences (as represented by the user profiles 956 a), the user relationships (as represented by the relationship profiles 956 b), historical context for user relationships (as represented by the relationship histories 956 c), and relationship fingerprints (as represented by the relationship fingerprints 956 d). - Referring back to
FIG. 9C, an example of the profiles 956 a-d being used by an AI logic system 976 to select content from the content store 972 for dissemination (980) to user devices 982 is depicted. For example, content can be selected from the content store 972 using relationship profiles 956 b, which can include personal user data shared between users as well as all particular information about the relationship. Relationship profiles 956 b can be created by gathering data (984) from the user devices 982. For example, users can have full control over viewing, adding to, and editing the content of each of their relationship profiles 956 b. Various user-facing mechanisms for gathering and storing relationship-specific information can be used to gather data 984 to build relationship profiles 956 b, such as the user devices 982 presenting a profile user interface for direct user viewing and editing of relationship profiles 956 b (e.g., viewing and editing various fields/parameters), in-line social network prompts to obtain quick relationship feedback (e.g., prompts designed to unobtrusively allow the user to alter profile settings on-the-fly through quick feedback, such as through one-click responses), indirect relationship feedback from the user devices 982 (e.g., user reaction (or lack thereof) to selected and presented content), and/or other data. Relationship profiles 956 b can include a variety of relationship-related data, such as data identifying shared interests, relationship length, relationship type (e.g., brother, sister, friend, co-worker), relationship nature (e.g., serious, light-hearted, romantic, platonic), and desired frequency of interaction (e.g., daily, weekly, monthly). - The
user computing devices 982 can present user interfaces designed to allow the user to view data and other inputs being used to build relationship profiles 956 b. Such user interfaces can, for example, present condensed relationship information on a relationship profile dashboard screen, present graphical visualization based on factors such as number of shared interests, activity level, number of prompts responded to, etc., and/or other relationship-related graphical elements. Examples of user interface features to visualize relationships are depicted with regard to FIGS. 13A-C. - The user and relationship profiles 956 b can be generated using the data gathering 984 from the
user computing devices 982, through direct and indirect feedback from the users. For example, profile building input can be directly gathered through participation by the user as they populate user and relationship profiles with information. User interfaces allow users to supply data in a variety of ways, such as information supplied about the user, information supplied about relationships, and information supplied about other users. In another example, direct data prompts can be provided to users directly asking for information, such as small portions of information that can, in some instances, be provided through a "one-click" response and are easily dismissible by the user in order to be unobtrusive. FIG. 14C is a screenshot of an example "one-click" feedback interface in which content 1472 is presented with selectable graphical elements 1474-1476 that the user can select with a single click/selection action to provide feedback related to the content 1472. In another example, user answers to content prompts (e.g., prompts selected from a messaging store 978) can be used to construct the profiles 956 a-d, such as answers to the questions "what color is your favorite?" presented as a clickable grid of colors, or "which historic figure do you admire most?" presented as a selection of photos. In another example, usage data indicating how users access and use the system 970 can be recorded and stored, such as usage data indicating when users message in a relationship, how often, how quickly they respond to prompts, where (GPS) they usually interact with the app, and/or other usage information. In another example, natural language processing (NLP) can be used to analyze free-form textual responses from users answering prompts, and can be used to extract and store key-value pairs and other associated information, such as the frequency with which key words are used, to build a referenceable index for reference in AI model inputs.
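- A minimal sketch of building a keyword-frequency index from free-form prompt responses follows; the simple tokenization and stop-word list stand in for the NLP analysis described above, and the function name is an assumption.

```python
# Hypothetical sketch of building a keyword-frequency index from free-form prompt
# responses; simple tokenization stands in for the NLP analysis described above.
import re
from collections import Counter
from typing import Dict, List

STOP_WORDS = {"the", "a", "an", "and", "or", "to", "of", "in", "it", "is", "was", "we", "i", "my"}


def keyword_index(responses: List[str]) -> Dict[str, int]:
    """Count how often non-stop-word keywords appear across a user's responses."""
    counts: Counter = Counter()
    for text in responses:
        for token in re.findall(r"[a-z']+", text.lower()):
            if token not in STOP_WORDS:
                counts[token] += 1
    return dict(counts)


if __name__ == "__main__":
    index = keyword_index([
        "We should get dinner at that new sushi place",
        "Dinner on Friday works for me",
    ])
    print(index["dinner"])   # 2 -- could trigger a concierge prompt linking local restaurants
```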
- Referring back to FIG. 9B, user interface features 962 can be provided by combining the concept model 942, the data model 946, and the content model 950. The user interface features can be selected based on, for example, the profiles 954 and the content classifications 958. Example user interface features are presented in FIGS. 11C-F and 13B-C. FIGS. 11A-B show general user interface features, and FIGS. 11C-F present example specific user interface features that can be selected for presentation to users. FIGS. 13B-C present example timing indicators for relationships. - Referring to
FIG. 11C, an example touch point user interface feature 1150 is presented. A touch point is a brief interaction prompted by the system (identified by "Mora") that can be more fun than demanding or thoughtful and that can typically have a basic level of interactivity. For instance, the example touch point in this example is a prompt to draw a picture of the other user who is part of the relationship ("Anne"). Other examples of touch points are "tapping one of three emoji-style faces," "tapping a photo of Kirk or Picard," and "drawing a sketch (in-line, in-app) of a Mora suggested subject." Touch points can target a presence-in-absence objective. - Referring to
FIG. 11D , an example shared experiences (content focus) feature 1160 is presented. A shared experience feature can prompt people in a relationship to share a regular but punctuated, periodic and sequenced media experience over an extended period of time. Content can be based primarily on shared subjects of interest. In the depicted example, the shared experience feature that is presented (identified by “Mora”) regards Steven Spielberg films, which is an interest shared by the users. Other examples of shared experience features include “viewing the biography of a favorite historical figure in a series of bite-size content delivered at a cadence desirable by the user(s),” “playing a game of chess within the interface over the course of weeks or months,” and “sharing thematic sequences of content within the interface is like slowly watching a TV series together.” Shared experiences can target a presence-in-absence objective and an engagement & playfulness objective. Shared experiences can have a low risk of negative affective costs. - Referring to
FIG. 11E, an example relationship concierge feature 1170 is presented. The relationship concierge can provide reminders, suggestions, assistance, and/or other prompts to assist users with maintaining and improving their relationships. In the depicted example, a user is reminded that it is his/her friend's birthday and is given a suggestion to send a mediagram (also called a MoraGram), and the user then acts on that suggestion by sending a mediagram. Other example relationship concierge features can include "it's your uncle's birthday next week. He likes fishing and camping—how about a related gift?" and "it looks like you're planning dinner—would you like suggestions?" Relationship concierges can target opportunities for social support, engagement, and playfulness objectives. - Referring to
FIG. 11F, an example enrichment feature 1180 is presented. An enrichment feature can provide prompts and content which encourage learning about the other person on a meaningful level. Enrichment features can have an unassuming approach and can avoid the perception of being "clinical." In the depicted example, the enrichment feature is prompting the user to share a favorite memory with the other user. Enrichment features can target emotional expressiveness objectives. -
FIGS. 13B-C present user interfaces with example relationship timing indicators. Timing indicators are a visual display which surfaces the underlying engagement frequency mechanic, the delivery buffering mechanic, or both. The variations below describe benefits and risks that could result from implementation of these mechanics. FIG. 13B shows an example user interface with a single timing indicator and FIG. 13C shows an example user interface with dual timing indicators. Other quantities of timing indicators are also possible. The timing indicators can present visualization for timing related to one or more of the following relationship features: -
- incoming user messages from system prompts,
- outgoing user messages from system prompts,
- timing since last outgoing message,
- timing since last incoming message,
- timing since last communication (either incoming or outgoing), and/or
- time until system is scheduled to deliver next prompt.
- Referring back to
FIG. 9B, content programming and sequencing 964 combines the concept model 942 and the content model 950. Delivering users content that is relevant and specific can be a significant challenge. If content is too general, the user may perceive it as advertising. For example, if the user provides a general interest in sports when, unknown to the system, the user has a specific interest in the Boston Red Sox, attempts to deliver relevant content falling under the general "sports" classification can cause disengagement and frustration by the user (e.g., serving content related to the NFL or other baseball teams). As a result, the breadth and depth of the content model 950 can have a significant impact on the relevance of content that is selected for presentation to users, and ultimately on user engagement with the system and other users. - The content programming and
sequencing 964 can include a variety of data elements that are being tracked and used to determine when and what content to serve to users, such as engagement frequency, delivery buffering, ephemerality, privacy of shared information, and/or others. Engagement frequency relates to the level of involvement the system has with the user and, more specifically, to particular relationships. A user may set a default value for the desired frequency for prompts and content delivered to the user, and can do this individually for each relationship. For example, the user may choose to set a high frequency (e.g. daily) for a significant other while setting a very low frequency for an old acquaintance (e.g. monthly or quarterly). Frequency setting can be adjusted through direct and/or indirect user feedback, such as adjusting the timing of these feedback prompts based on analytic data of actual user behavior. For example, if the user regularly delays a response to prompts in a given relationship, a frequency adjustment prompt would be delivered to the user. - Delivery buffering is a mechanism which purposely delays the sending and receiving of messages (e.g., prompt responses by users) by a certain amount of time (e.g., hours, days). Delivery buffering is contrary to conventional social media systems which seek to speed up the pace of user interactions. Delivery buffering can provide a variety of benefits, such as allowing users the ability to recall messages, as needed, and to build anticipation during which both senders and recipients are thinking about each other (e.g., incoming message buffering is visually communicated in the UI, such as
FIGS. 13B-C). - Ephemerality refers to messages and content sent between users that will be "removed" after a certain period of time. The window of time that elapses before content is removed can be controlled by the user(s) per relationship. This feature can help preserve user privacy. Privacy of shared information relates to features, which will be implemented, that purposely limit a user's ability to distribute information shared on the platform. The features include disallowing the ability to copy and paste content from the app to other applications, and discouraging the capture of screen contents by use of a device's screen shot feature. Where possible, this device feature would be disabled while using the app. However, device manufacturers have typically not allowed the disabling of the screen shot feature. A method of informing the user that the message-sender is notified of screen shots being taken will be employed as discouragement.
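- The sketch below illustrates, under stated assumptions, the delivery buffering and engagement frequency adjustment described above: messages are released only after a buffer period, and the prompt cadence is slowed when a user regularly delays responses. The buffer length and delay threshold are illustrative values, not parameters defined by the system.

```python
# Hypothetical sketch of delivery buffering and of adjusting engagement frequency
# when a user regularly delays responses; the thresholds are illustrative assumptions.
from datetime import datetime, timedelta
from typing import List


def buffered_release(sent_at: datetime, buffer_hours: int = 12) -> datetime:
    """Delivery buffering: hold a message for a period before the recipient can see it."""
    return sent_at + timedelta(hours=buffer_hours)


def adjust_frequency(current_days_between_prompts: int, response_delays_hours: List[float]) -> int:
    """If the user regularly takes a long time to respond, slow the prompt cadence."""
    if not response_delays_hours:
        return current_days_between_prompts
    average_delay = sum(response_delays_hours) / len(response_delays_hours)
    if average_delay > 48:                       # consistently slow responses
        return current_days_between_prompts * 2  # deliver prompts half as often
    return current_days_between_prompts


if __name__ == "__main__":
    print(buffered_release(datetime(2019, 2, 1, 9, 0)))    # 2019-02-01 21:00:00
    print(adjust_frequency(1, [72.0, 96.0, 60.0]))         # 2 (daily -> every other day)
```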
-
FIG. 10 is a flow chart 1000 with user interfaces that can be presented by, for example, the social media client 932 on the user computing devices 924. In the depicted example, an initial social connection is established between two users. As part of this process, the users provide information and answer questions about each other and about their relationship, which the system 906 uses to create and/or improve upon user and relationship profiles that are used by the system 906 (e.g., used by the relationship concierge 916). - In the
first interface 1002, a series of information requests and questions 1006-1012 are posed to the user for his/her new relationship with the user 1004. As indicated by the username for user 1004, usernames can include any of a variety of ASCII characters (including non-alphanumeric characters, such as symbols and operators) as well as icons/emojis/graphics (as indicated by the seashell icon). In the depicted example, the user is prompted to provide the user's desired prompt frequency for the relationship 1006, the type of relationship 1008, common interests among the users 1010, and types of prompts that the users are interested in 1012. Responses to this information assist in initializing the relationship (1014). - In the
second interface 1016, the user is again presented with a series of information requests and questions 1018-1024. For example, the user is prompted to designate a desired level of concierge involvement in the relationship 1018 (e.g., heavy involvement can cause all interactions to pass through the concierge, meaning no freeform exchange outside of concierge prompts, while minimal involvement can permit many interactions outside of the concierge), whether the concierge should prompt one or both users at a time 1020, the prompt types 1022 that the user is interested in, and the desired minimal delay for users to interact with each other on the wall 1024. With these parameters selected, the concierge can be initialized (1026) and the users can begin socially interacting in the platform (1028). -
FIGS. 11A-B are screenshots of example user interfaces presented on a mobile computing device 1100 for interacting with other users via private walls on a social platform. - Referring to
FIG. 11A, an example home screen interface 1102 provides a list 1112 of the user's friends on the platform along with relationship information for each of the friends. Each of the friends is identified by a username 1106, a relationship status icon 1104 (status of the relationship between the user of the device 1100 and the friend), a relationship rating 1108 (rating of the relationship between the user of the device 1100 and the friend), and information on the last interaction between the users 1110. More stars for the ratings 1108 indicate a stronger relationship, and fewer stars indicate a weaker relationship. Relationship ratings 1108 can be determined based on a variety of factors, such as points for questions and aggregate point summaries over time. The relationships (as identified by the friends 1106) are sorted in the list 1112 in reverse order so that the relationships most in need of attention by the user are seen at the top of the list 1112. The relationships in the list 1112 can be selected to navigate to a private wall for the relationship.
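- A minimal sketch of how the relationship ratings 1108 might be derived from aggregate points and how the list 1112 might be ordered so that relationships most in need of attention appear first is shown below; the point thresholds and sort key are illustrative assumptions.

```python
# Hypothetical sketch of the relationship rating and list ordering shown in FIG. 11A;
# the point thresholds and sort rule are illustrative assumptions.
from typing import Dict, List


def star_rating(points: int) -> int:
    """Map aggregate relationship points to a 1-5 star rating."""
    thresholds = [20, 50, 100, 200]            # assumed point thresholds
    return 1 + sum(points >= t for t in thresholds)


def order_friend_list(friends: List[Dict]) -> List[Dict]:
    """Sort so the relationships most in need of attention appear at the top."""
    return sorted(friends, key=lambda f: (star_rating(f["points"]), -f["days_since_interaction"]))


if __name__ == "__main__":
    friends = [
        {"name": "Anne", "points": 180, "days_since_interaction": 2},
        {"name": "David", "points": 30, "days_since_interaction": 16},
    ]
    print([f["name"] for f in order_friend_list(friends)])   # ['David', 'Anne']
```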
- Referring to FIG. 11B, a private wall for a relationship with user 1142 is presented in the interface 1114, which includes a variety of different options 1118-1132 and 1140 for the user to interact with the other user 1142. The private wall 1114 also includes a chronological view of recent interactions between the users, which includes unprompted messages 1134 and 1136, as well as a prompt 1138 that has been provided to the user. As indicated by the timestamps of the messages 1134-1136 and the prompt 1138, over two weeks had elapsed since the users had interacted, which is what likely triggered the relationship concierge to provide the prompt 1138 to continue communication between the users. The user can respond to the prompt 1138 through the interface 1140 and/or through one or more of the interaction options 1118-1132. The user can also elect to ignore the prompt 1138 and/or to indicate dislike of the prompt 1138. - The interaction options 1118-1132 include an interactive wellness feature 1118 (see
FIGS. 17A-H), a prompt feature 1120 to request another or different prompt (e.g., step outside of the current set of prompts from the relationship concierge), questions 1122, an interactive drawing feature 1124, a picture sharing feature 1126, a mediagram creation and sharing feature 1128, a photo/video sharing feature 1130, and a games feature 1132. - The
interface 1114 also includes relationship status information 1144 (rating for the relationship with user 1142) and options 1146 to modify settings for the relationship. -
FIGS. 12A-H are screenshots of an example process flow for a relationship concierge facilitating and improving social interactions among users via private walls on a social platform. - Referring to
FIG. 12A, the user device 1200 for Anne includes the interface 1204 for a private wall between Anne and David on Monday at 3:00, which is when Anne receives a prompt 1206 from the relationship concierge. The prompt 1206 is accompanied by a field 1208 through which Anne can respond to the prompt. - Referring to
FIG. 12B, at the same time as Anne receives the prompt 1206, the user device 1202 for David does not present any prompts in the interface 1205, including not presenting the prompt 1206 just given to Anne. This scenario represents an initial state where no prior prompt history is visible in David's view. - Referring to
FIG. 12C, a few minutes later at 3:05 on Monday, Anne enters and submits an answer 1208 to the prompt 1206 given by the relationship concierge, as indicated by the sent status 1210 for the prompt and answer (1206-1208). - Referring to
FIG. 12D, at 3:05 on Monday, David receives Anne's message 1212-1216. The question (or directive) given by the relationship concierge is visible to David (1214), in addition to the content of her reply (1216). Although not depicted, the prompt response can be an icon that, once selected, opens up like a gift with animation. Such animation features could additionally be used as ways to present electronic gifts and/or donations to other users and/or organizations (e.g., charitable donations to disaster victims). - Referring to
FIG. 12E, the following day at 1:30, Anne has not received a new prompt from the relationship concierge, and any new prompts given to David are not visible in the interface 1204. - Referring to
FIG. 12F, on Tuesday at 1:30, David receives a new prompt 1218 from the relationship concierge, which includes a field 1220 to provide a response. The prior history of sent and received prompts is visible in the interface 1205, but may be removed after a default or user-set amount of time. - Referring to
FIG. 12G, at 1:40 on Tuesday, Anne receives a message 1224-1228 from David which displays both the prompt 1226 given to David and the content of his reply 1228. - Referring to
FIG. 12H, at 1:40 on Tuesday, David has replied 1220 to the latest prompt 1218 given by the relationship concierge, as indicated by the sent status 1222. -
FIG. 13A is a screenshot of an example user interface 1300 on a mobile computing device for viewing a user's friends and the corresponding interaction delays until another relationship concierge prompt is expected. The user interface 1300 presents a list of friends across a number of different categories, including a "Msg" column 1302 that indicates whether the user of the device presenting the interface 1300 has a message waiting from one of his/her friends. Such messages can include, for example, any type of prompt, a mediagram (personalized music video message), etc. The unopened gift icons 1308 indicate that the user has not viewed the waiting message yet. The opened gift icons 1310 indicate that the user has already viewed all messages sent from the corresponding friend. - The "Name"
column 1304 displays the name of the friend(s) with whom you (the user) are having a private conversation. The "Time Until Next" column 1306 indicates an amount of time, which could be either an approximate window of time or a precise amount of time, until the next prompt will arrive for that relationship from the relationship concierge. The "Time Until Next" column 1306 could be used to represent additional and/or alternative relationship metrics. For example, the "Time Until Next" column 1306 could indicate timers (bars) representing how much time has passed since the user last communicated with a given contact. In such a scenario, the longest bar would be shown on top of the list to highlight the relationship in greatest need of attention. Color distinctions in the timer bars can indicate an "overdue" state where too much time has passed (according to default values or user-set values).
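- A minimal sketch of rendering such timer bars, including an "overdue" state when too much time has passed, is shown below; the bar width and the 14-day overdue threshold are illustrative assumptions rather than the interface's actual default values.

```python
# Hypothetical sketch of the timer-bar rendering for the "Time Until Next" column;
# the bar scale and overdue threshold are illustrative assumptions.
def timer_bar(days_since_last_contact: float, overdue_after_days: float = 14.0, width: int = 10) -> str:
    """Render a simple text bar; longer bars indicate relationships needing more attention."""
    filled = min(width, int(round(width * days_since_last_contact / overdue_after_days)))
    bar = "#" * filled + "-" * (width - filled)
    state = "OVERDUE" if days_since_last_contact > overdue_after_days else "ok"
    return f"[{bar}] {state}"


if __name__ == "__main__":
    for name, days in sorted({"Anne": 3, "David": 16}.items(), key=lambda kv: -kv[1]):
        print(f"{name:<6} {timer_bar(days)}")
    # David  [##########] OVERDUE
    # Anne   [##--------] ok
```
-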
FIG. 14A is a conceptual diagram of an example personal concierge system and algorithm 1400 for facilitating and improving user relationships on a social network. The relationship concierge 1402 is programmed logic that is designed to interpret and understand the nature of a relationship between two people and the tendencies and behaviors of each individual, and to formulate a forward-looking program of prompts based on those factors. The relationship concierge 1402 is largely algorithm-based, using historical user data and user inputs to determine the content of the prompts given to a user. The relationship concierge 1402 also incorporates one or more AI techniques and platforms to allow for decisions to be made that are not pre-programmed into the algorithm or pre-determined. The relationship concierge 1402 can be allowed to make choices for the users based on emerging patterns of usage and user input. - The
relationship concierge 1402 can use a variety of different data sources 1404-1408 to determine and provide prompts to users. For example, the relationship concierge 1402 can use historical user behavior data 1404, which can include, for example, answers to prompts, how long the user takes to respond, the times of day the user responds, how quickly they respond to certain categories of prompts, how often they dismiss (reject) certain types of prompts, and/or other relevant data representing historical user behavior. - In another example, the
relationship concierge 1402 can use user adjustment data 1406 which indicates changes in relationships over time. Users are provided with options to directly supply feedback and information on the nature of their relationships with others. For example, the user could indicate that the relationship for a contact is intimate/romantic in nature, and further that there has been a recent breakup in the relationship, and further that they either want to re-kindle the relationship or to ease it into a platonic relationship. In another example, the user could indicate that the contact is an old friend that they would simply like to stay in touch with but are not interested in delving into deep conversations with. Other forms of ongoing direct user input can be indicated via in-line feedback options within the ongoing conversation, such as indicating that they liked or disliked a given prompt type, or that they would like to speed up or slow down the rate at which prompts are supplied. - In another example, the
relationship concierge 1402 can use one or more default programs of prompts 1408 based on the standard parameters of input (described above). In addition to the one or more default programs, specialized sets of prompts can be centered around a theme that can be chosen by the users. These special sets of prompts, if chosen, can be weighted above the standard parameters. A set of special prompts centered on a theme can have a discrete quantity and a start/end date, not necessarily known to the users. Examples of the special themed programs that can be delivered by the Relationship Concierge include a series of prompts that aim, for example, to reconcile political viewpoint differences, patch up a failing relationship, deeply explore the memories and life of an individual (e.g., a grandmother and granddaughter relationship), or explore a specific subject such as philosophy, religious beliefs, and/or lighter subjects such as movies, art, music, sports, etc. -
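The combination of a default program, a user-chosen themed program weighted above the standard parameters, and dismissal history can be sketched as a weighted selection. This is a minimal illustration under assumed data shapes; the prompt pools, the 2x theme weight, and the function names are hypothetical, not disclosed values.

```python
import random
from typing import Dict, List

# Illustrative prompt pools; categories and text are placeholders for this sketch.
DEFAULT_PROMPTS = {"icebreaker": ["What made you smile today?"],
                   "memory": ["What is your favorite shared memory?"]}
THEMED_PROMPTS = {"movies": ["What movie changed your mind about something?"]}

def choose_prompt(default_pool: Dict[str, List[str]],
                  themed_pool: Dict[str, List[str]],
                  dismissal_rate: Dict[str, float],
                  theme_weight: float = 2.0) -> str:
    """Pick a prompt category, weighting a user-chosen themed program above the
    default program and down-weighting categories the user frequently dismisses."""
    weights = {}
    for category in default_pool:
        weights[category] = 1.0 * (1.0 - dismissal_rate.get(category, 0.0))
    for category in themed_pool:
        # Special themed sets, if chosen, are weighted above the standard parameters.
        weights[category] = theme_weight * (1.0 - dismissal_rate.get(category, 0.0))
    categories = list(weights)
    picked = random.choices(categories, weights=[weights[c] for c in categories], k=1)[0]
    pool = themed_pool.get(picked) or default_pool[picked]
    return random.choice(pool)

if __name__ == "__main__":
    print(choose_prompt(DEFAULT_PROMPTS, THEMED_PROMPTS,
                        dismissal_rate={"icebreaker": 0.6, "memory": 0.1}))
```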
FIG. 14B is a diagram of an example system 1450 to vary content that is selected for presentation to users. Content delivery to users can seek to balance user-reported desirability with natural variation to avoid both extremes of content irrelevance and overly predictable consistency. The example system 1450 can be implemented as part of the example platforms/systems described above. The system 1450 can effectively utilize a feedback loop to provide content and, based on user feedback, to refine the selection of future content for delivery to users. - Relationship profiles and history (1452) can be used to select content (1454) for presentation to users. The relationship profiles can include data that describes relevant matching characteristics learned by various methods, including self-reported data contained in individual profiles, data gathered from algorithms and analytics, and user-reported data about the specific relationship. A history of content delivered to the relationship can be stored in order to track, regulate, and plan the flow of content. The content can be, for example, taxonomically organized content that is stored in the system and then queried and retrieved based on relevance to the specific relationship.
- Topical interests (1456) can be used to refine the content selection (e.g., pare down a large set of content to a smaller subset of content). Topical interests are qualitative measures of both implicit relevance (e.g., some content is of more general relevance to a married couple than to friends or co-workers) and explicit relevance (e.g., user-supplied data indicating a shared interest in baseball or the Boston Red Sox). The more specifically the domain of interest is defined, the greater the value of the topical interest. The classification of content is stored as part of the topical taxonomy described in a later section.
- Intensity (1458) and frequency (1460) parameters can be used to further refine the content selection. Intensity generically refers to where the nature of the content belongs in the spectrum from casual to intimate (or personal). The intensity of any given piece of content is an attribute applied and stored in the metadata taxonomy. Frequency can be, for example, a quasi-mutually agreed-upon value between two users regulating how often the system will deliver content. For example, if one user sets the initial desired frequency at daily and the other sets the desired frequency at weekly, then the system may set the starting point for the actual delivery frequency at every three days, which is thus a de facto negotiated interaction.
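One plausible way to reach the “daily plus weekly blends to roughly every three days” starting point is a geometric-mean compromise, and an intensity band can be applied as a simple tolerance check. The disclosure does not specify these formulas; the rules, scales, and names below are assumptions of this sketch.

```python
import math

def negotiated_frequency_days(user_a_days: float, user_b_days: float) -> int:
    """Blend two users' desired delivery intervals into a single starting cadence.
    A geometric mean is one plausible compromise: daily (1) and weekly (7) blend to
    sqrt(7) ~= 2.6, i.e., roughly every three days, matching the example above."""
    return max(1, round(math.sqrt(user_a_days * user_b_days)))

def within_intensity_band(content_intensity: float, relationship_intensity: float,
                          tolerance: float = 0.2) -> bool:
    """Keep content whose casual-to-intimate score (assumed 0..1) sits near the
    relationship's current intensity."""
    return abs(content_intensity - relationship_intensity) <= tolerance

if __name__ == "__main__":
    print(negotiated_frequency_days(1, 7))   # -> 3 (every three days)
    print(within_intensity_band(0.4, 0.5))   # -> True
```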
- The selected content can then be delivered via one or more modules (1462) (e.g.,
FIGS. 11C-F). Modules are how content is manifested in the user interface. Each module type is a specific combination of media formatting (e.g., text only, text with image, etc.), interactivity characteristics, and categorical purpose (e.g., a reminder, a question prompt, an element of a thematic sequence (see programs), or a promoted ad). - User feedback can be obtained (1464) from the UI and used to further refine the relationship profile and history (1452). For example, as content is delivered, the opportunity for users to provide quick “one-click” feedback will be presented. Occasionally, buttons that allow users to tap Less Often/More Often/No Change or More Like This/Less Like This will be attached to a piece of system-delivered content. This feedback is used to adjust the relationship profile data accordingly.
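The one-click feedback loop can be sketched as a small profile-update step. The profile attributes, adjustment factors, and signal names below are assumptions for illustration, not the described data model.

```python
from dataclasses import dataclass, field

# Illustrative profile state; attribute names are assumptions of this sketch.
@dataclass
class RelationshipProfile:
    delivery_interval_days: float = 3.0
    topic_affinity: dict = field(default_factory=dict)

def apply_feedback(profile: RelationshipProfile, signal: str, topic: str) -> RelationshipProfile:
    """Fold a one-click feedback signal back into the relationship profile."""
    if signal == "more_often":
        profile.delivery_interval_days = max(1.0, profile.delivery_interval_days * 0.8)
    elif signal == "less_often":
        profile.delivery_interval_days = min(30.0, profile.delivery_interval_days * 1.25)
    elif signal == "more_like_this":
        profile.topic_affinity[topic] = profile.topic_affinity.get(topic, 0.0) + 0.1
    elif signal == "less_like_this":
        profile.topic_affinity[topic] = profile.topic_affinity.get(topic, 0.0) - 0.1
    # "no_change" falls through and leaves the profile as-is.
    return profile

if __name__ == "__main__":
    p = RelationshipProfile()
    apply_feedback(p, "more_often", topic="baseball")
    apply_feedback(p, "more_like_this", topic="baseball")
    print(p)
```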
-
FIGS. 15A-D are screenshots of a relationship concierge being applied to other social platforms providing predominantly open communication among broad groups of users, such as FACEBOOK, TWITTER, LINKEDIN, and/or other social platforms. - Referring to
FIG. 15A, an example user interface 1500 on a social platform providing predominantly open communication among users is depicted. The interface 1500 can be a news feed, for example. The interface 1500 includes a post field 1502 through which the user can create and submit a post for distribution across a broad group of users (e.g., friends, fans, followers, public). The interface 1500 includes a relationship concierge prompt 1504 that is presented to the user. The prompt 1504 is identified as being from the relationship concierge and as presently being visible only to the user (1506). The prompt 1504 indicates that it pertains to the User A (the user of the interface 1500) not having interacted with User B in over one month and suggests a number of options (1508). The example options include a first option 1510 to like or comment on a recently popular post 1512 of User B. At least a portion of the post 1512 is presented in the interface along with interactive features through which the User A can like or comment on the post from within the feed. This interaction with the post 1512, as facilitated by the relationship concierge, will be viewable by and potentially broadcast to a broader audience than just User A and User B. - A
second option 1518 is to publicly post a message on User B's wall. Again, this option includes an interactive feature 1520 to perform the action from within the feed. Here again, this interaction will be viewable by and potentially broadcast to a broader audience than just User A and User B. - A
third option 1522 is to answer a question for User B regarding User A's favorite movie over the past year. Again, this option includes an interactive feature 1524 to perform the action from within the feed. This option, however, provides a select box 1526 through which the User A can designate whether the answer to this question should be delivered as a private message (not initially viewable beyond User A and User B, unless forwarded or shared with other users) or posted to a broader audience. In this example, the User A enters an answer to the question in the third option 1522 and does not select the box 1526. - Referring to
FIG. 15B, an interface 1528 for User B on the social platform presents a post 1532 for the relationship concierge prompt 1504 and answer 1524 from User A. The post 1532 includes information identifying that the User A answered a question posed by the relationship concierge for User B, the question and answer 1536, and features 1538-1540 through which the User A, the User B, and other users can interact with the post 1532. The news feed 1528 for the User B also includes a field for the User B to create a post 1530 and a post from another user 1542. - Referring to
FIG. 15C, the interface 1500 for the User A on the social platform is again presented with the prompt 1504 from the relationship concierge. However, in this example the user selects the box 1526 to deliver the answer 1524 to the question 1522 privately to User B. - Referring to
FIG. 15D, a private messaging interface 1550 (e.g., FACEBOOK MESSENGER) for the User B on the social platform is presented. The interface 1550 depicts the private message 1552 from the User A as well as the question and answer 1554 to the prompt 1504 from the relationship concierge. The private message 1552 is presented among other private and group messages 1556-1566 for the User B on the social platform. Unlike the open interfaces of the social platform described above, the private messaging interface 1550 does not broadcast the message 1552 more broadly than just the relationship between the User A and the User B. -
FIG. 16 is a diagram depicting creation and use of a private group wall 1600 on a social platform to improve and enhance meaningful social interactions. The example group wall 1600 has multiple users 1602 who are members of the group and who are permitted to contribute to the wall 1600. The group organizer is identified at the top of the list with the notation “organizer.” The organizing user can designate a variety of parameters for the group, including who is invited/permitted to be a member, permissions for other members to add new members (e.g., friends of original members are able to be added), time limits on the existence of the group wall (e.g., a 2-month expiration date), roles for different group members to play within the group (e.g., rock band roles—band member, groupie, fan), and/or other features. - The group wall can be initiated with a conversation starter, which can be facilitated by the relationship concierge. The conversation starter can include, for example, pictures, drawings, memes, videos, news stories, questions, etc. If the group organizer needs help finding a topic of common interest, they can use the relationship concierge 1604 to create a custom list 1604 of common interests (which can automatically be identified from user profile analysis) and can choose a
topic 1606 most of the participants have in common. The selected topic 1606 can be used to insert initial content 1608 into the wall 1600 that pertains to the selected topic 1606. In the depicted example, the initial content 1608 includes news articles relevant to the topic 1606. -
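Choosing the topic most of the participants have in common can be illustrated as a simple intersection-and-count over member interest sets. The data shapes and names in this sketch are assumptions, not part of the described profile analysis.

```python
from collections import Counter
from typing import Dict, List, Set, Tuple

def rank_common_topics(member_interests: Dict[str, Set[str]]) -> List[Tuple[str, int]]:
    """Rank candidate conversation-starter topics by how many group members share them."""
    counts = Counter()
    for interests in member_interests.values():
        counts.update(interests)
    # Topics shared by the most participants come first.
    return counts.most_common()

if __name__ == "__main__":
    members = {
        "organizer": {"hiking", "jazz", "cooking"},
        "friend_1": {"jazz", "cooking"},
        "friend_2": {"cooking", "soccer"},
    }
    ranked = rank_common_topics(members)
    print(ranked)          # cooking is shared by all three members
    topic, _ = ranked[0]
    print(f"Seed the wall with recent articles about {topic}")
```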
FIGS. 17A-H are screenshots of an example user interface 1702 on a computing device 1700 for users to express and interact with others regarding their emotional well-being. - Referring to
FIG. 17A, the interface 1702 is a visual aid for users to better understand their feelings and improve their mental states. The three corners of the interface 1702 (a triangle) represent emotional extremes. The top corner (yellow/white) is selfless compassion, a bright ideal to strive for. The left corner stands for passion and the right corner represents depression. The center circle 1704 represents normality, the sphere of daily emotions. - Users can use the
interface 1702 to assist in expressing and improving their emotional states, to improve their moods, and to gain peace of mind. A movable pin icon 1706 can be placed at different positions throughout the interface 1702 by the user to represent his/her current emotional state. The user can adjust the positioning of the icon 1706 as frequently or infrequently as he/she wants (e.g., hourly, daily, weekly). Increased frequency of use can assist users in understanding and tracking the change in their emotional state over time, and can help them work to improve their moods. For example, the three corners of the interface 1702 represent calm, understanding, enlightened, generous, compassionate (top corner 1710, which can be colored yellow-white); angry, agitated, irritated, passionate (left corner 1718, which can be colored red); and sad, depressed, down, bored, dispassionate (right corner 1714, which can be colored blue). The three sides of the interface 1702 represent optimistic, enthusiastic, upbeat, joyful (left side 1708, which can be colored orange); friendly, sociable, agreeable, cool (right side 1712, which can be colored green); and anxious, upset, worried, fearful (bottom 1716, which can be colored purple). The top half of the interface 1702 can represent positive, healthy emotions, whereas the bottom half represents negative, less-healthy emotions. - Three different walls on the
social platform can present the interface 1702 on the device 1700—a private wall that is only accessible to the user of the device 1700 and the relationship concierge (see FIG. 17B), a shared private wall for a relationship between two users (see FIG. 17C), and a private group wall for more than two users (see FIG. 17D). - Referring to
FIG. 17B, a private wall for the interface 1702 that is only accessible to the user of the device 1700 and the relationship concierge is depicted. The pin 1706 indicates the user's current mood. The pin 1706 can be positioned, for example, by the user with the three sliders 1720-1724—Calm/Anxious, Friendly/Angry, Optimistic/Depressed (one possible slider-to-pin mapping is sketched after the discussion of FIG. 17H below). The user may also choose to include input from the relationship concierge and/or other users. - In the depicted example, the user has moved the
pin 1706 in response to a sad event occurring (e.g., the user's pet just died). The user does this by moving their Optimistic/Depressed slider 1724 to the right into the blue area. This action can update the user's interface in other shared walls and/or group walls for the user, for example, in response to the user providing permission for it to be shared in that manner. Sharing the interface 1702 can be a way for users to share their emotional state with others when it may otherwise be difficult to express their emotions. In this example, other users who see the user's current state in the interface may be prompted to respond by sending appropriate mediagrams to the user to help improve his/her mood. - Referring to
FIG. 17C, a shared wall is depicted in which the pin 1706 for the user of the device 1700 is superimposed on the same interface 1702 as another pin 1726 for the other user of the shared wall. - Referring to
FIG. 17D, a group wall is depicted in which the pin 1706 for the user of the device 1700 is superimposed on the same interface 1702 as other pins 1726-1730 for the other users who are members of the group wall. The current mood of every member in the group can be displayed on the interface 1702. Different group walls can address feelings about different topics, for example. Members of the group, including the user of the device 1700, may choose to use mediagrams or other interactive/social features to interact with other group members to improve their moods. Users who are able to successfully improve the mood of other users through various actions on the social platform can receive positive relationship points, which can factor into relationship ratings. - Referring to
FIG. 17E, different mood goals can be designated for the corners of the interface 1702. If, for example, the corners represent Compassionate (1732), Impassionate (1736), and Dispassionate (1734), the user can decide to meditate, reflect on relationships, and/or reach out to other members in order to move their icon (1706) upwards toward the Compassionate (1732) corner. - Referring to
FIG. 17F, the interface 1702 can be used to represent different strategy vectors 1738-1742. For example, users can imagine altering their moods along three Strategy Vectors—Engaged/Detached (1740), Caring/Selfish (1738), and Calm/Agitated (1742). Activities for improving mental states with these strategy vectors can include, for example, interacting more with other users, helping to solve another user's problems, and self-help (e.g., meditation, exercise, listening to music, etc.). - Referring to
FIG. 17G, the interface 1702 can be used to represent conflict resolution goals 1744-1750, such as Ultimatum (1746), Surrender (1748), Compromise (1750), and Contentment (1744). Users can use the interface 1702 to resolve conflict by first choosing an approach—Ultimatum, Surrender, or Compromise—and then adopting a strategy that will lead to Contentment. - Referring to
FIG. 17H, the interface 1702 can be used to assist users in coping with grief. For example, a user can follow their progress through the 5 (suggested) stages of grief (Disbelief 1752, Anger 1754, Bargaining 1756, Depression 1758, and Acceptance 1760), eventually improving their moods through understanding 1762. -
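As referenced with respect to FIG. 17B, one possible way to place the pin 1706 inside the triangular interface 1702 from the three slider values is a weighted blend of the corner coordinates. The disclosure does not specify a formula; the slider ranges, corner coordinates, and weighting heuristic below are assumptions of this minimal sketch.

```python
def pin_position(calm_anxious: float, friendly_angry: float,
                 optimistic_depressed: float) -> tuple:
    """Each slider value is assumed to be in [0, 1], where 0 is the calmer /
    friendlier / more optimistic end. The values become weights on the corners."""
    # Assumed corner coordinates of the triangle (top, lower-left, lower-right).
    top = (0.5, 1.0)     # selfless compassion
    left = (0.0, 0.0)    # passion / anger
    right = (1.0, 0.0)   # depression / sadness

    # Heuristic weights: calm pulls toward the top, anger toward the left,
    # depression toward the right.
    w_top = 1.0 - calm_anxious
    w_left = friendly_angry
    w_right = optimistic_depressed
    total = w_top + w_left + w_right or 1.0   # avoid division by zero

    x = (w_top * top[0] + w_left * left[0] + w_right * right[0]) / total
    y = (w_top * top[1] + w_left * left[1] + w_right * right[1]) / total
    return x, y

if __name__ == "__main__":
    # A user who is fairly calm and friendly but feeling down drifts toward the right corner.
    print(pin_position(calm_anxious=0.2, friendly_angry=0.1, optimistic_depressed=0.8))
```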
FIGS. 18A-B are flowcharts of example techniques 1800 and 1850 for determining and transmitting relationship concierge prompts to users on a social platform. - Referring to
FIG. 18A, the example technique 1800 can be for determining and transmitting prompts to users who share a private wall corresponding to their relationship as part of a relationship concierge. The user profiles for the users sharing the wall and the relationship profile between the users can be accessed (1802). Historical interactions between the users via the private wall can be analyzed (1804). The user profiles, the relationship profile, and/or the historical interactions between the users can be used to determine a current state for the relationship between the users (1806). Such a relationship state can be, for example, a relationship rating or score that is provided to quantify aspects of a user relationship, such as the quality, closeness, and/or other relationship aspects. The current relationship state can be compared with other relationship states for other relationships that one or both of the users have (1808). For example, a comparison can be made to determine whether the current relationship under evaluation is better than, the same as, or worse than other relationships. The trend of the relationship over time can also be determined by evaluating a time sequence of relationship states for the users (1810). For example, an assessment can be performed to determine whether the relationship is improving (i.e., the users are becoming closer), staying the same, or declining (i.e., the users are becoming more distant). Evaluation of current and trending wellness states that the users have self-reported (e.g., via the interface 1702) can also be performed (1813). For example, the emotional state of each user may be affecting the relationship between the users and may provide insight into corrective actions, via prompts, that could be taken to improve both the user's wellness state and the relationship. - Based on one or more of the relationship factors determined in 1802-1813, a determination can be made as to whether to provide a prompt to one or both of the users in the relationship (1814). If no prompts are determined, then the
technique 1800 can repeat. If prompts are determined, then one or both of the users can be identified to receive the personal concierge prompt based on one or more of the factors determined in 1802-1813 (1816). For example, if one of the users in the relationship has a depressed wellness state and the other user in the relationship has a positive wellness state, the user with the positive wellness state can be selected to receive the prompt to interact with the depressed user (in an attempt to improve the depressed user's wellness state); a selection heuristic of this kind is sketched after the discussion of FIG. 18B below. A determination of the type of prompt that should be provided to the selected user can be made (1818). Extending the previous example, in the case of a depressed user, the prompt may be for the positive user to provide something more impactful to the depressed user, like a mediagram. Once the user to receive the prompt has been selected and the prompt type has been identified, the prompt can be transmitted (1820). - Referring to
FIG. 18B, the example technique 1850 can be for determining and transmitting prompts to a personal wall for the user and the personal concierge alone (no other users permitted on the private wall). The user's profile can be accessed (1852) and can be used to determine whether any upcoming events exist for the user or the user's friends (1854). At appropriate times, reminders for such upcoming events can be provided on the personal wall for the user (1856). A determination can be made as to whether any user-set reminders are upcoming (1858). At appropriate times, reminders for such user-set reminders can be provided on the personal wall for the user (1860). A determination can be made as to whether any current events or news related to user interests have come out recently that the user is not aware of (1862). If such current events or news do exist, then notifications can be provided by the personal concierge on the user's personal wall for those current events and/or news. -
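The recipient-selection step (1816) of the technique 1800 can be illustrated as follows. This is a minimal sketch under assumed data shapes: the wellness scale of [-1, 1], the thresholds, and the prompt types are assumptions for illustration, not disclosed parameters.

```python
from typing import Optional, Tuple

def choose_prompt_recipient(wellness: dict) -> Tuple[Optional[str], str]:
    """wellness maps exactly two user ids to self-reported scores in [-1.0, 1.0]."""
    (user_a, score_a), (user_b, score_b) = wellness.items()
    if score_a < -0.3 and score_b > 0.3:
        return user_b, "mediagram"      # positive user nudged to send something impactful
    if score_b < -0.3 and score_a > 0.3:
        return user_a, "mediagram"
    if abs(score_a - score_b) < 0.2:
        return None, "no_prompt"        # neither user clearly needs a nudge right now
    # Otherwise prompt the user in the better state with a lighter interaction.
    recipient = user_a if score_a > score_b else user_b
    return recipient, "question_prompt"

if __name__ == "__main__":
    print(choose_prompt_recipient({"user_a": -0.6, "user_b": 0.5}))   # ('user_b', 'mediagram')
```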
FIG. 19 is a flowchart of an example technique 1900 for determining and transmitting delays between interactions on a social platform. For a particular relationship between two users, the user profiles and the relationship profile for the users can be accessed (1902) and, along with historical data for the users and the relationship, can be used to determine a historical cadence of interactions between the users (1904). The current type of interaction that would be delayed can be identified (1906), a current status of the relationship between the users can be determined (1908), and the current relationship trend for the users can be determined (1910). Based on one or more of the factors determined in 1902-1910, a determination can be made as to whether or not a response to the current interactions between the users should be delayed (1912). For example, if the relationship is currently strong and the current type of interaction is a mediagram, then a delay in the response may be appropriate. However, if the relationship is currently weaker and is trending in decline, then either no delay or a minimal delay may be instructed. Other ways and outcomes for determining whether a delay is appropriate are also possible. - If no delay is needed, then instructions can be provided to permit the user of the client device to respond without a delay (1915). If a delay is determined to be needed, then the delay length can be determined based on one or more of the factors determined in 1902-1910 (1914). For example, if the user relationship is trending upward and the users typically have a lengthier cadence of interactions, then a longer delay can be determined. In another example, if the relationship is trending downward, then a shorter delay may be determined. Once the delay and the delay length have been determined, instructions for instituting the delay on a client device can be transmitted (1916).
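The delay decision (1912-1914) can be sketched as a simple rule over the factors described above. The normalization of strength and trend to [-1, 1], the fraction of the historical cadence, and the mediagram multiplier are assumptions of this illustration, not values from the disclosure.

```python
from datetime import timedelta

def response_delay(relationship_strength: float, trend: float,
                   typical_cadence_hours: float, interaction_type: str) -> timedelta:
    """relationship_strength and trend are assumed to be normalized to [-1, 1];
    typical_cadence_hours is the historical average gap between interactions."""
    if relationship_strength < 0.0 or trend < -0.2:
        return timedelta(0)                      # weaker or declining relationship: no delay
    base = typical_cadence_hours * 0.25          # a fraction of the usual cadence
    if interaction_type == "mediagram":
        base *= 1.5                              # more significant interactions can wait longer
    base *= 1.0 + max(0.0, trend)                # an upward trend stretches the delay further
    return timedelta(hours=base)

if __name__ == "__main__":
    print(response_delay(0.7, trend=0.4, typical_cadence_hours=24, interaction_type="mediagram"))
    print(response_delay(-0.2, trend=-0.5, typical_cadence_hours=24, interaction_type="text"))
```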
-
FIG. 20 is a flowchart of an example technique 2000 for determining relationship ratings on a social platform. For a particular relationship between two users, the user profiles and the relationship profile for the users can be accessed (2002) and the historical interactions between the users can be accessed (2004). Relationship points, which can be at least one of the metrics by which relationships are rated, can be allocated for each of the interactions (2006). - Allocated points can then be weighted more heavily for interactions that indicate relationship strength (e.g., smaller time gaps between interactions, improved wellness evaluations following interactions, more significant interactions (e.g., mediagrams sent frequently)), and weighted less for interactions that indicate relationship weakness (e.g., longer time gaps between interactions, decreased or flat wellness evaluations following interactions, less significant interactions). For example, allocated points can be weighted based on time intervals between interactions (2008), and allocated points can be weighted based on correlations between wellness ratings and interactions (2010). Other weighting schemes are also possible.
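The weighting steps (2008, 2010) can be illustrated with a small scoring function. The base point values, the decay constant, and the wellness bonus are assumptions of this sketch rather than disclosed parameters.

```python
import math

# Assumed base points per interaction type; more significant interactions score higher.
BASE_POINTS = {"text": 1.0, "comment": 1.5, "mediagram": 5.0}

def weighted_points(interaction_type: str, hours_since_previous: float,
                    wellness_delta: float) -> float:
    """Weight an interaction's points up for short gaps and improved wellness,
    and down for long gaps or flat/decreased wellness."""
    base = BASE_POINTS.get(interaction_type, 1.0)
    gap_weight = math.exp(-hours_since_previous / 72.0)       # decays over roughly 3 days
    wellness_weight = 1.0 + max(0.0, wellness_delta)          # only improvements add weight
    return base * (0.5 + gap_weight) * wellness_weight

if __name__ == "__main__":
    print(weighted_points("mediagram", hours_since_previous=12, wellness_delta=0.3))
    print(weighted_points("text", hours_since_previous=240, wellness_delta=0.0))
```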
- The trend of weighted point allocations over time can be determined by evaluating a time series of weighted points for the relationship (2012). If the relationship is trending toward improvement—meaning that the weighted point allocations generally increase over time—then additional positive trend points can be awarded to the relationship (2014).
- Weighted points can be aggregated (2016) and used to determine a relationship rating (2018). For example, the aggregate weighted points can be evaluated over the time period within which they occur to determine one or more normalized statistics for the relationship (e.g., average weighted points per time unit (e.g., day, week, month), median point value, standard deviation of point values). The relationship rating can be output and used to infer the state of the relationship (2020).
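The trend bonus (2012-2014) and the aggregation into normalized statistics (2016-2020) can be sketched as follows. The split-half trend test, the 0.1 bonus factor, and the weekly bucketing are assumptions chosen for illustration only.

```python
from statistics import mean, median

def relationship_rating(weekly_weighted_points: list) -> dict:
    """weekly_weighted_points is a non-empty time series of weighted points, oldest first."""
    trend_bonus = 0.0
    if len(weekly_weighted_points) >= 2:
        halfway = len(weekly_weighted_points) // 2
        early, late = weekly_weighted_points[:halfway], weekly_weighted_points[halfway:]
        if mean(late) > mean(early):
            # Relationship trending toward improvement earns positive trend points (2014).
            trend_bonus = 0.1 * (mean(late) - mean(early))
    aggregate = sum(weekly_weighted_points) + trend_bonus
    return {
        "aggregate": aggregate,
        "avg_per_week": aggregate / len(weekly_weighted_points),
        "median_week": median(weekly_weighted_points),
    }

if __name__ == "__main__":
    print(relationship_rating([3.0, 4.5, 6.0, 8.0]))
```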
-
FIGS. 21A-B are flowcharts of example techniques 2100 and 2150 for creating and managing group walls on a social platform. - Referring to
FIG. 21A, the technique 2100 is one in which user-initiated group creation takes place. A user selects an option to create a group wall (2102) and the user (now the group creator) designates users to be a part of the group (2104). A prompt can be determined for the group based on the users in the group (2106). For example, a prompt to initialize social interactions on the group wall can be determined based on interests of the users in the group. The prompt can be provided to one, some, or all of the users in the group (2108). In response to injecting the prompt into the group wall, users can transmit responses and other interactions that are inserted into the group wall as well (2110). - The group wall can include self-policing features by which group members can flag inappropriate content and/or inappropriate members for the group. Flagged content can be provided to the creator and/or other users in the group for review and possible deletion (2112). Similarly, flagged users can be provided to the creator and/or other users in the group for review and possible removal from the group. Suggestions for additional or new users to be added to the group can also be provided to the creator and/or other users in the group for approval (2114). The steps 2106-2114 can repeat for a threshold period of time, after which the group wall can automatically end (2116).
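The self-policing and automatic-expiration behavior of the user-created group wall can be sketched as a small data structure. The field names, the review rule, and the 60-day example are assumptions for illustration, not the described implementation.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta

@dataclass
class GroupWall:
    creator: str
    members: set
    expires_at: datetime
    flagged_posts: dict = field(default_factory=dict)   # post_id -> set of flagging members

    def flag_post(self, post_id: str, member: str) -> None:
        """Any member can flag inappropriate content for the creator to review."""
        self.flagged_posts.setdefault(post_id, set()).add(member)

    def posts_for_review(self, min_flags: int = 1) -> list:
        """Content flagged by at least min_flags members is surfaced to the creator."""
        return [pid for pid, who in self.flagged_posts.items() if len(who) >= min_flags]

    def is_expired(self, now: datetime) -> bool:
        """Group walls can automatically end after a threshold period of time."""
        return now >= self.expires_at

if __name__ == "__main__":
    wall = GroupWall("organizer", {"organizer", "a", "b"},
                     expires_at=datetime(2018, 3, 1) + timedelta(days=60))
    wall.flag_post("post-42", "a")
    print(wall.posts_for_review())                 # ['post-42']
    print(wall.is_expired(datetime(2018, 6, 1)))   # True
```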
- Referring to
FIG. 21B, the technique 2150 is one in which automatic (non-user-initiated) group creation takes place. To automatically initiate a group, user profiles and relationship profiles can be analyzed to identify users to automatically include in the group (2152). For example, users who share common interests with each other and who have one or more preexisting connections to one or more other people in the pool of candidates for the group can be added to the group (each member of the group does not need a preexisting connection with each other member of the group). For example, a concierge-created group wall can be created with users who are fans of a sports team that recently won a big game or championship. The group can be automatically created and the members of the group can be notified (2154). - The concierge organizing the automatic group can seed the automatically created group wall with a starting prompt (and subsequent follow-on prompts) (2156). Users can interact with each other on the group wall in response to the prompt (2158). One or more users of the group can be designated to moderate the group wall (2160). In some implementations, the group wall does not allow invitation of random or connected additional contacts. After a pre-set expiration time (e.g., 24 hours, 2 days, 7 days), the group can end automatically (2162).
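The candidate-selection step (2152)—shared interest plus at least one preexisting connection inside the candidate pool—can be sketched as follows. The data shapes and example values are assumptions for illustration.

```python
from typing import Dict, Set

def auto_group_members(interest: str,
                       user_interests: Dict[str, Set[str]],
                       connections: Dict[str, Set[str]]) -> Set[str]:
    """Include users who share the seed interest and have at least one preexisting
    connection to another candidate; every member need not be connected to every other."""
    candidates = {u for u, topics in user_interests.items() if interest in topics}
    return {u for u in candidates if connections.get(u, set()) & (candidates - {u})}

if __name__ == "__main__":
    interests = {"ann": {"red sox"}, "bea": {"red sox"}, "cal": {"red sox"}, "dee": {"opera"}}
    friends = {"ann": {"bea"}, "bea": {"ann", "cal"}, "cal": {"bea"}, "dee": {"ann"}}
    print(auto_group_members("red sox", interests, friends))   # {'ann', 'bea', 'cal'}
```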
-
FIG. 22 is a block diagram of example computing devices 2200 and 2250 that may be used to implement the systems and methods described in this document. Computing device 2200 is intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. Computing device 2200 is further intended to represent any other typically non-mobile devices, such as televisions or other electronic devices with one or more processors embedded therein or attached thereto. Computing device 2250 is intended to represent various forms of mobile devices, such as personal digital assistants, cellular telephones, smartphones, and other computing devices. The components shown here, their connections and relationships, and their functions, are meant to be examples only, and are not meant to limit implementations of the inventions described and/or claimed in this document. -
Computing device 2200 includes a processor 2202, memory 2204, a storage device 2206, a high-speed controller 2208 connecting to memory 2204 and high-speed expansion ports 2210, and a low-speed controller 2212 connecting to low-speed bus 2214 and storage device 2206. Each of the components is interconnected using various buses, and may be mounted on a common motherboard or in other manners as appropriate. The processor 2202 can process instructions for execution within the computing device 2200, including instructions stored in the memory 2204 or on the storage device 2206 to display graphical information for a GUI on an external input/output device, such as display 2216 coupled to high-speed controller 2208. In other implementations, multiple processors and/or multiple buses may be used, as appropriate, along with multiple memories and types of memory. Also, multiple computing devices 2200 may be connected, with each device providing portions of the necessary operations (e.g., as a server bank, a group of blade servers, or a multi-processor system). - The
memory 2204 stores information within the computing device 2200. In one implementation, the memory 2204 is a computer-readable medium. In one implementation, the memory 2204 is a volatile memory unit or units. In another implementation, the memory 2204 is a non-volatile memory unit or units. - The
storage device 2206 is capable of providing mass storage for the computing device 2200. In one implementation, the storage device 2206 is a computer-readable medium. In various different implementations, the storage device 2206 may be a floppy disk device, a hard disk device, an optical disk device, or a tape device, a flash memory or other similar solid state memory device, or an array of devices, including devices in a storage area network or other configurations. In one implementation, a computer program product is tangibly embodied in an information carrier. The computer program product contains instructions that, when executed, perform one or more methods, such as those described above. The information carrier is a computer- or machine-readable medium, such as the memory 2204, the storage device 2206, or memory on processor 2202. - The high-
speed controller 2208 manages bandwidth-intensive operations for the computing device 2200, while the low-speed controller 2212 manages lower bandwidth-intensive operations. Such allocation of duties is an example only. In one implementation, the high-speed controller 2208 is coupled to memory 2204, display 2216 (e.g., through a graphics processor or accelerator), and to high-speed expansion ports 2210, which may accept various expansion cards (not shown). In the implementation, low-speed controller 2212 is coupled to storage device 2206 and low-speed bus 2214. The low-speed bus 2214 (e.g., a low-speed expansion port), which may include various communication ports (e.g., USB, Bluetooth, Ethernet, wireless Ethernet), may be coupled to one or more input/output devices, such as a keyboard, a pointing device, a scanner, or a networking device such as a switch or router, e.g., through a network adapter. - The
computing device 2200 may be implemented in a number of different forms, as shown in the figure. For example, it may be implemented as a standard server 2220, or multiple times in a group of such servers. It may also be implemented as part of a rack server system 2224. In addition, it may be implemented in a personal computer such as a laptop computer 2222. Alternatively, components from computing device 2200 may be combined with other components in a mobile device (not shown), such as computing device 2250. Each of such devices may contain one or more of computing devices 2200, 2250, and an entire system may be made up of multiple computing devices 2200, 2250 communicating with each other. -
Computing device 2250 includes a processor 2252, memory 2264, an input/output device such as a display 2254, a communication interface 2266, and a transceiver 2268, among other components. The computing device 2250 may also be provided with a storage device, such as a micro-drive or other device, to provide additional storage. Each of the components is interconnected using various buses, and several of the components may be mounted on a common motherboard or in other manners as appropriate. - The
processor 2252 can process instructions for execution within the computing device 2250, including instructions stored in the memory 2264. The processor may also include separate analog and digital processors. The processor may provide, for example, for coordination of the other components of the computing device 2250, such as control of user interfaces, applications run by computing device 2250, and wireless communication by computing device 2250. -
Processor 2252 may communicate with a user through control interface 2258 and display interface 2256 coupled to a display 2254. The display 2254 may be, for example, a TFT LCD display or an OLED display, or other appropriate display technology. The display interface 2256 may comprise appropriate circuitry for driving the display 2254 to present graphical and other information to a user. The control interface 2258 may receive commands from a user and convert them for submission to the processor 2252. In addition, an external interface 2262 may be provided in communication with processor 2252, so as to enable near area communication of computing device 2250 with other devices. External interface 2262 may provide, for example, for wired communication (e.g., via a docking procedure) or for wireless communication (e.g., via Bluetooth® or other such technologies). - The
memory 2264 stores information within the computing device 2250. In one implementation, the memory 2264 is a computer-readable medium. In one implementation, the memory 2264 is a volatile memory unit or units. In another implementation, the memory 2264 is a non-volatile memory unit or units. Expansion memory 2274 may also be provided and connected to computing device 2250 through expansion interface 2272, which may include, for example, a subscriber identification module (SIM) card interface. Such expansion memory 2274 may provide extra storage space for computing device 2250, or may also store applications or other information for computing device 2250. Specifically, expansion memory 2274 may include instructions to carry out or supplement the processes described above, and may include secure information also. Thus, for example, expansion memory 2274 may be provided as a security module for computing device 2250, and may be programmed with instructions that permit secure use of computing device 2250. In addition, secure applications may be provided via the SIM cards, along with additional information, such as placing identifying information on the SIM card in a non-hackable manner. - The memory may include, for example, flash memory and/or NVRAM memory, as discussed below. In one implementation, a computer program product is tangibly embodied in an information carrier. The computer program product contains instructions that, when executed, perform one or more methods, such as those described above. The information carrier is a computer- or machine-readable medium, such as the
memory 2264, expansion memory 2274, or memory on processor 2252. -
Computing device 2250 may communicate wirelessly through communication interface 2266, which may include digital signal processing circuitry where necessary. Communication interface 2266 may provide for communications under various modes or protocols, such as GSM voice calls, SMS, EMS, or MMS messaging, CDMA, TDMA, PDC, WCDMA, CDMA2000, or GPRS, among others. Such communication may occur, for example, through transceiver 2268 (e.g., a radio-frequency transceiver). In addition, short-range communication may occur, such as using a Bluetooth, WiFi, or other such transceiver (not shown). In addition, GPS receiver module 2270 may provide additional wireless data to computing device 2250, which may be used as appropriate by applications running on computing device 2250. -
Computing device 2250 may also communicate audibly using audio codec 2260, which may receive spoken information from a user and convert it to usable digital information. Audio codec 2260 may likewise generate audible sound for a user, such as through a speaker, e.g., in a handset of computing device 2250. Such sound may include sound from voice telephone calls, may include recorded sound (e.g., voice messages, music files, etc.), and may also include sound generated by applications operating on computing device 2250. - The
computing device 2250 may be implemented in a number of different forms, as shown in the figure. For example, it may be implemented as a cellular telephone 2280. It may also be implemented as part of a smartphone 2282, personal digital assistant, or other mobile device. - Various implementations of the systems and techniques described here can be realized in digital electronic circuitry, integrated circuitry, specially designed ASICs (application specific integrated circuits), computer hardware, firmware, software, and/or combinations thereof. These various implementations can include implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, coupled to receive data and instructions from, and to transmit data and instructions to, a storage system, at least one input device, and at least one output device.
- These computer programs (also known as programs, software, software applications, or code) include machine instructions for a programmable processor, and can be implemented in a high-level procedural and/or object-oriented programming language, and/or in assembly/machine language. Other programming paradigms can be used, e.g., functional programming, logical programming, or other programming. As used herein, the terms “machine-readable medium” and “computer-readable medium” refer to any computer program product, apparatus and/or device (e.g., magnetic discs, optical disks, memory, Programmable Logic Devices (PLDs)) used to provide machine instructions and/or data to a programmable processor, including a machine-readable medium that receives machine instructions as a machine-readable signal. The term “machine-readable signal” refers to any signal used to provide machine instructions and/or data to a programmable processor.
- To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to the user and a keyboard and a pointing device (e.g., a mouse or a trackball) by which the user can provide input to the computer. Other kinds of devices can be used to provide for interaction with a user as well; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user can be received in any form, including acoustic, speech, or tactile input.
- The systems and techniques described here can be implemented in a computing system that includes a back end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front end component (e.g., a client computer having a graphical user interface or a Web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back end, middleware, or front end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include a local area network (“LAN”), a wide area network (“WAN”), and the Internet.
- The computing system can include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
- While this specification contains many specific implementation details, these should not be construed as limitations on the scope of any inventions or of what may be claimed, but rather as descriptions of features specific to particular implementations of particular inventions. Certain features that are described in this specification in the context of separate implementations can also be implemented in combination in a single implementation. Conversely, various features that are described in the context of a single implementation can also be implemented in multiple implementations separately or in any suitable sub-combination. Moreover, although features may be described above as acting in certain combinations and even initially claimed as such, one or more features from a claimed combination can in some cases be excised from the combination, and the claimed combination may be directed to a sub-combination or variation of a sub-combination.
- Similarly, while operations are depicted in the drawings in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. In certain circumstances, multitasking and parallel processing may be advantageous. Moreover, the separation of various system components in the implementations described above should not be understood as requiring such separation in all implementations, and it should be understood that the described program components and systems can generally be integrated together in a single software product or packaged into multiple software products.
- Thus, particular implementations of the subject matter have been described. Other implementations are within the scope of the following claims. In some cases, the actions recited in the claims can be performed in a different order and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In certain implementations, multitasking and parallel processing may be advantageous.
Claims (20)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US16/146,484 US20190045252A1 (en) | 2016-12-30 | 2018-09-28 | Digital video file generation |
Applications Claiming Priority (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201662441081P | 2016-12-30 | 2016-12-30 | |
US15/860,545 US10123065B2 (en) | 2016-12-30 | 2018-01-02 | Digital video file generation |
US16/146,484 US20190045252A1 (en) | 2016-12-30 | 2018-09-28 | Digital video file generation |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US15/860,545 Continuation US10123065B2 (en) | 2016-12-30 | 2018-01-02 | Digital video file generation |
Publications (1)
Publication Number | Publication Date |
---|---|
US20190045252A1 true US20190045252A1 (en) | 2019-02-07 |
Family
ID=62710986
Family Applications (5)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US15/860,545 Active US10123065B2 (en) | 2016-12-30 | 2018-01-02 | Digital video file generation |
US15/860,568 Active US10110942B2 (en) | 2016-12-30 | 2018-01-02 | User relationship enhancement for social media platform |
US16/146,484 Abandoned US20190045252A1 (en) | 2016-12-30 | 2018-09-28 | Digital video file generation |
US16/147,048 Active 2038-10-29 US11284145B2 (en) | 2016-12-30 | 2018-09-28 | User relationship enhancement for social media platform |
US17/698,964 Active US11831939B2 (en) | 2016-12-30 | 2022-03-18 | Personalized digital media file generation |
Family Applications Before (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US15/860,545 Active US10123065B2 (en) | 2016-12-30 | 2018-01-02 | Digital video file generation |
US15/860,568 Active US10110942B2 (en) | 2016-12-30 | 2018-01-02 | User relationship enhancement for social media platform |
Family Applications After (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US16/147,048 Active 2038-10-29 US11284145B2 (en) | 2016-12-30 | 2018-09-28 | User relationship enhancement for social media platform |
US17/698,964 Active US11831939B2 (en) | 2016-12-30 | 2022-03-18 | Personalized digital media file generation |
Country Status (2)
Country | Link |
---|---|
US (5) | US10123065B2 (en) |
WO (1) | WO2018126279A1 (en) |
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20200357405A1 (en) * | 2018-01-29 | 2020-11-12 | Ntt Docomo, Inc. | Interactive system |
US10891103B1 (en) | 2016-04-18 | 2021-01-12 | Look Sharp Labs, Inc. | Music-based social networking multi-media application and related methods |
US20210375247A1 (en) * | 2020-05-29 | 2021-12-02 | Daniel Patrick Murphy | Alternative method to real-time bidding systems by optimizing aggregate sales through viral pricing within the digital entertainment industry and audio file publishing rights tracking through metadata efficiencies |
US11284145B2 (en) | 2016-12-30 | 2022-03-22 | Mora Global, Inc. | User relationship enhancement for social media platform |
US11481434B1 (en) | 2018-11-29 | 2022-10-25 | Look Sharp Labs, Inc. | System and method for contextual data selection from electronic data files |
WO2022226577A1 (en) * | 2021-04-26 | 2022-11-03 | Rodd Martin | A digital video virtual concierge user interface system |
US20230017181A1 (en) * | 2019-08-29 | 2023-01-19 | Rovi Guides, Inc. | Systems and methods for generating personalized content |
GB2621040A (en) * | 2021-04-26 | 2024-01-31 | Martin Rodd | A digital video concierge user interface system |
Families Citing this family (102)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20160019544A1 (en) * | 2014-07-16 | 2016-01-21 | MapMyld, Inc. | Systems and methods for augmenting transactions using digital identity and relationship maps |
US10417677B2 (en) * | 2012-01-30 | 2019-09-17 | Gift Card Impressions, LLC | Group video generating system |
US9132436B2 (en) | 2012-09-21 | 2015-09-15 | Applied Materials, Inc. | Chemical control features in wafer process equipment |
US10256079B2 (en) | 2013-02-08 | 2019-04-09 | Applied Materials, Inc. | Semiconductor processing systems having multiple plasma configurations |
US11637002B2 (en) | 2014-11-26 | 2023-04-25 | Applied Materials, Inc. | Methods and systems to enhance process uniformity |
US20160225652A1 (en) | 2015-02-03 | 2016-08-04 | Applied Materials, Inc. | Low temperature chuck for plasma processing systems |
GB2581032B (en) | 2015-06-22 | 2020-11-04 | Time Machine Capital Ltd | System and method for onset detection in a digital signal |
US9741593B2 (en) | 2015-08-06 | 2017-08-22 | Applied Materials, Inc. | Thermal management systems and methods for wafer processing systems |
US10504700B2 (en) | 2015-08-27 | 2019-12-10 | Applied Materials, Inc. | Plasma etching systems and methods with secondary plasma injection |
US9721551B2 (en) | 2015-09-29 | 2017-08-01 | Amper Music, Inc. | Machines, systems, processes for automated music composition and generation employing linguistic and/or graphical icon based musical experience descriptions |
US10854180B2 (en) | 2015-09-29 | 2020-12-01 | Amper Music, Inc. | Method of and system for controlling the qualities of musical energy embodied in and expressed by digital music to be automatically composed and generated by an automated music composition and generation engine |
US20170221125A1 (en) * | 2016-02-03 | 2017-08-03 | International Business Machines Corporation | Matching customer and product behavioral traits |
US10504754B2 (en) | 2016-05-19 | 2019-12-10 | Applied Materials, Inc. | Systems and methods for improved semiconductor etching and component protection |
US9865484B1 (en) | 2016-06-29 | 2018-01-09 | Applied Materials, Inc. | Selective etch using material modification and RF pulsing |
US10546729B2 (en) | 2016-10-04 | 2020-01-28 | Applied Materials, Inc. | Dual-channel showerhead with improved profile |
CN108259315A (en) * | 2017-01-16 | 2018-07-06 | 广州市动景计算机科技有限公司 | Online picture sharing method, equipment, client and electronic equipment |
US10431429B2 (en) | 2017-02-03 | 2019-10-01 | Applied Materials, Inc. | Systems and methods for radial and azimuthal control of plasma uniformity |
US10943834B2 (en) | 2017-03-13 | 2021-03-09 | Applied Materials, Inc. | Replacement contact process |
US10999296B2 (en) | 2017-05-15 | 2021-05-04 | Forcepoint, LLC | Generating adaptive trust profiles using information derived from similarly situated organizations |
US10862927B2 (en) | 2017-05-15 | 2020-12-08 | Forcepoint, LLC | Dividing events into sessions during adaptive trust profile operations |
US10447718B2 (en) | 2017-05-15 | 2019-10-15 | Forcepoint Llc | User profile definition and management |
US10999297B2 (en) | 2017-05-15 | 2021-05-04 | Forcepoint, LLC | Using expected behavior of an entity when prepopulating an adaptive trust profile |
US10917423B2 (en) | 2017-05-15 | 2021-02-09 | Forcepoint, LLC | Intelligently differentiating between different types of states and attributes when using an adaptive trust profile |
US9882918B1 (en) | 2017-05-15 | 2018-01-30 | Forcepoint, LLC | User behavior profile in a blockchain |
US10129269B1 (en) | 2017-05-15 | 2018-11-13 | Forcepoint, LLC | Managing blockchain access to user profile information |
US10943019B2 (en) | 2017-05-15 | 2021-03-09 | Forcepoint, LLC | Adaptive trust profile endpoint |
US10623431B2 (en) | 2017-05-15 | 2020-04-14 | Forcepoint Llc | Discerning psychological state from correlated user behavior and contextual information |
US11276590B2 (en) | 2017-05-17 | 2022-03-15 | Applied Materials, Inc. | Multi-zone semiconductor substrate supports |
US11276559B2 (en) | 2017-05-17 | 2022-03-15 | Applied Materials, Inc. | Semiconductor processing chamber for multiple precursor flow |
USD883300S1 (en) * | 2017-05-22 | 2020-05-05 | Subsplash Ip, Llc | Display screen or portion thereof with graphical user interface |
USD878402S1 (en) * | 2017-05-22 | 2020-03-17 | Subsplash Ip, Llc | Display screen or portion thereof with transitional graphical user interface |
USD878386S1 (en) * | 2017-05-22 | 2020-03-17 | Subsplash Ip, Llc | Display screen or portion thereof with transitional graphical user interface |
US10581953B1 (en) * | 2017-05-31 | 2020-03-03 | Snap Inc. | Real-time content integration based on machine learned selections |
US10839002B2 (en) * | 2017-06-04 | 2020-11-17 | Apple Inc. | Defining a collection of media content items for a relevant interest |
US10297458B2 (en) | 2017-08-07 | 2019-05-21 | Applied Materials, Inc. | Process window widening using coated parts in plasma etch processes |
US20190065610A1 (en) * | 2017-08-22 | 2019-02-28 | Ravneet Singh | Apparatus for generating persuasive rhetoric |
US11157700B2 (en) | 2017-09-12 | 2021-10-26 | AebeZe Labs | Mood map for assessing a dynamic emotional or mental state (dEMS) of a user |
US11362981B2 (en) | 2017-09-12 | 2022-06-14 | AebeZe Labs | System and method for delivering a digital therapeutic from a parsed electronic message |
US10545720B2 (en) * | 2017-09-29 | 2020-01-28 | Spotify Ab | Automatically generated media preview |
US10903054B2 (en) | 2017-12-19 | 2021-01-26 | Applied Materials, Inc. | Multi-zone gas distribution systems and methods |
US11328909B2 (en) | 2017-12-22 | 2022-05-10 | Applied Materials, Inc. | Chamber conditioning and removal processes |
US10854426B2 (en) | 2018-01-08 | 2020-12-01 | Applied Materials, Inc. | Metal recess for semiconductor structures |
US10964512B2 (en) | 2018-02-15 | 2021-03-30 | Applied Materials, Inc. | Semiconductor processing chamber multistage mixing apparatus and methods |
CN113965807B (en) * | 2018-02-27 | 2022-12-13 | 腾讯科技(深圳)有限公司 | Message pushing method, device, terminal, server and storage medium |
US11043245B2 (en) * | 2018-02-28 | 2021-06-22 | Vertigo Media, Inc. | System and method for compiling a singular video file from user-generated video file fragments |
US10319600B1 (en) | 2018-03-12 | 2019-06-11 | Applied Materials, Inc. | Thermal silicon etch |
US10886137B2 (en) | 2018-04-30 | 2021-01-05 | Applied Materials, Inc. | Selective nitride removal |
US11176607B1 (en) | 2018-06-28 | 2021-11-16 | Square, Inc. | Capital loan optimization |
US11277368B1 (en) * | 2018-07-23 | 2022-03-15 | Snap Inc. | Messaging system |
US10805310B2 (en) * | 2018-08-10 | 2020-10-13 | Lenovo (Singapore) Pte. Ltd. | Content availability modification |
US11049755B2 (en) | 2018-09-14 | 2021-06-29 | Applied Materials, Inc. | Semiconductor substrate supports with embedded RF shield |
US10892198B2 (en) | 2018-09-14 | 2021-01-12 | Applied Materials, Inc. | Systems and methods for improved performance in semiconductor processing |
US11062887B2 (en) | 2018-09-17 | 2021-07-13 | Applied Materials, Inc. | High temperature RF heater pedestals |
US11417534B2 (en) | 2018-09-21 | 2022-08-16 | Applied Materials, Inc. | Selective material removal |
JP7007249B2 (en) * | 2018-09-28 | 2022-01-24 | 富士フイルム株式会社 | Image processing device, image processing method and image processing program |
US11183140B2 (en) * | 2018-10-10 | 2021-11-23 | International Business Machines Corporation | Human relationship-aware augmented display |
US11682560B2 (en) | 2018-10-11 | 2023-06-20 | Applied Materials, Inc. | Systems and methods for hafnium-containing film removal |
US11121002B2 (en) | 2018-10-24 | 2021-09-14 | Applied Materials, Inc. | Systems and methods for etching metals and metal derivatives |
EP3874392A1 (en) * | 2018-11-02 | 2021-09-08 | Mycollected, Inc. | Computer-implemented, user-controlled method of automatically organizing, storing, and sharing personal information |
US11437242B2 (en) | 2018-11-27 | 2022-09-06 | Applied Materials, Inc. | Selective removal of silicon-containing materials |
US11355098B1 (en) * | 2018-12-13 | 2022-06-07 | Amazon Technologies, Inc. | Centralized feedback service for performance of virtual assistant |
US10852323B2 (en) * | 2018-12-28 | 2020-12-01 | Rohde & Schwarz Gmbh & Co. Kg | Measurement apparatus and method for analyzing a waveform of a signal |
US11721527B2 (en) | 2019-01-07 | 2023-08-08 | Applied Materials, Inc. | Processing chamber mixing systems |
US10920319B2 (en) | 2019-01-11 | 2021-02-16 | Applied Materials, Inc. | Ceramic showerheads with conductive electrodes |
CN109951526B (en) * | 2019-02-20 | 2021-08-24 | 腾讯音乐娱乐科技(深圳)有限公司 | Lyric transmission method and related equipment |
US11250468B2 (en) | 2019-02-28 | 2022-02-15 | International Business Machines Corporation | Prompting web-based user interaction |
US11790408B2 (en) * | 2019-03-01 | 2023-10-17 | Vungle, Inc. | Automated video advertisement creation |
US10631047B1 (en) | 2019-03-29 | 2020-04-21 | Pond5 Inc. | Online video editor |
KR102656963B1 (en) * | 2019-04-03 | 2024-04-16 | 삼성전자 주식회사 | Electronic device and Method of controlling thereof |
US10997295B2 (en) | 2019-04-26 | 2021-05-04 | Forcepoint, LLC | Adaptive trust profile reference architecture |
US11388132B1 (en) * | 2019-05-29 | 2022-07-12 | Meta Platforms, Inc. | Automated social media replies |
US11720933B2 (en) * | 2019-08-30 | 2023-08-08 | Soclip! | Automatic adaptive video editing |
US11562014B1 (en) * | 2019-09-04 | 2023-01-24 | Meta Platforms, Inc. | Generating visual media collections for a dynamic social networking account |
US11037538B2 (en) | 2019-10-15 | 2021-06-15 | Shutterstock, Inc. | Method of and system for automated musical arrangement and musical instrument performance style transformation supported within an automated music performance system |
US10964299B1 (en) | 2019-10-15 | 2021-03-30 | Shutterstock, Inc. | Method of and system for automatically generating digital performances of music compositions using notes selected from virtual musical instruments based on the music-theoretic states of the music compositions |
US11024275B2 (en) | 2019-10-15 | 2021-06-01 | Shutterstock, Inc. | Method of digitally performing a music composition using virtual musical instruments having performance logic executing within a virtual musical instrument (VMI) library management system |
US11257105B2 (en) * | 2019-11-21 | 2022-02-22 | Rockspoon, Inc. | System and method for customer and business referral with a concierge system |
US11587107B2 (en) * | 2019-11-21 | 2023-02-21 | Rockspoon, Inc. | System and method for customer and business referrals with a smart device concierge system |
US11783358B2 (en) * | 2019-11-21 | 2023-10-10 | Rockspoon, Inc. | System and method for customer and business referral with a concierge system |
US11228544B2 (en) * | 2020-01-09 | 2022-01-18 | International Business Machines Corporation | Adapting communications according to audience profile from social media |
EP4115628A1 (en) * | 2020-03-06 | 2023-01-11 | algoriddim GmbH | Playback transition from first to second audio track with transition functions of decomposed signals |
US11818286B2 (en) * | 2020-03-30 | 2023-11-14 | Snap Inc. | Avatar recommendation and reply |
US11625873B2 (en) | 2020-03-30 | 2023-04-11 | Snap Inc. | Personalized media overlay recommendation |
US20210303618A1 (en) * | 2020-03-31 | 2021-09-30 | Aries Adaptive Media, LLC | Processes and systems for mixing audio tracks according to a template |
WO2021201893A1 (en) * | 2020-04-01 | 2021-10-07 | Hewlett-Packard Development Company, L.P. | Item recommendation by chatbot |
US11151229B1 (en) | 2020-04-10 | 2021-10-19 | Avila Technology, LLC | Secure messaging service with digital rights management using blockchain technology |
US10873852B1 (en) | 2020-04-10 | 2020-12-22 | Avila Technology, LLC | POOFster: a secure mobile text message and object sharing application, system, and method for same |
US12003821B2 (en) * | 2020-04-20 | 2024-06-04 | Disney Enterprises, Inc. | Techniques for enhanced media experience |
CN113010397B (en) * | 2021-03-17 | 2023-01-20 | 维沃移动通信有限公司 | Social contact track generation method and social contact track generation device |
CN113050857B (en) * | 2021-03-26 | 2023-02-24 | 北京字节跳动网络技术有限公司 | Music sharing method and device, electronic equipment and storage medium |
US20220317635A1 (en) * | 2021-04-06 | 2022-10-06 | International Business Machines Corporation | Smart ecosystem curiosity-based self-learning |
EP4327558A1 (en) | 2021-04-20 | 2024-02-28 | Block, Inc. | Live playback streams |
KR102359543B1 (en) * | 2021-06-04 | 2022-02-08 | 셀렉트스타 주식회사 | Method, Computing Device and Computer-readable Medium for Dividing Work and Providing it to Workers in Crowdsourcing |
US11540013B1 (en) | 2021-06-23 | 2022-12-27 | Rovi Guides, Inc. | Systems and methods for increasing first user subscription |
US20230045426A1 (en) * | 2021-08-05 | 2023-02-09 | Yaar Inc. | Instruction interpretation for web task automation |
WO2023133237A1 (en) * | 2022-01-07 | 2023-07-13 | AugX Labs, Inc. | Rapid generation of visual content from audio |
US11877050B2 (en) | 2022-01-20 | 2024-01-16 | Qualcomm Incorporated | User interface for image capture |
US11895368B2 (en) * | 2022-03-04 | 2024-02-06 | Humane, Inc. | Generating, storing, and presenting content based on a memory metric |
US20230368533A1 (en) * | 2022-05-13 | 2023-11-16 | Lakshminath Reddy Dondeti | Method and system for automatically creating loop videos |
US20230379156A1 (en) | 2022-05-23 | 2023-11-23 | Snap Inc. | Unlocking sharing destinations in an interaction system |
US12062386B2 (en) * | 2022-07-29 | 2024-08-13 | Rovi Guides, Inc. | Systems and methods of generating personalized video clips for songs using a pool of short videos |
CN116866498B (en) * | 2023-06-15 | 2024-04-05 | 天翼爱音乐文化科技有限公司 | Video template generation method and device, electronic equipment and storage medium |
Family Cites Families (31)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2002052373A2 (en) * | 2000-12-22 | 2002-07-04 | Torrance Andrew W | Collecting user responses over a network |
US20030014272A1 (en) * | 2001-07-12 | 2003-01-16 | Goulet Mary E. | E-audition for a musical work |
WO2004099900A2 (en) * | 2002-12-20 | 2004-11-18 | Banker Shailen V | Linked information system |
US7603413B1 (en) * | 2005-04-07 | 2009-10-13 | Aol Llc | Using automated agents to facilitate chat communications |
IL173222A0 (en) | 2006-01-18 | 2006-06-11 | Clip In Touch Internat Ltd | Apparatus and method for creating and transmitting unique dynamically personalized multimedia messages |
WO2007092946A2 (en) * | 2006-02-08 | 2007-08-16 | Entermedia Corporation | Downloadable server-client collaborative mobile social computing application |
US7789304B2 (en) * | 2006-08-16 | 2010-09-07 | International Business Machines Corporation | Child lock for electronic device |
US8347213B2 (en) | 2007-03-02 | 2013-01-01 | Animoto, Inc. | Automatically generating audiovisual works |
US20090089184A1 (en) | 2007-09-28 | 2009-04-02 | Embarq Holdings Company, Llc | Content portal for media distribution |
US9602605B2 (en) * | 2007-10-26 | 2017-03-21 | Facebook, Inc. | Sharing digital content on a social network |
KR20090043753A (en) * | 2007-10-30 | 2009-05-07 | 엘지전자 주식회사 | Method and apparatus for controlling multitasking of terminal device with touch screen |
US20110066940A1 (en) | 2008-05-23 | 2011-03-17 | Nader Asghari Kamrani | Music/video messaging system and method |
KR101546774B1 (en) * | 2008-07-29 | 2015-08-24 | 엘지전자 주식회사 | Mobile terminal and operation control method thereof |
US8281027B2 (en) | 2008-09-19 | 2012-10-02 | Yahoo! Inc. | System and method for distributing media related to a location |
US8860865B2 (en) * | 2009-03-02 | 2014-10-14 | Burning Moon, Llc | Assisted video creation utilizing a camera |
US8397253B2 (en) | 2009-07-23 | 2013-03-12 | Fmr Llc | Inserting personalized information into digital content |
US8379668B2 (en) * | 2010-01-21 | 2013-02-19 | Comcast Cable Communications, Llc | Controlling networked media capture devices |
US8682971B2 (en) | 2010-06-22 | 2014-03-25 | International Business Machines Corporation | Relationship management in a social network service |
US9183509B2 (en) | 2011-05-11 | 2015-11-10 | Ari M. Frank | Database of affective response and attention levels |
US9292882B2 (en) | 2011-07-20 | 2016-03-22 | Mark Blinder | Social circle based social networking |
US20130317936A1 (en) * | 2012-05-25 | 2013-11-28 | Apple Inc. | Digital mixed tapes |
IN2015DN02124A (en) * | 2012-08-31 | 2015-08-14 | Funke Digital Tv Guide Gmbh | |
US20140164507A1 (en) * | 2012-12-10 | 2014-06-12 | Rawllin International Inc. | Media content portions recommended |
US20150004591A1 (en) | 2013-06-27 | 2015-01-01 | DoSomething.Org | Device, system, method, and computer-readable medium for providing an educational, text-based interactive game |
US10169447B2 (en) * | 2014-02-24 | 2019-01-01 | Entefy Inc. | System and method of message threading for a multi-format, multi-protocol communication system |
US10165069B2 (en) | 2014-03-18 | 2018-12-25 | Outbrain Inc. | Provisioning personalized content recommendations |
US20160292648A1 (en) | 2015-03-31 | 2016-10-06 | GymLink, Inc. | Web-Based System and Method for Facilitating In-Person Group Activities Having Democratic Administration by Group Members |
WO2016197141A1 (en) * | 2015-06-05 | 2016-12-08 | Olav Bokestad | System and method for posting content to networks for future access |
US20170250930A1 (en) * | 2016-02-29 | 2017-08-31 | Outbrain Inc. | Interactive content recommendation personalization assistant |
US10165316B2 (en) * | 2016-03-31 | 2018-12-25 | Viacom International Inc. | Device, system, and method for hybrid media content distribution |
US10123065B2 (en) | 2016-12-30 | 2018-11-06 | Mora Global, Inc. | Digital video file generation |
- 2018
  - 2018-01-02 US US15/860,545 patent/US10123065B2/en active Active
  - 2018-01-02 US US15/860,568 patent/US10110942B2/en active Active
  - 2018-01-02 WO PCT/US2018/012102 patent/WO2018126279A1/en active Application Filing
  - 2018-09-28 US US16/146,484 patent/US20190045252A1/en not_active Abandoned
  - 2018-09-28 US US16/147,048 patent/US11284145B2/en active Active
- 2022
  - 2022-03-18 US US17/698,964 patent/US11831939B2/en active Active
Patent Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20100050083A1 (en) * | 2006-07-06 | 2010-02-25 | Sundaysky Ltd. | Automatic generation of video from structured content |
US20110064388A1 (en) * | 2006-07-11 | 2011-03-17 | Pandoodle Corp. | User Customized Animated Video and Method For Making the Same |
US20130216200A1 (en) * | 2012-02-20 | 2013-08-22 | Paul Howett | Systems and methods for variable video production, distribution and presentation |
US20150312618A1 (en) * | 2014-04-28 | 2015-10-29 | Activevideo Networks, Inc. | Systems and Methods for Generating a Full-Motion Video Mosaic Interface for Content Discovery with User-Configurable Filters |
US20160014482A1 (en) * | 2014-07-14 | 2016-01-14 | The Board Of Trustees Of The Leland Stanford Junior University | Systems and Methods for Generating Video Summary Sequences From One or More Video Segments |
US20160057492A1 (en) * | 2014-08-22 | 2016-02-25 | Netflix, Inc. | Dynamically adjusting video merchandising to reflect user preferences |
US20180007404A1 (en) * | 2015-01-07 | 2018-01-04 | Crea-Japan Inc. | Video creation server, video creation program, video creation method, and video creation system |
US20180132011A1 (en) * | 2015-04-16 | 2018-05-10 | W.S.C. Sports Technologies Ltd. | System and method for creating and distributing multimedia content |
US20170134793A1 (en) * | 2015-11-06 | 2017-05-11 | Rovi Guides, Inc. | Systems and methods for creating rated and curated spectator feeds |
US20180089194A1 (en) * | 2016-09-28 | 2018-03-29 | Idomoo Ltd | System and method for generating customizable encapsulated media files |
Cited By (17)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10891103B1 (en) | 2016-04-18 | 2021-01-12 | Look Sharp Labs, Inc. | Music-based social networking multi-media application and related methods |
US11169770B1 (en) | 2016-04-18 | 2021-11-09 | Look Sharp Labs, Inc. | Music-based social networking multi-media application and related methods |
US11797265B1 (en) | 2016-04-18 | 2023-10-24 | Look Sharp Labs, Inc. | Music-based social networking multi-media application and related methods |
US11449306B1 (en) | 2016-04-18 | 2022-09-20 | Look Sharp Labs, Inc. | Music-based social networking multi-media application and related methods |
US11284145B2 (en) | 2016-12-30 | 2022-03-22 | Mora Global, Inc. | User relationship enhancement for social media platform |
US11514910B2 (en) * | 2018-01-29 | 2022-11-29 | Ntt Docomo, Inc. | Interactive system |
US20200357405A1 (en) * | 2018-01-29 | 2020-11-12 | Ntt Docomo, Inc. | Interactive system |
US11481434B1 (en) | 2018-11-29 | 2022-10-25 | Look Sharp Labs, Inc. | System and method for contextual data selection from electronic data files |
US11971927B1 (en) | 2018-11-29 | 2024-04-30 | Look Sharp Labs, Inc. | System and method for contextual data selection from electronic media content |
US20240152687A1 (en) * | 2019-08-29 | 2024-05-09 | Rovi Guides, Inc. | Systems and methods for generating personalized content |
US20230017181A1 (en) * | 2019-08-29 | 2023-01-19 | Rovi Guides, Inc. | Systems and methods for generating personalized content |
US11922112B2 (en) * | 2019-08-29 | 2024-03-05 | Rovi Guides, Inc. | Systems and methods for generating personalized content |
WO2021242733A1 (en) * | 2020-05-29 | 2021-12-02 | Murphy Daniel P | Alternative method to real-time bidding systems by optimizing aggregate sales through viral pricing within the digital entertainment industry and audio file publishing rights tracking through metadata efficiencies |
US20210375247A1 (en) * | 2020-05-29 | 2021-12-02 | Daniel Patrick Murphy | Alternative method to real-time bidding systems by optimizing aggregate sales through viral pricing within the digital entertainment industry and audio file publishing rights tracking through metadata efficiencies |
US12014711B2 (en) * | 2020-05-29 | 2024-06-18 | Daniel Patrick Murphy | Alternative method to real-time bidding systems by optimizing aggregate sales through viral pricing within the digital entertainment industry and audio file publishing rights tracking through metadata efficiencies |
WO2022226577A1 (en) * | 2021-04-26 | 2022-11-03 | Rodd Martin | A digital video virtual concierge user interface system |
GB2621040A (en) * | 2021-04-26 | 2024-01-31 | Martin Rodd | A digital video concierge user interface system |
Also Published As
Publication number | Publication date |
---|---|
US20180188916A1 (en) | 2018-07-05 |
US11831939B2 (en) | 2023-11-28 |
US10123065B2 (en) | 2018-11-06 |
US11284145B2 (en) | 2022-03-22 |
WO2018126279A1 (en) | 2018-07-05 |
US20190037264A1 (en) | 2019-01-31 |
US10110942B2 (en) | 2018-10-23 |
US20220210497A1 (en) | 2022-06-30 |
US20180192108A1 (en) | 2018-07-05 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11831939B2 (en) | 2023-11-28 | Personalized digital media file generation |
Bishop | | Anxiety, panic and self-optimization: Inequalities and the YouTube algorithm |
Schellewald | | Understanding the popularity and affordances of TikTok through user experiences |
Yang et al. | | Understanding Young Adults' TikTok Usage |
US20210357447A1 (en) | | Interactive Content Feedback System |
JP2023519070A (en) | | A media platform for exercise systems and methods |
US20190098371A1 (en) | | Media narrative presentation systems and methods with interactive and autonomous content selection |
Juluri | | Becoming a global audience |
US20130297599A1 (en) | | Music management for adaptive distraction reduction |
McDaniel | | Popular music reaction videos: Reactivity, creator labor, and the performance of listening online |
Mihelj et al. | | The challenge of flow: state socialist television between revolutionary time and everyday time |
Ko | | The Role of User Interactions In Social Media On Recommendation Algorithms: Evaluation of Tiktok's Personalization Practices From User's Perspective [Istanbul University] |
Morgan | | Music, metrics, and meaning: Australian music industries and streaming services |
Myles et al. | | Innovation & Digital Theatremaking: Rethinking Theatre with "The Show Must Go Online" |
Herman | | For who page? TikTok creators' algorithmic dependencies |
Palmer | | Pop ubiquity: cameo performance as star management |
Brasseur et al. | | BAND: A strategic framework to help indie rock musicians build their audience via streaming platforms and social media |
XU | | The determinants of creator performance on creative content platforms: Evidence from Xiaohongshu and Bilibili |
Feisthauer | | Reconsidering Contemporary Music Videos |
Sked | | Music in the Moment of "Cyber Culture:" An Outward Spiral |
Robinson et al. | | Is he a dramatist? Or, something singular! Staging Dickensian drama as practice-led research |
Rodriguez | | LGBTQ YouTube: Community and Branding through New Media |
Zanten | | In the Mood For a Vibe: Decoding Vibes in Spotify's Mood-playlists |
Olk | | Social Media and the Undergraduate Experience: Recommendations and Multi-Method Design Research in Attention and Social Media Use |
Siu | | The Rise and Fall of Popular Variety Programs: A Hong Kong Case Study |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| STPP | Information on status: patent application and granting procedure in general | Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION
| AS | Assignment | Owner name: FISH & RICHARDSON, MINNESOTA; Free format text: LIEN; ASSIGNOR: MORA GLOBAL, INC.; REEL/FRAME: 051847/0038; Effective date: 20190415
| STPP | Information on status: patent application and granting procedure in general | Free format text: NON FINAL ACTION MAILED
| STCB | Information on status: application discontinuation | Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION
| AS | Assignment | Owner name: LGI, INC., MINNESOTA; Free format text: ASSIGNMENT OF SECURITY INTEREST; ASSIGNOR: FISH & RICHARDSON P.C.; REEL/FRAME: 057177/0972; Effective date: 20210802
| AS | Assignment | Owner name: LGI, INC., TENNESSEE; Free format text: ASSIGNMENT OF ASSIGNORS INTEREST; ASSIGNOR: MORA GLOBAL, INC.; REEL/FRAME: 060156/0099; Effective date: 20220609