WO2016054006A1 - Methods and systems for multi-state recommendations - Google Patents

Methods and systems for multi-state recommendations

Info

Publication number
WO2016054006A1
Authority
WO
WIPO (PCT)
Prior art keywords
state
playlist
criteria
user
movie
Prior art date
Application number
PCT/US2015/052888
Other languages
French (fr)
Inventor
Ehud WEINSBERG
Original Assignee
Thomson Licensing
Priority date
Filing date
Publication date
Application filed by Thomson Licensing filed Critical Thomson Licensing
Publication of WO2016054006A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/40Information retrieval; Database structures therefor; File system structures therefor of multimedia data, e.g. slideshows comprising image and additional audio data
    • G06F16/43Querying
    • G06F16/438Presentation of query results
    • G06F16/4387Presentation of query results by the use of playlists

Definitions

  • the present disclosure generally relates to recommendation methods and systems and, more particularly, to recommendations directed to multiple states.
  • Home entertainment systems, including television and media centers, are converging with the Internet and providing access to a large amount of available content, such as video, movies, TV programs, music, etc.
  • This expansion in the amount of content available on-demand has been a boon for consumers of content.
  • Searching through vast catalogues of movies and TV programs, music inventories, video clip databases, etc., can be overwhelming. As a result, recommendation systems have become more popular.
  • Recommendation systems may be a feature offered by an online movie provider, may be built into a gateway, set-top box, etc., and may be a function of software applications run on personal computers, smart phones, etc.
  • a recommendation system may recommend content based on a user request. Users often request recommendations based on a particular state, e.g., state of mind, mood, etc., that the user is in. For example, a happy user may request a recommendation for a happy or funny movie, and the recommendation system may provide a list of recommended happy or funny movies.
  • a user in a sad state may request a recommendation for a dark, sad movie, and the recommendation system may provide a list of recommended dark, sad movies.
  • a user in an excited state may request a recommendation for a fast-paced, mindless movie, etc.
  • a recommendation system may be able to recommend suitable content that matches the state of the user.
  • some users may request recommendations that are not based on the state the user is in, but are based on a desired state, e.g., a state the user wishes to be in.
  • a sad user might request a recommendation for a happy movie because the user wishes to be in a happy state.
  • a recommendation system may provide a list of happy movies, and the user may commence to watch the recommended movies hoping to become happier.
  • the disconnect between the user's sad state and the happiness of the recommended movies might interfere with the connection the user feels with the movies.
  • This lack of connection may hinder or prevent the user from achieving the desired state, i.e., a happy state.
  • the user may attempt to watch the recommended movies, but may lose interest or become annoyed because the happiness of the recommended movies seems unreal or contrived in light of the user's current state.
  • a recommendation system can determine a first movie that has a beginning state that is close to the first state identified by the user, e.g., the sad state the user is currently in.
  • recommendation systems can use a closeness criteria, such as determining a difference between a happiness value of the first state and a happiness value of the beginning state of the first movie, and requiring the difference to be less than a predetermined threshold value.
  • a closeness criteria may be used to ensure the ending state of the first movie is close to the beginning state of the second movie, and to ensure the ending state of the second movie is close to the second state, e.g., the happy state desired by the user.
  • recommendation systems can provide multi-state recommendations.
  • FIG. 1 is a block diagram of an example of a system for providing multi-state recommendations and delivering content according to various embodiments.
  • FIG. 2 is a block diagram of an example of a computing system, such as a set-top box/digital video recorder (DVR), gateway, etc., that can include multi-state recommendation functionality according to various embodiments.
  • FIG. 3 illustrates an example of a touch panel input device according to various embodiments.
  • FIG. 4 illustrates another example of an input device according to various embodiments.
  • FIG. 5 is a flowchart illustrating an example of a method of providing multi-state recommendations according to various embodiments.
  • FIG. 6 illustrates a timeline of a movie including tags at various time points along the timeline according to various embodiments.
  • FIG. 7 illustrates an example in which each tag of a movie has been analyzed to determine a level of happiness associated with the tag, the movie being a happy movie according to various embodiments.
  • FIG. 8 illustrates an example in which each tag of another movie has been analyzed to determine a level of happiness associated with the tag, the movie being a sad movie according to various embodiments.
  • FIG. 9 illustrates an example of a playlist generated based on a request for a state-inducing playlist according to various embodiments.
  • FIG. 10 illustrates an example of a playlist generated based on a request for a state-balancing playlist according to various embodiments.
  • Consumers of content, such as movies, television (TV), music, etc., can have difficulty finding content they are likely to enjoy. Consumers may be faced with browsing through massive databases of content, for example, and can become overwhelmed and frustrated. Consumers may wish to obtain recommendations for content items (also referred to herein simply as "items"), e.g., movies, TV shows, songs, etc., from a recommender, such as a recommendation system.
  • Some users may request recommendations that are not based on the state the user is in, but are based on a desired state, e.g., a state the user wishes to be in.
  • a sad user might request a recommendation for a happy movie because the user wishes to be in a happy state.
  • a recommendation system may provide a list of happy movies, and the user may commence to watch the recommended movies hoping to become happier.
  • the disconnect between the user's sad state and the happiness of the recommended movies might interfere with the connection the user feels with the movies.
  • This lack of connection may hinder or prevent the user from achieving the desired state, i.e., a happy state.
  • the user may attempt to watch the recommended movies, but may lose interest or become annoyed because the happiness of the recommended movies seems unreal or contrived in light of the user's current state.
  • the recommendation system takes into account that multiple states exist in this situation, i.e., a current state (the user is sad) and a desired state (the user wants to be happy). For example, the request for a recommendation may identify that the user is in a sad state and desires to move into a happy state. Based on this information, the recommendation system may recommend a first movie that is sad at the beginning and becomes happier toward the end, and a second movie that begins at about the same level of happiness as the end of the first movie and that progresses to an ending that is even happier.
  • the sad beginning of the first movie may allow the user to become engaged more easily, and the progression through the two movies toward a happier state may be better able to move the user's state toward happy.
  • Another example of multiple state recommendations can include two or more people with different tastes trying to decide what to watch. For example, a husband may want to watch a fast-paced, mindless movie, while a wife may want to watch a slow, thought-provoking movie.
  • the multiple states can be a desired state of a first person (the husband's desired fast-pace, mindless state) and a desired state of a second person (the wife's desired slow, thought-provoking state). There may be no single movie that can satisfy both desired states.
  • Movie A that is entirely fast-paced and mindless and Movie B that is entirely slow and thought-provoking may not provide the best results. For example, if the couple watch Movie A first, the wife may find it difficult to sit through the entire movie without becoming annoyed or losing interest in movie watching. Likewise, the husband may become disinterested at the beginning of Movie B because of the extreme slowdown in pace from the end of Movie A and the sudden shock of deeply intellectual content.
  • a better recommendation may be, for example, a first movie that is fast-paced and mindless through most of the movie, but that slows down and becomes more thought-provoking towards the end, and a second movie that begins with some action and then slows down and becomes more thought-provoking.
  • a recommendation system may recommend content based on a user request that identifies multiple states, e.g., states of mind, moods, etc. For example, a user may indicate that she is sad and request a recommendation for a playlist of content items, e.g., movies, that will help her become happy.
  • the recommendation system may generate a playlist including a first movie that is sad at the beginning and becomes happier toward the end, and a second movie that begins at about the same level of happiness as the end of the first movie and that progresses to an ending that is even happier.
  • the sad beginning of the first movie may allow the user to become engaged more easily, and the progression through the two movies toward a happier state may be better able to move the user's state toward happy.
  • recommendation systems can use stored characterizations of a content item, such as tags at different time points during a movie, to determine the state of the content item at the different time points. For example, a scene in a movie may be tagged with information such as "wedding scene, tears of joy, classical music playing." The information may be stored, for example, in metadata within the movie file, which can be read by the recommendation system and used to determine a state of the movie at that time point. For example, a recommendation system may determine that the tag "wedding scene, tears of joy, classical music playing" corresponds to a happy state. In this way, for example, a recommendation system may determine the state of a movie at different points in time, such as at the beginning of the movie and at the end of the movie. A recommendation system can, for example, determine a first movie that has a beginning state that is close to the first state identified by the user, e.g., the sad state the user is currently in.
  • recommendation systems can use a closeness criteria, such as determining a difference between a happiness value of the first state and a happiness value of the beginning state of the first movie, and requiring the difference to be less than a predetermined threshold value.
  • a closeness criteria may be used to ensure the ending state of the first movie is close to the beginning state of the second movie, and to ensure the ending state of the second movie is close to the second state, e.g., the happy state desired by the user.
  • recommendation systems can provide multi-state recommendations.
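  • As a rough illustration of the closeness test described above, the following sketch compares two happiness values against a predetermined threshold. It assumes states have already been reduced to a single value on a 0.0 to 1.0 scale; the function name and the example threshold of 0.2 are illustrative assumptions, not values taken from the disclosure.

```python
def satisfies_closeness(state_value_a, state_value_b, threshold=0.2):
    """Return True if two happiness values are within a predetermined threshold.

    Both values are happiness levels on a 0.0 to 1.0 scale; the 0.2 threshold
    is only an assumed example value.
    """
    return abs(state_value_a - state_value_b) < threshold

# Example: a sad user (0.2) and a movie whose beginning scores 0.25 on happiness.
print(satisfies_closeness(0.2, 0.25))  # True: the movie's beginning is close to the user's state
```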
  • multi-state recommendations may be implemented in the recommendation system of an online movie provider.
  • FIGS. 1-4 illustrate an example of an implementation in which multi-state recommendations can be provided by a recommendation system of an online content provider.
  • It should be appreciated that various embodiments can include, for example, stand-alone multi-state recommendation systems built into a gateway, set-top box, etc., and that various embodiments can be implemented in software applications that can be executed on personal computers, smart phones, etc.
  • FIG. 1 illustrates a block diagram of an example of a system 100 for delivering content and multi-state recommendations to a home or end user.
  • the content can originate from a content source 102, such as a movie studio or production house.
  • the content may be supplied in at least one of two forms.
  • One form may be a broadcast form of content.
  • the broadcast content is provided to the broadcast affiliate manager 104, which is typically a national broadcast service, such as the American Broadcasting Company (ABC), National Broadcasting Company (NBC), Columbia Broadcasting System (CBS), etc.
  • the broadcast affiliate manager may collect and store the content, and may schedule delivery of the content over a delivery network, shown as delivery network 106.
  • Delivery network 106 may include satellite link transmission from a national center to one or more regional or local centers. Delivery network 106 may also include local content delivery using local delivery systems such as over the air broadcast, satellite broadcast, or cable broadcast.
  • the locally delivered content is provided to a user system 107 in a user's home.
  • User system 107 can include a receiving device 108 that can receive and process content and perform other functions described in more detail below. It is to be appreciated that receiving device 108 can be, for example, a set-top box, a digital video recorder (DVR), a gateway, a modem, etc. Receiving device 108 may act as entry point, or gateway, for a home network system that includes additional devices configured as either client or peer devices in the home network.
  • User system 107 can also include a display device 114. In some embodiments, display device 114 can be an external display coupled to receiving device 108.
  • receiving device 108 and display device 114 can be parts of a single device.
  • the display device 114 may be, for example, a conventional 2-D type display, an advanced 3-D display, etc.
  • User system 107 can also include an input device 116, such as a remote controller, a keyboard, a mouse, a touch panel, a touch screen, etc.
  • the input device 116 may be adapted to provide user control for the receiving device 108 and/or the display device 114.
  • input device 116 may be an external device that can couple to receiving device 108 via, for example, a wired connection, a signal transmission system, such as infra-red (IR), radio frequency (RF) communications, etc., and may include standard protocols such as universal serial bus (USB), infra-red data association (IRDA) standard, Wi-Fi, Bluetooth and the like, proprietary protocols, etc.
  • receiving device 108 and input device 116 can be part of the same device. Operations of input device 116 will be described in further detail below.
  • Special content may include, for example, premium viewing content, pay-per-view content, Internet access, other content otherwise not provided to the broadcast affiliate manager, e.g., movies, video games, other video elements, etc.
  • the special content may be content requested by the user, such as a webpage, a movie download, etc.
  • the special content may be delivered to a content manager 110.
  • the content manager 110 may be a service provider, such as an Internet website, affiliated, for instance, with a content provider, broadcast service, or delivery network service.
  • the content manager 110 may also incorporate Internet content into the delivery system.
  • the content manager 110 may deliver the content to the user's receiving device 108 over a communication network, e.g., communication network 112.
  • Communication network 112 may include high-speed broadband Internet type communications systems. It is important to note that the content from the broadcast affiliate manager 104 may also be delivered using all or parts of communication network 112 and content from the content manager 110 may be delivered using all or parts of delivery network 106. In some embodiments, the user may obtain content, such as webpages, etc., directly from the Internet 113 via communication network 112 without necessarily having the content managed by the content manager 110.
  • the special content is provided as an
  • the special content may completely replace some programming content provided as broadcast content.
  • the special content may be completely separate from the broadcast content, and may simply be a media alternative that the user may choose to utilize.
  • the special content may be a library of movies that are not yet available as broadcast content.
  • the receiving device 108 may receive different types of content from one or both of delivery network 106 and communication network 112.
  • the receiving device 108 processes the content, and provides a separation of the content based on user preferences and commands.
  • the receiving device 108 may also include a storage device, such as a hard drive or optical disk drive, for recording and playing back audio and video content. Further details of the operation of the receiving device 108 and features associated with playing back stored content will be described below in relation to FIG. 2.
  • the processed content is provided to display device 114.
  • content manager 110 also controls a recommendation system 117 that can include a recommendation engine 118 and a database 120. Recommendation system 117 can process multi-state recommendation requests.
  • Although recommendation system 117 is controlled by content manager 110 in this example, it should be appreciated that in some embodiments, recommendation systems can be operated by other entities, such as separate recommendation service providers whose primary service is providing recommendations.
  • FIG. 2 includes a block diagram of an example of a computing system, such as a receiving device 200.
  • Receiving device 200 may operate similar to receiving device 108 described in FIG. 1 and may be included as part of a gateway device, modem, set-top box, personal computer, television, tablet computer, smartphone, etc.
  • Receiving device 200 may also be incorporated into other systems including an audio device or a display device.
  • the receiving device 200 may be, for example, a set top box coupled to an external display device (e.g., a television), a personal computer coupled to a display device (e.g., a computer monitor), etc.
  • the receiving device 200 may include an integrated display device, for example, a portable device such as a tablet computer, a smartphone, etc.
  • the input signal receiver 202 may include, for example, receiver circuits used for receiving, demodulation, and decoding signals provided over one of the several possible networks including over the air, cable, satellite, Ethernet, fiber and phone line networks.
  • the desired input signal may be obtained based on user input provided through a user interface 216.
  • the user input may include search terms for a search
  • the input signal received by input signal receiver 202 may include search results.
  • User interface 216 can be coupled to an input device, such as input device 116, and can receive and process corresponding user inputs, for example, keystrokes, button presses, touch inputs, such as gestures, audio input, such as voice input, etc., from the input device.
  • User interface 216 may be adapted to interface to a cellular phone, a tablet, a mouse, a remote controller, etc.
  • the decoded output signal is provided to an input stream processor 204.
  • the input stream processor 204 performs the final signal selection and processing, and includes separation of video content from audio content for the content stream.
  • the audio content is provided to an audio processor 206 for conversion from the received format, such as a compressed digital signal, to an analog waveform signal.
  • the analog waveform signal is provided to an audio interface 208 and further to the display device or audio amplifier.
  • the audio interface 208 may provide a digital signal to an audio output device or display device using a High-Definition Multimedia Interface (HDMI) cable or an alternate audio interface.
  • the audio interface may also include amplifiers for driving one or more sets of speakers.
  • the audio processor 206 also performs any necessary conversion for the storage of the audio signals.
  • the video output from the input stream processor 204 is provided to a video processor 210.
  • the video signal may be one of several formats.
  • the video processor 210 provides, as necessary, a conversion of the video content, based on the input signal format.
  • the video processor 210 also performs any necessary conversion for the storage of the video signals.
  • a storage device 212 stores audio and video content received at the input.
  • the storage device 212 allows later retrieval and playback of the content under the control of a controller 214 and also based on commands, e.g., navigation instructions such as fast-forward (FF) and rewind (RW), received from user interface 216.
  • the storage device 212 may be, for example, a hard disk drive, one or more large capacity integrated electronic memories, such as static RAM (SRAM), or dynamic RAM (DRAM), an interchangeable optical disk storage system such as a compact disk (CD) drive or digital video disk (DVD) drive, etc.
  • the converted video signal, from the video processor 210, either originating from the input or from the storage device 212, is provided to the display interface 218.
  • the display interface 218 further provides the display signal to a display device, such as display device 114, described above.
  • the controller 214 is interconnected via a bus to several of the components of the device 200, including the input stream processor 204, audio processor 206, video processor 210, storage device 212, and user interface 216.
  • the controller 214 manages the conversion process for converting the input stream signal into a signal for storage on the storage device or for display.
  • the controller 214 also manages the retrieval and playback of stored content.
  • the controller 214 can receive multi-state information input by a user, as described below in more detail.
  • the controller 214 is further coupled to control memory 220 (e.g., volatile or non-volatile memory, including RAM, SRAM, DRAM, ROM, programmable ROM (PROM), flash memory, electronically programmable ROM (EPROM), electronically erasable programmable ROM (EEPROM), etc.) for storing information and instruction code for controller 214.
  • Control memory 220 may store instructions for controller 214.
  • Control memory 220 may also store a database of elements, such as graphic elements containing content. The database may be stored as a pattern of graphic elements, such as graphic elements containing content, various graphic elements used for generating a displayable user interface for display interface 218, and the like.
  • the memory may store the graphic elements in identified or grouped memory locations and use an access or location table to identify the memory locations for the various portions of information related to the graphic elements. Additional details related to the storage of the graphic elements will be described below.
  • the implementation of the control memory 220 may include several possible embodiments, such as a single memory device, more than one memory circuit communicatively connected or coupled together to form a shared or common memory, etc.
  • the memory may be included with other circuitry, such as portions of bus communications circuitry, in a larger circuit.
  • FIGS. 3 and 4 represent two examples of input devices, 300 and 400, such as input device 116.
  • Input devices 300 and 400 can couple with a user interface, such as user interface 216.
  • Input devices 300 and 400 may be used to initiate and/or select various functions available to a user related to the acquisition, consumption, access and/or modification of content, such as multimedia content, broadcast content, Internet content, etc.
  • Input devices 300 and 400 can also allow a user to input multi-state information and requests for recommendations, as described below in more detail.
  • FIG. 3 illustrates an example of a touch panel input device 300.
  • the touch panel device 300 may be interfaced, for example, via the user interface 216 of the receiving device 200 in FIG. 2.
  • the touch panel device 300 allows operation of the receiving device or set top box based on hand movements, or gestures, and actions translated through the panel into commands for the set top box or other control device. This is achieved by the controller 214 generating a touch screen user interface including at least one user selectable image element enabling initiation of at least one operational command.
  • the touch screen user interface may be pushed to the touch screen device 300 via the user interface 216.
  • the touch screen user interface generated by the controller 214 may be accessible via a webserver executing on one of the user interface 216.
  • the touch panel 300 may serve as a navigational tool to navigate a grid display, as described above for search results.
  • the touch panel 300 may serve as a display device allowing the user to more directly interact with the navigation through the display of content.
  • the touch panel 300 can also include a camera element and/or at least one audio sensing element.
  • the touch panel 300 employs a gesture sensing controller or touch screen enabling a number of different types of user interaction.
  • the inputs from the controller are used to define gestures and the gestures, in turn, define specific contextual commands.
  • the configuration of the sensors may permit defining movement of a user's fingers on a touch screen or may even permit defining the movement of the controller itself in either one dimension or two dimensions.
  • Two-dimensional motion such as a diagonal, and a combination of yaw, pitch and roll can be used to define any three-dimensional motions, such as a swing.
  • Gestures are interpreted in context and are identified by defined movements made by the user. Depending on the complexity of the sensor system, only simple one-dimensional motions or gestures may be allowed.
  • a simple right or left movement on the sensor as shown here may produce a fast forward or rewind function.
  • multiple sensors could be included and placed at different locations on the touch screen. For instance, a horizontal sensor for left and right movement may be placed in one spot and used for volume up/down, while a vertical sensor for up and down movement may be placed in a different spot and used for channel up/down. In this way specific gesture mappings may be used.
  • the touch screen device 300 may recognize alphanumeric input traces, which may be automatically converted into alphanumeric text displayable on the touch screen device 300 or output via display interface 218 to a primary display device.
  • FIG. 4 illustrates another example of an input device, input device 400.
  • the input device 400 may, for example, be used to interact with the user interfaces generated by the system, which are output for display by the display interface 218 to a primary display device (e.g., television, monitor, etc.).
  • the input device of FIG. 4 may be formed as a remote control having a 12-button alphanumerical keypad 402 and a navigation section 404 including directional navigation buttons and a selector button.
  • the input device 400 may also include a set of function buttons 406 that, when selected, initiate a particular system function (e.g., menu, guide, DVR, etc.).
  • the input device 400 may include a set of programmable application specific buttons 408 that, when selected, may initiate a particularly defined function associated with a particular application executed by the controller 214.
  • Input device 400 may include a display screen 410 that can display information, such as program information, menu information, navigation information, etc.
  • the depiction of the input device in FIG. 4 is merely an example, and it should be appreciated that various input devices may include any number and/or arrangement of buttons that enable a user to interact with the user interface process according to various embodiments. Additionally, it should be noted that users may use either or both of the input devices depicted and described in FIGS. 3 and 4 simultaneously and/or sequentially to interact with the system. Other input devices are considered within the scope of the present disclosure.
  • the user input device may include at least one of an audio sensor and a visual sensor.
  • the audio sensor may sense audible commands issued from a user and translate the audible commands into functions to be executed by the system.
  • the visual sensor may sense the user's presence and match user information of the sensed user(s) to stored visual data in the usage database 120 in FIG. 1. Matching visual data sensed by the visual sensor enables the system to automatically recognize the user's presence and retrieve any user profile information associated with the user.
  • Visual data may also be used by the recommendation system to determine a current state of the user. For example, the recommendation system may analyze the visual data to determine a facial expression of the user, which may allow the determination of the mood the user is in.
  • the visual sensor may sense physical movements of at least one user present and translate those movements into control commands for controlling the operation of the system.
  • the system may have a set of pre-stored command gestures that, if sensed, enable the controller 214 to execute a particular feature or function of the system.
  • An example of a type of gesture command may include the user waving their hand in a rightward direction which may initiate a fast forward command or a next screen command or a leftward direction which may initiate a rewind or previous screen command depending on the current context.
  • This description of physical gestures able to be recognized by the system is merely exemplary and should not be taken as limiting. Rather, this description is intended to illustrate the general concept of physical gesture control that may be recognized by the system and persons skilled in the art could readily understand that the controller may be programmed to specifically recognize any physical gesture and allow that gesture to be tied to at least one executable function of the system.
  • FIG. 5 is a flowchart illustrating an example of a method of providing multi-state recommendations according to various embodiments.
  • the method of FIG. 5 is directed to recommending a playlist based on two states.
  • the method may be performed by a recommendation system such as recommendation system 117 shown in FIG. 1.
  • the recommendation system can obtain (501) a first state and obtain (502) a second state.
  • the recommendation system can provide a user with the ability to select a multi-state recommendation through a graphical user interface (GUI) displayed, for example, on display device 114 of user system 107.
  • the multi-state recommendation GUI can allow the user to select from among different types of multi-state recommendations for the recommendation system to provide.
  • one type of multi-state recommendation can be a state-inducing recommendation.
  • Another type of multi-state recommendation can be a state-balancing recommendation.
  • a state-inducing recommendation can be used, for example, when a user is in a particular state (e.g., sad) and desires to be in another state (e.g., happy).
  • a state-inducing recommendation can also be used when a group of users are currently in the same state and a different state is desired. For example, to prepare a group of excited and energetic children for bedtime, the parents may request a playlist that will help the excited children calm down.
  • the recommendation system can provide the user with a way to inform the recommendation system of the current state and the desired state.
  • the recommendation system can provide the user a list of states (e.g., sad, happy, calm, excited, etc.) from which the user may select a state that identifies the current state and a state that identifies the desired state.
  • the recommendation system may identify one or more of the current state and the desired state without the need for the user to explicitly identify the state. For example, if the recommendation system is granted access to personal information of the user, such as real-time video capture of the user's face, the user's biometric information, etc., the recommendation system may analyze the user's expressions, heart rate, etc., and estimate the current state of the user is excited and energetic. If the user requests a playlist that will end near the user's bedtime, the recommendation system may determine that the desired state is calm and relaxed.
  • a state-balancing recommendation can be used, for example, to provide a playlist for multiple users when the users desire different states. For example, a husband may desire a mindless movie, while a wife may desire a thought-provoking movie.
  • the recommendation system can provide the users with a way to inform the recommendation system of the desired states.
  • the recommendation system can determine one or more of the states without the need for the users to explicitly identify the states. For example, if the recommendation system determines the identities of the users and has access to personal information, such as user preferences, viewing history, etc., the recommendation system may estimate the desired states. The recommendation system can then generate (503) a playlist of two or more content items. The recommendation system can generate the playlist based on certain criteria to help match the content to the first and second states. For example, the recommendation can analyze a database of content items to find a content item that begins at or near the first state. For example, if the first state is a happy state, the recommendation system can select a content item that begins in a happy state.
  • the recommendation system can select the first content item, in part, by determining that the first state and a beginning state of the first content item satisfy a closeness criteria.
  • the closeness criteria can include, for example, that the states are within a predetermined threshold range of each other.
  • The recommendation system can select a last content item, in part, by determining that the second state and an ending state of the last content item satisfy the closeness criteria. In order to help ensure smoother transitions from one content item to the next, the recommendation system can further require that each transition from the ending state of one content item to the beginning state of the next content item satisfies the closeness criteria.
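  • As a minimal sketch of the checks just described, the snippet below assumes each content item has been reduced to a (beginning, ending) pair of happiness values; the helper name is hypothetical, and `closeness` can be any pairwise test such as the `satisfies_closeness` sketch above.

```python
def playlist_satisfies_criteria(first_state, second_state, playlist, closeness):
    """Check a candidate playlist against the closeness criteria described above.

    playlist: ordered list of (beginning_level, ending_level) pairs, one per
    content item; first_state and second_state are happiness values.
    """
    if not playlist:
        return False
    # The first item's beginning state must be close to the first state.
    if not closeness(first_state, playlist[0][0]):
        return False
    # The last item's ending state must be close to the second state.
    if not closeness(second_state, playlist[-1][1]):
        return False
    # Each transition: the ending state of one item must be close to the
    # beginning state of the next item.
    return all(
        closeness(end_a, begin_b)
        for (_, end_a), (begin_b, _) in zip(playlist, playlist[1:])
    )
```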
  • FIGS. 6-10 illustrate example methods of determining content items with beginning and ending states that satisfy closeness criteria according to various embodiments.
  • a recommendation system can determine beginning states and ending states of content items based on tags associated with each content item.
  • FIG. 6 illustrates a timeline 601 of a Movie X, which can include tags 603 (Tags A-L) at various time points along the timeline.
  • Tags can include information that characterizes, describes, etc., the movie at the time point.
  • Because tags can describe the action that is happening in a movie, for example, "car chase," "family eating breakfast," "rocket blasting off," etc., tags can characterize a movie at that time point and can be used to determine information about the state of the movie.
  • tags can be generated manually. For example, a person can watch a movie, write descriptions of the movie at various points, and store the descriptions as tags. In some embodiments, tags can be generated automatically. For example, the frames of a movie can be analyzed by a software program to determine various characteristics, such as brightness, motion, audio volume levels, etc., and the program can use the resulting information to create tags. The tags can provide information about a state of the content item at a particular time. FIG. 7 illustrates an example in which each tag 603 of Movie X has been analyzed to determine a level of happiness associated with the tag.
  • Graph 701 shows each tag 603 is associated with a black dot, such as the dot labeled 705, representing a level of happiness on a scale of 0.0 to 1.0.
  • Each value or range of values on the scale can correspond to a particular state.
  • the scale of 0.0 to 1.0 can be divided into ranges, and the ranges can correspond to a state 707 on the happiness scale.
  • tags 603 can have an associated level 705 on the happiness scale, which can correspond to a particular state 707.
  • Tag A may indicate that a wedding scene is happening in the movie, which may be associated with a high level of happiness that falls into the Very Happy state.
  • the tags themselves may include an explicit state, for example, that was assigned when the tag was created.
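  • The sketch below maps a tag's happiness level to a named state by dividing the 0.0 to 1.0 scale into ranges, as described above. The state names follow FIGS. 7-10, but the range boundaries are assumptions made for illustration; the disclosure only says the scale can be divided into ranges corresponding to states.

```python
# Assumed example ranges; only the state names come from the figures.
HAPPINESS_STATES = [
    (0.2, "Very Sad"),
    (0.4, "Sad"),
    (0.6, "Not Happy or Sad"),
    (0.8, "Happy"),
    (1.0, "Very Happy"),
]

def state_for_level(level):
    """Map a tag's happiness level (0.0 to 1.0) to a named state."""
    if not 0.0 <= level <= 1.0:
        raise ValueError("level must be between 0.0 and 1.0")
    for upper_bound, name in HAPPINESS_STATES:
        if level < upper_bound:
            return name
    return HAPPINESS_STATES[-1][1]  # a level of exactly 1.0 falls in the top range

# Example: a "wedding scene, tears of joy" tag scored at 0.9 maps to "Very Happy".
print(state_for_level(0.9))
```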
  • FIG. 7 shows an example of a movie that is happy throughout its entire timeline, because all of the happiness levels corresponding to the tags in Movie X correspond to the Happy state or the Very Happy state.
  • FIG. 8 shows an example of a movie, i.e., Movie Y, that is sad throughout its entire timeline.
  • Graph 801 shows all of tags 803 of Movie Y correspond to happiness levels 805 that correspond to the Sad state or the Very Sad state.
  • tags can be associated with different states along other spectrums, such as spectrums corresponding to levels of action (e.g., fast-paced vs. slow-paced), energy (e.g., high energy vs. calm), suspense (e.g., high suspense vs. low suspense), thought-provoking (e.g., intellectual vs. mindless), resolution (e.g., many unresolved issues in plot development, character development, unanswered questions, etc. vs. all issues resolved, questions answered, loose ends tied up, etc.), atmosphere (e.g., light vs. dark, open vs. stuffy, etc.), familiarity (e.g., exotic vs. familiar), etc.
  • a state can be associated with a combination of spectrums.
  • the state of "Ready for Bed” may be associated with a combination of slow-paced state (e.g., the action spectrum), calm state (e.g., the energy spectrum), and mindless state (e.g., the thought-provoking spectrum).
  • the state of "Ready for Interesting Conversation” may be associated with a combination of intellectual state (e.g., the thought-provoking spectrum), unanswered questions state (e.g., the resolution spectrum), and many twists state (e.g., intricacy spectrum).
  • recommendation systems may determine if a user or users desires a state-inducing recommendation, e.g., the first state is a current state and the second state is the desired state.
  • the recommendation system can generate a playlist to help the user or users move from the current state to the desired state.
  • FIG. 9 illustrates an example representing a generated playlist 900 that includes two movies, Movie Q and Movie R.
  • FIG. 9 shows a graph 901 showing the timeline of Movie Q including tags 903 with associated levels of happiness, and shows a graph 905 showing the timeline of Movie R including tags 907 with associated levels of happiness.
  • a first state 909 obtained by the recommendation system is shown on graph 901.
  • First state 909 corresponds to the Sad state.
  • the recommendation system may have provided the user with the question "How do you feel?", and the user may have responded by selecting the Sad state.
  • a second state 911 obtained by the recommendation system is shown on graph 905.
  • Second state 911 corresponds to the Very Happy state, which may have been obtained likewise by a similar question to the user, e.g., "How would you like to feel?"
  • the recommendation system may have asked the user how soon the user would like to reach the Very Happy state.
  • the user can respond by indicating, for example, a desired amount of time, a desired number of movies, etc. In this example, the user indicated she desired to reach the Very Happy state by the end of two movies.
  • the recommendation system can analyze the tags of movies in a movie database, for example, the movie database of an online movie provider such as Netflix®, M-Go®, etc.
  • the analysis can include determining a beginning state and an ending state on the happiness spectrum for each movie.
  • the recommendation system can, for example, determine a first subset of movies in the movie database that have a beginning state that satisfies a closeness criteria with first state 909.
  • the closeness criteria can include, for example, that the beginning state of the first movie in the playlist is within a predetermined threshold of first state 909.
  • the closeness criteria can require that all of the tags in the first 15 minutes of the first movie are associated with the first state, e.g., the Sad state in this example.
  • the closeness criteria may include an average happiness value of the tags in a particular range of time, a running average of happiness values, a threshold percentage of tags being associated with a particular state, etc.
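  • The snippet below sketches two of the variants listed above for judging a movie's beginning state from its tags: requiring every tag in the opening window to map to the target state, or averaging the happiness values in that window. The 15-minute window follows the example in the text; the (time, level) data layout and the `state_for_level` helper are the assumptions used in the earlier sketches.

```python
def tags_in_window(tags, start_min, end_min):
    """tags: list of (time_in_minutes, happiness_level) pairs for one movie."""
    return [level for t, level in tags if start_min <= t < end_min]

def beginning_matches_state(tags, target_state, window_min=15):
    """All tags in the first window_min minutes must map to the target state."""
    window = tags_in_window(tags, 0, window_min)
    return bool(window) and all(state_for_level(lvl) == target_state for lvl in window)

def beginning_level(tags, window_min=15):
    """Alternative: the average happiness value of the tags in the opening window."""
    window = tags_in_window(tags, 0, window_min)
    return sum(window) / len(window) if window else None
```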
  • the recommendation system can determine a second subset of movies in the movie database that have an ending state that satisfies the closeness criteria with second state 911.
  • the closeness criteria can include, for example, that the ending state of the last movie in the playlist is within a predetermined threshold of second state 911.
  • the closeness criteria can require that all of the tags in the last 15 minutes of the last movie are associated with the second state, e.g., the Very Happy state in this example.
  • the recommendation system can then determine which movies satisfy the closeness criteria for transitions within the playlist.
  • the closeness criteria may require that each transition between the ending state of one movie in the playlist and the beginning state of the next movie in the playlist is within a predetermined threshold.
  • the recommendation system can compare the ending states of the movies in the first subset with the beginning states of the movies in the second subset to determine which pairs of movies satisfy the closeness criteria for transitions, and these pairs of movies can completely satisfy all of the closeness criteria. If there are more than one pair of movies that satisfy all of the closeness criteria, the recommendation system can apply further criteria to narrow down, e.g., further refine the results, to determine which playlist of two movies to recommend. For example, the recommendation system could determine which playlist of two movies best matches the first and second states and has a smooth transition by iteratively applying increasingly strict closeness criteria until a single playlist of two movies remains.
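  • Putting the subset steps together, a two-movie playlist search might look like the sketch below. Each movie is assumed to have been reduced by tag analysis to a (beginning, ending) pair of happiness values, and `closeness` is any pairwise test such as the earlier `satisfies_closeness` sketch; the structure is illustrative, not the disclosure's own implementation.

```python
def candidate_pairs(movies, first_state, second_state, closeness):
    """movies: dict mapping title -> (beginning_level, ending_level)."""
    # First subset: movies whose beginning state is close to the first state (e.g., Sad).
    first_subset = {title: levels for title, levels in movies.items()
                    if closeness(first_state, levels[0])}
    # Second subset: movies whose ending state is close to the second state (e.g., Very Happy).
    second_subset = {title: levels for title, levels in movies.items()
                     if closeness(second_state, levels[1])}
    # Keep pairs whose transition (ending of the first, beginning of the second)
    # also satisfies the closeness criteria.
    return [
        (first, second)
        for first, (_, end_a) in first_subset.items()
        for second, (begin_b, _) in second_subset.items()
        if first != second and closeness(end_a, begin_b)
    ]
```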
  • the recommendation system can analyze the beginning and ending states of movies in the remaining portion of the movie database, i.e., movies that are not in the first or second subsets, to determine combinations of movies that can be included in the playlist and satisfy the closeness criteria for transitions.
  • the recommendation system can also analyze the states in other portions of the movies in the movie database. For example, for each movie the recommendation system can determine an average state for every 10 minute increment in the movie's timeline, e.g., fine-grain data. The recommendation system can use this fine-grain data to aid in the selection of movies for the playlist.
  • the analysis of fine-grain data can depend, for example, on whether the user requested a state-inducing playlist or a state-balancing playlist.
  • playlist 900 is an example of a playlist generated in response to a request for a state-inducing playlist.
  • the recommendation system can apply additional criteria regarding the fine-grain data.
  • the selection of movies can be further based on an analysis of the fine-grain data to determine which combination of movies progresses smoothly from Sad to Very Happy. For example, a curve-fit analysis of the happiness levels associated with the tags may be performed to determine which combination of movies results in the smoothest curve.
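  • One way to realize the fine-grain analysis described above is to average the happiness levels in each 10-minute increment of the combined playlist timeline and measure how far those averages deviate from a straight line running from the first state's value to the second state's value; the smallest deviation marks the smoothest progression. The linear target and the mean-squared measure are assumptions, since the disclosure only calls for a curve-fit analysis.

```python
def smoothness_score(playlist_tags, first_level, second_level, increment_min=10):
    """Mean squared deviation of 10-minute averages from a straight Sad-to-Very-Happy line.

    playlist_tags: (time_in_minutes, happiness_level) pairs over the combined
    playlist timeline. Lower scores indicate a smoother progression.
    """
    if not playlist_tags:
        return float("inf")
    total_min = max(t for t, _ in playlist_tags) or 1.0
    buckets = {}
    for t, level in playlist_tags:
        buckets.setdefault(int(t // increment_min), []).append(level)
    deviations = []
    for index, levels in buckets.items():
        midpoint = (index + 0.5) * increment_min
        expected = first_level + (second_level - first_level) * min(midpoint / total_min, 1.0)
        observed = sum(levels) / len(levels)
        deviations.append((observed - expected) ** 2)
    return sum(deviations) / len(deviations)
```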
  • FIG. 10 illustrates an example representing a playlist 1000 generated based on a request for a state-balancing playlist.
  • FIG. 10 includes a graph 1001 showing the timeline of a Movie S including tags 1003 with associated levels of happiness, and a graph 1005 showing the timeline of a Movie T including tags 1007 with associated levels of happiness.
  • a first state 1009 and a second state 1011 obtained by the recommendation system are shown on graphs 1001 and 1005.
  • First state 1009 corresponds to the Sad state
  • second state 1011 corresponds to the Very Happy state.
  • the recommendation system may have received an indication that two users desire a playlist of two movies.
  • the recommendation system may have provided the question "What kind of movie does the first user want to watch?"
  • the first user may have responded by selecting the Sad state.
  • the recommendation system may have next provided the question "What kind of movie does the second user want to watch?”
  • the second user may have responded by selecting the Very Happy state.
  • the first and second states do not correspond to starting and ending states, as in a state-inducing playlist.
  • graphs 1001 and 1005 illustrate that first state 1009 and second state 1011 each span the entire playlist timeline.
  • the recommendation system can analyze the movie database to match the beginning and ending of the playlist and the transitions within the playlist according to closeness criteria, similarly to the example of FIG. 9.
  • the beginning of a state-balancing playlist may match either one of the first or second states, so long as the ending of the playlist matches the other one of the first or second states.
  • the closeness criteria are the same regardless of whether the first user's desired state is labeled the "first state” and the second user's desired state is labeled the "second state", or vice versa.
  • fine-grain data may be used differently in state-balancing playlist generation than in generation of state-inducing playlists. For example, in state-balancing it may be desirable that the movies stay at or near one of the desired states for as much time as possible, which may give the first and second users a better experience.
  • the recommendation system can analyze the fine-grain data to determine which combination of movies maximizes the amount of time spent at or near either the first state or the second state. For playlists that include more than two movies, the recommendation system may additionally determine which combination of movies best equalizes the time spent at or near the first and second states.
  • As illustrated in FIG. 10, Movie S begins in the Sad state desired by the first user and remains in the Sad state through most of the movie timeline. Toward the end of Movie S, the state becomes happier until the state is Not Happy or Sad at the end. Movie T begins in a state of Not Happy or Sad, and then jumps into the Very Happy state where it remains for the remainder of the movie. In this way, for example, the first and second users can each watch a movie that remains in their desired state for the majority of the timeline, and the playlist can offer a smooth transition from Sad to Very Happy, which may help prepare both the first and second users to go to the Very Happy state of Movie T from the Sad state of Movie S.
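  • For state-balancing, a candidate playlist can be scored by how much of its running time sits at or near either desired state, as described above; the combination with the highest score could then be recommended. The sketch approximates "time spent" by counting tags, and the near-state threshold of 0.1 is an assumed example value.

```python
def balance_score(playlist_tags, first_level, second_level, threshold=0.1):
    """Fraction of a playlist's tags at or near either user's desired state.

    playlist_tags: (time_in_minutes, happiness_level) pairs over the playlist;
    first_level and second_level are the desired states on the 0.0 to 1.0 scale.
    """
    if not playlist_tags:
        return 0.0
    near = sum(
        1 for _, level in playlist_tags
        if abs(level - first_level) <= threshold or abs(level - second_level) <= threshold
    )
    return near / len(playlist_tags)
```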
  • the recommendation system can receive more than two states. For example, a group of three friends may request a playlist of three movies, and the friends may desire three different states. As one skilled in the art would readily understand, the same principles may be applied to generate state-balanced playlists for more than two states.
  • the same closeness criteria may be used for the beginning, each transition, and the end of the playlist.
  • different closeness criteria can be used.
  • the recommendation system may use stricter closeness criteria for the beginning and ending of the playlist than the closeness criteria used for transitions.
  • each closeness criteria may be part of a larger, combined closeness criteria.
  • the closeness criteria may be that an average of differences of closeness (e.g., an average of the difference between the first state and the beginning of the first content item, the difference between the ending/beginning states at each transition, and the difference between the second state and the ending state of the last content item) may not exceed a predetermined threshold. This case may allow, for example, some transitions within the playlist to be less close if the beginning and ending states of the playlist are very close to the first and second states, respectively.
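  • The combined criteria described above can be sketched as a single averaged test: collect the differences at the beginning and end of the playlist and at every transition, and require their average not to exceed a predetermined threshold. The 0.15 used here is only an assumed example value, and the data layout matches the earlier sketches.

```python
def combined_closeness(first_state, second_state, playlist, max_average=0.15):
    """The average of all closeness differences may not exceed a predetermined threshold.

    playlist: ordered (beginning_level, ending_level) pairs, one per content item.
    """
    if not playlist:
        return False
    diffs = [abs(first_state - playlist[0][0]),    # beginning of the playlist
             abs(second_state - playlist[-1][1])]  # ending of the playlist
    diffs += [abs(end_a - begin_b)                 # each transition within the playlist
              for (_, end_a), (begin_b, _) in zip(playlist, playlist[1:])]
    return sum(diffs) / len(diffs) <= max_average
```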
  • the recommendation system can apply further constraints to the selection of content items for the playlist.
  • the recommendation system may provide the user with an interface to narrow down the available choices to content items of a particular genre, artist, language, series, etc.
  • the user may place constraints on the length of the playlist, for example, by specifying a range for total playlist time, by specifying a number of content items in the playlist, etc.
  • the recommendation request may specify that the user is in an excited state and wants to calm down with a playlist of four episodes from a particular TV series.
  • Various embodiments may be implemented in a computing system, such as a general purpose computer, through computer-executable instructions (e.g., software, firmware, etc.) stored on a computer-readable medium (e.g., storage disk, memory, etc.) and executed by a computer processor.
  • software implementing one or more methods shown in the flowcharts could be stored in storage device 212 and executed by controller 214.
  • various elements shown in the figures may be implemented in various forms of hardware, software or combinations thereof. That is, various elements may be implemented in a combination of hardware and software on one or more appropriately programmed general-purpose devices.
  • The terms "processor" or "controller" should not be construed to refer exclusively to hardware capable of executing software, and may implicitly include, without limitation, digital signal processor ("DSP") hardware, read only memory ("ROM") for storing software, random access memory ("RAM"), and nonvolatile storage.
  • any switches shown in the figures are conceptual only. Their function may be carried out through the operation of program logic, through dedicated logic, through the interaction of program control and dedicated logic, or even manually, the particular technique being selectable by the implementer as more specifically understood from the context.
  • any element expressed as a means for performing a specified function is intended to encompass any way of performing that function including, for example, a combination of circuit elements that performs that function, software in any form, including, therefore, firmware, microcode or the like, combined with appropriate circuitry for executing that software to perform the function, etc.
  • the disclosure as defined by such claims resides in the fact that the functionalities provided by the various recited means are combined and brought together in the manner which the claims call for. It is thus regarded that any means that can provide those functionalities are equivalent to those shown herein.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

A recommendation system (200) can determine a first movie that has a beginning state that is close to the first state identified by the user, e.g., a sad state the user is currently in. Recommendation systems can use closeness criteria, such as determining a difference between a happiness value of the first state and a happiness value of the beginning state of the first movie, and requiring the difference to be less than a predetermined threshold value. Likewise, closeness criteria may be used to ensure the ending state of the first movie is close to the beginning state of the second movie, and to ensure the ending state of the second movie is close to the second state, e.g., the happy state desired by the user. In this way, for example, recommendation systems can provide multi-state recommendations.

Description

METHODS AND SYSTEMS FOR MULTI-STATE RECOMMENDATIONS
CROSS REFERENCE
This application claims priority to a U.S. Provisional Application, Serial No. 62/057,896, filed on September 30, 2014, which is herein incorporated by reference in its entirety.
TECHNICAL FIELD
The present disclosure generally relates to recommendation methods and systems and, more particularly, to recommendations directed to multiple states.
BACKGROUND
Home entertainment systems, including television and media centers, are converging with the Internet and providing access to a large amount of available content, such as video, movies, TV programs, music, etc. This expansion in the amount of content available on-demand has been a boon for consumers of content. However, searching through vast catalogues of movies and TV programs, music inventories, video clip databases, etc., can be overwhelming. As a result,
recommendation systems have become more popular. Recommendation systems, for example, may be a feature offered by an online movie provider, may be built into a gateway, set-top box, etc., and may be a function of software applications run on personal computers, smart phones, etc.
A recommendation system may recommend content based on a user request. Users often request recommendations based on a particular state, e.g., state of mind, mood, etc., that the user is in. For example, a happy user may request a
recommendation for a happy or funny movie, and the recommendation system may provide a list of recommended happy or funny movies. Likewise, a user in a sad state may request a recommendation for a dark, sad movie, and the recommendation system may provide a list of recommended dark, sad movies. Similarly, a user in an excited state may request a recommendation for a fast-paced, mindless movie, etc. In these cases, a recommendation system may be able to recommend suitable content that matches the state of the user. However, some users may request recommendations that are not based on the state the user is in, but are based on a desired state, e.g., a state the user wishes to be in. For example, a sad user might request a recommendation for a happy movie because the user wishes to be in a happy state. In this case, a
recommendation system may provide a list of happy movies, and the user may commence to watch the recommended movies hoping to become happier. However, the disconnect between the user's sad state and the happiness of the recommended movies might interfere with the connection the user feels with the movies. This lack of connection may hinder or prevent the user from achieving the desired state, i.e., a happy state. For example, the user may attempt to watch the recommended movies, but may lose interest or become annoyed because the happiness of the
recommended movies seems unreal or contrived in light of the user's current state.
SUMMARY
Examples and details are provided herein of systems and methods for providing recommendations based on multiple states. In this way, for example, it may be possible to reduce or eliminate some of the drawbacks of previous recommendation systems. In various embodiments, a recommendation system can determine a first movie that has a beginning state that is close to the first state identified by the user, e.g., the sad state the user is currently in. In various embodiments, recommendation systems can use a closeness criteria, such as determining a difference between a happiness value of the first state and a happiness value of the beginning state of the first movie, and requiring the difference to be less than a predetermined threshold value. Likewise, a closeness criteria may be used to ensure the ending state of the first movie is close to the beginning state of the second movie, and to ensure the ending state of the second movie is close to the second state, e.g., the happy state desired by the user. In this way, for example,
recommendation systems can provide multi-state recommendations.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 is a block diagram of an example of a system for providing multi-state recommendations and delivering content according to various embodiments. FIG. 2 is a block diagram of an example of a computing system, such as a set-top box/digital video recorder (DVR), gateway, etc., that can include multi-state recommendation functionality according to various embodiments.
FIG. 3 illustrates an example of a touch panel input device according to various embodiments. FIG. 4 illustrates another example of an input device according to various embodiments.
FIG. 5 is a flowchart illustrating an example of a method of providing multi-state recommendations according to various embodiments.
FIG. 6 illustrates a timeline of a movie including tags at various time points along the timeline according to various embodiments.
FIG. 7 illustrates an example in which each tag of a movie has been analyzed to determine a level of happiness associated with the tag, the movie being a happy movie according to various embodiments.
FIG. 8 illustrates an example in which each tag of another movie has been analyzed to determine a level of happiness associated with the tag, the movie being a sad movie according to various embodiments.
FIG. 9 illustrates an example of a playlist generated based on a request for a state-inducing playlist according to various embodiments.
FIG. 10 illustrates an example of a playlist generated based on a request for a state-balancing playlist according to various embodiments.
It should be understood that the drawings are for purposes of illustrating the concepts of the disclosure and are not necessarily the only possible configurations for illustrating the disclosure.
DETAILED DESCRIPTION
Consumers of content, such as movies, television (TV), music, etc., can have difficulty finding content they are likely to enjoy. Consumers may be faced with browsing through massive databases of content, for example, and can become overwhelmed and frustrated. Consumers may wish to obtain recommendations for content items (also referred to herein simply as "items"), e.g., movies, TV shows, songs, etc., from a recommender, such as a recommendation system, a
recommendation service, etc.
Users often request recommendations based on a particular state, e.g., state of mind, mood, etc., that the user is in. However, some users may request recommendations that are not based on the state the user is in, but are based on a desired state, e.g., a state the user wishes to be in. For example, a sad user might request a recommendation for a happy movie because the user wishes to be in a happy state. In this case, a recommendation system may provide a list of happy movies, and the user may commence to watch the recommended movies hoping to become happier. However, the disconnect between the user's sad state and the happiness of the recommended movies might interfere with the connection the user feels with the movies. This lack of connection may hinder or prevent the user from achieving the desired state, i.e., a happy state. For example, the user may attempt to watch the recommended movies, but may lose interest or become annoyed because the happiness of the recommended movies seems unreal or contrived in light of the user's current state.
On the other hand, better results may be possible if the recommendation system takes into account that multiple states exist in this situation, i.e., a current state (the user is sad) and a desired state (the user wants to be happy). For example, the request for a recommendation may identify that the user is in a sad state and desires to move into a happy state. Based on this information, the recommendation system may recommend a first movie that is sad at the beginning and becomes happier toward the end, and a second movie that begins at about the same level of happiness as the end of the first movie and that progresses to an ending that is even happier. In this way, for example, the sad beginning of the first movie may allow the user to become engaged more easily, and the progression through the two movies toward a happier state may be better able to move the user's state toward happy. Another example of multiple state recommendations can include two or more people with different tastes trying to decide what to watch. For example, a husband may want to watch a fast-paced, mindless movie, while a wife may want to watch a slow, thought-provoking movie. In this case, the multiple states can be a desired state of a first person (the husband's desired fast-pace, mindless state) and a desired state of a second person (the wife's desired slow, thought-provoking state). There may be no single movie that can satisfy both desired states. Moreover, merely recommending Movie A that is entirely fast-paced and mindless and Movie B that is entirely slow and thought-provoking may not provide the best results. For example, if the couple watch Movie A first, the wife may find it difficult to sit through the entire movie without becoming annoyed or losing interest in movie watching. Likewise, the husband may become disinterested at the beginning of Movie B because of the extreme slowdown in pace from the end of Movie A and the sudden shock of deeply intellectual content. A better recommendation may be, for example, a first movie that is fast-paced and mindless through most of the movie, but that slows down and becomes more thought-provoking towards the end, and a second movie that begins with some action and then slows down and becomes more thought-provoking. A recommendation system may recommend content based on a user request that identifies multiple states, e.g., states of mind, moods, etc. For example, a user may indicate that she is sad and request a recommendation for a playlist of content items, e.g., movies, that will help her become happy. The recommendation system may generate a playlist including a first movie that is sad at the beginning and becomes happier toward the end, and a second movie that begins at about the same level of happiness as the end of the first movie and that progresses to an ending that is even happier. In this way, for example, the sad beginning of the first movie may allow the user to become engaged more easily, and the progression through the two movies toward a happier state may be better able to move the user's state toward happy.
In various embodiments, recommendation systems can use stored
characterizations of a content item, such as tags at different time points during a movie, to determine the state of the content item at the different time points. For example, a scene in a movie may be tagged with information such as "wedding scene, tears of joy, classical music playing." The information may be stored, for example, in metadata within the movie file, which can be read by the
recommendation system and used to determine a state of the movie at that time point. For example, a recommendation system may determine that the tag "wedding scene, tears of joy, classical music playing" corresponds to a happy state. In this way, for example, a recommendation system may determine the state of a movie at different points in time, such as at the beginning of the movie and at the end of the movie. A recommendation system can, for example, determine a first movie that has a beginning state that is close to the first state identified by the user, e.g., the sad state the user is currently in. In various embodiments, recommendation systems can use a closeness criteria, such as determining a difference between a happiness value of the first state and a happiness value of the beginning state of the first movie, and requiring the difference to be less than a predetermined threshold value. Likewise, a closeness criteria may be used to ensure the ending state of the first movie is close to the beginning state of the second movie, and to ensure the ending state of the second movie is close to the second state, e.g., the happy state desired by the user. In this way, for example, recommendation systems can provide multi-state recommendations.
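By way of illustration only, the closeness test described above might be sketched in Python roughly as follows. The keyword table, the happiness_of_tag function, and the 0.15 threshold are assumptions made for this sketch and are not part of the disclosure; an actual system could derive the values in other ways.

# Hypothetical sketch: estimate a happiness value for a descriptive tag and
# test the closeness criteria as a difference below a predetermined threshold.

# Assumed keyword-to-happiness table; a real system could use richer models.
KEYWORD_HAPPINESS = {
    "wedding": 0.85, "tears of joy": 0.9, "funeral": 0.1,
    "car chase": 0.6, "breakup": 0.2, "reunion": 0.8,
}

def happiness_of_tag(tag_text):
    """Average the happiness of any known keywords found in the tag text."""
    text = tag_text.lower()
    hits = [value for keyword, value in KEYWORD_HAPPINESS.items() if keyword in text]
    return sum(hits) / len(hits) if hits else 0.5  # neutral if nothing matches

def satisfies_closeness(state_value, item_state_value, threshold=0.15):
    """Closeness criteria: difference between the two values below a threshold."""
    return abs(state_value - item_state_value) <= threshold

# Example: a sad user (0.25) compared with a movie whose opening tag is
# "wedding scene, tears of joy, classical music playing" (a happy opening).
user_state = 0.25
opening = happiness_of_tag("wedding scene, tears of joy, classical music playing")
print(opening, satisfies_closeness(user_state, opening))  # 0.875 False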
In various embodiments, multi-state recommendations may be implemented in the recommendation system of an online movie provider. FIGS. 1-4 illustrate an example of an implementation in which multi-state recommendations can be provided by a recommendation system of an online content provider. However, it should be understood that various embodiments can include, for example, stand-alone multi-state recommendation systems built into a gateway, set-top box, etc., and that various embodiments can be implemented in software applications that can be executed on personal computers, smart phones, etc.
FIG. 1 illustrates a block diagram of an example of a system 100 for delivering content and multi-state recommendations to a home or end user. The content can originate from a content source 102, such as a movie studio or production house. The content may be supplied in at least one of two forms. One form may be a broadcast form of content. The broadcast content is provided to the broadcast affiliate manager 104, which is typically a national broadcast service, such as the American Broadcasting Company (ABC), National Broadcasting Company (NBC), Columbia Broadcasting System (CBS), etc. The broadcast affiliate manager may collect and store the content, and may schedule delivery of the content over a delivery network, shown as delivery network 106. Delivery network 106 may include satellite link transmission from a national center to one or more regional or local centers. Delivery network 106 may also include local content delivery using local delivery systems such as over the air broadcast, satellite broadcast, or cable broadcast. The locally delivered content is provided to a user system 107 in a user's home.
User system 107 can include a receiving device 108 that can receive and process content and perform other functions described in more detail below. It is to be appreciated that receiving device 108 can be, for example, a set-top box, a digital video recorder (DVR), a gateway, a modem, etc. Receiving device 108 may act as entry point, or gateway, for a home network system that includes additional devices configured as either client or peer devices in the home network.
User system 107 can also include a display device 1 14. In some
embodiments, display device 1 14 can be an external display coupled to receiving device 108. In some embodiments, receiving device 108 and display device 1 14 can be parts of a single device. The display device 1 14 may be, for example, a conventional 2-D type display, an advanced 3-D display, etc. User system 107 can also include an input device 1 16, such as a remote controller, a keyboard, a mouse, a touch panel, a touch screen, etc. The input device 1 16 may be adapted to provide user control for the receiving device 108 and/or the display device 1 14. In some embodiments, input device 1 16 may be an external device that can couple to receiving device 108 via, for example, a wired connection, a signal transmission system, such as infra-red (IR), radio frequency (RF) communications, etc., and may include standard protocols such as universal serial bus (USB), infra-red data association (IRDA) standard, Wi-Fi, Bluetooth and the like, proprietary protocols, etc. In some embodiments, receiving device 108 and input device 1 16 can be part of the same device. Operations of input device 1 16 will be described in further detail below.
A second form of content is referred to as special content. Special content may include, for example, premium viewing content, pay-per-view content, Internet access, other content otherwise not provided to the broadcast affiliate manager, e.g., movies, video games, other video elements, etc. The special content may be content requested by the user, such as a webpage, a movie download, etc. The special content may be delivered to a content manager 1 10. The content manager 1 10 may be a service provider, such as an Internet website, affiliated, for instance, with a content provider, broadcast service, or delivery network service. The content manager 1 10 may also incorporate Internet content into the delivery system. The content manager 1 10 may deliver the content to the user's receiving device 108 over a communication network, e.g., communication network 1 12. Communication network 1 12 may include high-speed broadband Internet type communications systems. It is important to note that the content from the broadcast affiliate manager 104 may also be delivered using all or parts of communication network 1 12 and content from the content manager 1 10 may be delivered using all or parts of delivery network 106. In some embodiments, the user may obtain content, such as webpages, etc., directly from the Internet 1 13 via communication network 1 12 without necessarily having the content managed by the content manager 1 10.
Several adaptations for utilizing the separately delivered content may be possible. In one possible approach, the special content is provided as an
augmentation to the broadcast content, providing alternative displays, purchase and merchandising options, enhancement material, etc. In another embodiment, the special content may completely replace some programming content provided as broadcast content. Finally, the special content may be completely separate from the broadcast content, and may simply be a media alternative that the user may choose to utilize. For instance, the special content may be a library of movies that are not yet available as broadcast content.
The receiving device 108 may receive different types of content from one or both of delivery network 106 and communication network 1 12. The receiving device 108 processes the content, and provides a separation of the content based on user preferences and commands. The receiving device 108 may also include a storage device, such as a hard drive or optical disk drive, for recording and playing back audio and video content. Further details of the operation of the receiving device 108 and features associated with playing back stored content will be described below in relation to FIG. 2. The processed content is provided to display device 1 14. In the example of FIG. 1 , content manager 1 10 also controls a recommendation system 1 17 that can include a recommendation engine 1 18 and a database 120. Recommendation system 1 17 can process multi-state
recommendation information that can be used to provide recommendations to the user as will be described in more detail below. Although recommendation system 1 17 is controlled by content manager 1 10 in this example, it should be appreciated that in some embodiments, recommendation systems can be operated by other entities, such as separate recommendation service providers whose primary service is providing recommendations.
FIG. 2 includes a block diagram of an example of a computing system, such as a receiving device 200. Receiving device 200 may operate similar to receiving device 108 described in FIG. 1 and may be included as part of a gateway device, modem, set-top box, personal computer, television, tablet computer, smartphone, etc. Receiving device 200 may also be incorporated into other systems including an audio device or a display device. The receiving device 200 may be, for example, a set top box coupled to an external display device (e.g., a television), a personal computer coupled to a display device (e.g., a computer monitor), etc. In some embodiments, the receiving device 200 may include an integrated display device, for example, a portable device such as a tablet computer, a smartphone, etc.
In receiving device 200 shown in FIG. 2, the content is received by an input signal receiver 202. The input signal receiver 202 may include, for example, receiver circuits used for receiving, demodulation, and decoding signals provided over one of the several possible networks including over the air, cable, satellite, Ethernet, fiber and phone line networks. The desired input signal may be obtained based on user input provided through a user interface 216. For example, the user input may include search terms for a search, and the input signal received by input signal receiver 202 may include search results. User interface 216 can be coupled to an input device, such as input device 1 16, and can receive and process corresponding user inputs, for example, keystrokes, button presses, touch inputs, such as gestures, audio input, such as voice input, etc., from the input device. User interface 216 may be adapted to interface to a cellular phone, a tablet, a mouse, a remote controller, etc. The decoded output signal is provided to an input stream processor 204. The input stream processor 204 performs the final signal selection and processing, and includes separation of video content from audio content for the content stream. The audio content is provided to an audio processor 206 for conversion from the received format, such as a compressed digital signal, to an analog waveform signal. The analog waveform signal is provided to an audio interface 208 and further to the display device or audio amplifier. In some embodiments, the audio interface 208 may provide a digital signal to an audio output device or display device using a High- Definition Multimedia Interface (HDMI) cable, an audio interface such as via a
Sony/Philips Digital Interconnect Format (SPDIF), etc. The audio interface may also include amplifiers for driving one or more sets of speakers. The audio processor 206 also performs any necessary conversion for the storage of the audio signals.
The video output from the input stream processor 204 is provided to a video processor 210. The video signal may be one of several formats. The video processor 210 provides, as necessary, a conversion of the video content, based on the input signal format. The video processor 210 also performs any necessary conversion for the storage of the video signals.
A storage device 212 stores audio and video content received at the input. The storage device 212 allows later retrieval and playback of the content under the control of a controller 214 and also based on commands, e.g., navigation instructions such as fast-forward (FF) and rewind (RW), received from user interface 216. The storage device 212 may be, for example, a hard disk drive, one or more large capacity integrated electronic memories, such as static RAM (SRAM), or dynamic RAM (DRAM), an interchangeable optical disk storage system such as a compact disk (CD) drive or digital video disk (DVD) drive, etc. The converted video signal, from the video processor 210, either originating from the input or from the storage device 212, is provided to the display interface 218. The display interface 218 further provides the display signal to a display device, such as display device 1 14, described above. The controller 214 is interconnected via a bus to several of the components of the device 200, including the input stream processor 204, audio processor 206, video processor 210, storage device 212, and user interface 216. The controller 214 manages the conversion process for converting the input stream signal into a signal for storage on the storage device or for display. The controller 214 also manages the retrieval and playback of stored content. Furthermore, as will be described below, the controller 214 can receive multi-state information input by a user, as described below in more detail.
The controller 214 is further coupled to control memory 220 (e.g., volatile or non-volatile memory, including RAM, SRAM, DRAM, ROM, programmable ROM (PROM), flash memory, electronically programmable ROM (EPROM), electronically erasable programmable ROM (EEPROM), etc.) for storing information and instruction code for controller 214. Control memory 220 may store instructions for controller 214. Control memory 220 may also store a database of elements, such as graphic elements containing content. The database may be stored as a pattern of graphic elements, such as graphic elements containing content, various graphic elements used for generating a displayable user interface for display interface 218, and the like. In some embodiments, the memory may store the graphic elements in identified or grouped memory locations and use an access or location table to identify the memory locations for the various portions of information related to the graphic elements. Additional details related to the storage of the graphic elements will be described below. Further, the implementation of the control memory 220 may include several possible embodiments, such as a single memory device, more than one memory circuit communicatively connected or coupled together to form a shared or common memory, etc. Still further, the memory may be included with other circuitry, such as portions of bus communications circuitry, in a larger circuit.
FIGS. 3 and 4 represent two examples of input devices, 300 and 400, such as input device 1 16. Input devices 300 and 400 can couple with a user interface, such as user interface 216. Input devices 300 and 400 may be used to initiate and/or select various functions available to a user related to the acquisition, consumption, access and/or modification of content, such as multimedia content, broadcast content, Internet content, etc. Input devices 300 and 400 can also allow a user to input multi-state information and requests for recommendations, as described below in more detail. FIG. 3 illustrates an example of a touch panel input device 300. The touch panel device 300 may be interfaced, for example, via the user interface 216 of the receiving device 200 in FIG. 2. The touch panel device 300 allows operation of the receiving device or set top box based on hand movements, or gestures, and actions translated through the panel into commands for the set top box or other control device. This is achieved by the controller 214 generating a touch screen user interface including at least one user selectable image element enabling initiation of at least one operational command. The touch screen user interface may be pushed to the touch screen device 300 via the user interface 216. In some embodiments, the touch screen user interface generated by the controller 214 may be accessible via a webserver executing on one of the user interface 216. The touch panel 300 may serve as a navigational tool to navigate a grid display, as described above for search results. In some embodiments, the touch panel 300 may serve as a display device allowing the user to more directly interact with the navigation through the display of content. The touch panel 300 can also include a camera element and/or at least one audio sensing element.
In some embodiments, the touch panel 300 employs a gesture sensing controller or touch screen enabling a number of different types of user interaction. The inputs from the controller are used to define gestures and the gestures, in turn, define specific contextual commands. The configuration of the sensors may permit defining movement of a user's fingers on a touch screen or may even permit defining the movement of the controller itself in either one dimension or two dimensions. Two- dimensional motion, such as a diagonal, and a combination of yaw, pitch and roll can be used to define any three-dimensional motions, such as a swing. Gestures are interpreted in context and are identified by defined movements made by the user. Depending on the complexity of the sensor system, only simple one-dimensional motions or gestures may be allowed. For instance, a simple right or left movement on the sensor as shown here may produce a fast forward or rewind function. In addition, multiple sensors could be included and placed at different locations on the touch screen. For instance, a horizontal sensor for left and right movement may be placed in one spot and used for volume up/down, while a vertical sensor for up and down movement may be placed in a different spot and used for channel up/down. In this way specific gesture mappings may be used. For example, the touch screen device 300 may recognize alphanumeric input traces which may be automatically converted into alphanumeric text displayable on one of the touch screen device 300 or output via display interface 218 to a primary display device.
FIG. 4 illustrates another example of an input device, input device 400. The input device 400 may, for example, be used to interact with the user interfaces generated by the system and which are output for display by the display interface 218 to a primary display device (e.g. television, monitor, etc). The input device of FIG. 4 may be formed as a remote control having a 12-button alphanumerical keypad 402 and a navigation section 404 including directional navigation buttons and a selector button. The input device 400 may also include a set of function buttons 406 that, when selected, initiate a particular system function (e.g. menu, guide, DVR, etc). In some embodiments, the input device 400 may include a set of programmable application specific buttons 408 that, when selected, may initiate a particularly defined function associated with a particular application executed by the controller 214. Input device 400 may include a display screen 410 that can display information, such as program information, menu information, navigation information, etc. The depiction of the input device in FIG. 4 is merely an example, and it should be appreciated that various input devices may include any number and/or arrangement of buttons that enable a user to interact with the user interface process according to various embodiments. Additionally, it should be noted that users may use either or both of the input devices depicted and described in FIGS. 3 and 4 simultaneously and/or sequentially to interact with the system. Other input devices are considered within the scope of the present disclosure.
In some embodiments, the user input device may include at least one of an audio sensor and a visual sensor. For example, the audio sensor may sense audible commands issued from a user and translate the audible commands into functions to be executed by the user. The visual sensor may sense the user's presence and match user information of the sensed user(s) to stored visual data in the usage database 120 in FIG. 1 . Matching visual data sensed by the visual sensor enables the system to automatically recognize the user's presence and retrieve any user profile information associated with the user. Visual data may also be used by the recommendation system to determine a current state of the user. For example, the recommendation system may analyze the visual data to determine a facial expression of the user, which may allow the determination of the mood the user is in. Additionally, the visual sensor may sense physical movements of at least one user present and translate those movements into control commands for controlling the operation of the system. In this embodiment, the system may have a set of pre-stored command gestures that, if sensed, enable the controller 214 to execute a particular feature or function of the system. An example of a type of gesture command may include the user waving their hand in a rightward direction which may initiate a fast forward command or a next screen command or a leftward direction which may initiate a rewind or previous screen command depending on the current context. This description of physical gestures able to be recognized by the system is merely exemplary and should not be taken as limiting. Rather, this description is intended to illustrate the general concept of physical gesture control that may be recognized by the system and persons skilled in the art could readily understand that the controller may be programmed to specifically recognize any physical gesture and allow that gesture to be tied to at least one executable function of the system.
FIG. 5 is a flowchart illustrating an example of a method of providing multi-state recommendations according to various embodiments. In particular, the method of FIG. 5 is directed to recommending a playlist based on two states. The method may be performed by a recommendation system such as recommendation system 117 shown in FIG. 1. The recommendation system can obtain (501) a first state and obtain (502) a second state. For example, in various embodiments the recommendation system can provide a user with the ability to select a multi-state recommendation through a graphical user interface (GUI) displayed, for example, on display device 114 of user system 107. The multi-state recommendation GUI can allow the user to select from among different types of multi-state recommendations for the recommendation system to provide. For example, one type of multi-state recommendation can be a state-inducing recommendation. Another type of multi-state recommendation can be a state-balancing recommendation.
A state-inducing recommendation can be used, for example, when a user is in a particular state (e.g., sad) and desires to be in another state (e.g., happy). A state- inducing recommendation can also be used when a group of users are currently in the same state and a different state is desired. For example, to prepare a group of excited and energetic children for bedtime, the parents may request a playlist that will help the excited children calm down. In some embodiments, if a state-inducing recommendation is requested, the recommendation system can provide the user with a way to inform the recommendation system of the current state and the desired state. For example, the recommendation system can provide the user a list of states (e.g., sad, happy, calm, excited, etc.) from which the user may select a state that identifies the current state and a state that identifies the desired state. In some embodiments, the recommendation system may identify one or more of the current state and the desired state without the need for the user to explicitly identify the state. For example, if the recommendation system is granted access to personal information of the user, such as real-time video capture of the user's face, the user's biometric information, etc., the recommendation system may analyze the user's expressions, heart rate, etc., and estimate the current state of the user is excited and energetic. If the user requests a playlist that will end near the user's bedtime, the recommendation system may determine that the desired state is calm and relaxed.
A state-balancing recommendation can be used, for example, to provide a playlist for multiple users when the users desire different states. For example, a husband may desire a mindless movie, while a wife may desire a thought-provoking movie. In some embodiments, if a state-balancing recommendation is requested, the recommendation system can provide the users with a way to inform the
recommendation system of the different desired states, for example, through a selectable list of states. In some embodiments, the recommendation system can determine one or more of the states without the need for the users to explicitly identify the states. For example, if the recommendation system determines the identities of the users and has access to personal information, such as user preferences, viewing history, etc., the recommendation system may estimate the desired states. The recommendation system can then generate (503) a playlist of two or more content items. The recommendation system can generate the playlist based on certain criteria to help match the content to the first and second states. For example, the recommendation system can analyze a database of content items to find a content item that begins at or near the first state. For example, if the first state is a happy state, the recommendation system can select a content item that begins in a happy state. In other words, the recommendation system can select the first content item, in part, by determining that the first state and a beginning state of the first content item satisfy a closeness criteria. The closeness criteria can include, for example, that the states are within a predetermined threshold range of each other. Likewise, the
recommendation system can select a last content item, in part, by determining that the second state and an ending state of the last content item satisfy the closeness criteria. In order to help ensure smoother transitions from one content item to the next, the recommendation system can further require that each transition from the ending state of one content item to the beginning state of the next content item satisfy the closeness criteria.
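A minimal sketch of the two-item case of this selection logic, under the assumption that each content item carries precomputed beginning_state and ending_state values on a 0.0 to 1.0 scale, might look like the following. The field names, the sample catalog, and the 0.15 threshold are illustrative only.

# Hypothetical sketch of generating a two-item playlist (FIG. 5, steps 501-503).
def generate_two_item_playlists(items, first_state, second_state, threshold=0.15):
    """Return (first_item, last_item) pairs that satisfy the closeness criteria
    at the beginning, at the ending, and at the single transition."""
    def close(a, b):
        return abs(a - b) <= threshold

    # First subset: items whose beginning state is close to the first state.
    first_subset = [i for i in items if close(first_state, i["beginning_state"])]
    # Second subset: items whose ending state is close to the second state.
    second_subset = [i for i in items if close(second_state, i["ending_state"])]

    playlists = []
    for a in first_subset:
        for b in second_subset:
            if a is b:
                continue
            # Transition criteria: ending of one item close to beginning of the next.
            if close(a["ending_state"], b["beginning_state"]):
                playlists.append((a["title"], b["title"]))
    return playlists

catalog = [
    {"title": "Movie Q", "beginning_state": 0.25, "ending_state": 0.55},
    {"title": "Movie R", "beginning_state": 0.60, "ending_state": 0.90},
    {"title": "Movie X", "beginning_state": 0.80, "ending_state": 0.85},
]
print(generate_two_item_playlists(catalog, first_state=0.25, second_state=0.90))
# [('Movie Q', 'Movie R')]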
FIGS. 6-10 illustrate example methods of determining content items with beginning and ending states that satisfy closeness criteria according to various embodiments. In these examples, a recommendation system can determine beginning states and ending states of content items based on tags associated with each content item. FIG. 6 illustrates a timeline 601 of a Movie X, which can include tags 603 (Tags A-L) at various time points along the timeline. Tags can include information that characterizes, describes, etc., the movie at the time point. In various embodiments, tags can describe the action that is happening in a movie, for example, "car chase," "family eating breakfast," "rocket blasting off," etc. Such tags can characterize a movie at that time point and can be used to determine information about the state of a movie. In some embodiments, tags can be generated manually. For example, a person can watch a movie, write descriptions of the movie at various points, and store the descriptions as tags. In some embodiments, tags can be generated automatically. For example, the frames of a movie can be analyzed by a software program to determine various characteristics, such as brightness, motion, audio volume levels, etc., and the program can use the resulting information to create tags. The tags can provide information about a state of the content item at a particular time.

FIG. 7 illustrates an example in which each tag 603 of Movie X has been analyzed to determine a level of happiness associated with the tag. Graph 701 shows that each tag 603 is associated with a black dot, such as the dot labeled 705, representing a level of happiness on a scale of 0.0 to 1.0. Each value or range of values on the scale can correspond to a particular state. For example, the scale of 0.0 to 1.0 can be divided into ranges, and the ranges can correspond to a state 707 on the happiness scale. The ranges of values corresponding to states 707 could be, for example, as follows: 0.0-0.1 - "Extremely Sad"; 0.1-0.2 - "Very Sad"; 0.2-0.3 - "Sad"; 0.3-0.4 - "A Little Sad"; 0.4-0.6 - "Not Happy or Sad"; 0.6-0.7 - "A Little Happy"; 0.7-0.8 - "Happy"; 0.8-0.9 - "Very Happy"; and 0.9-1.0 - "Extremely Happy." Each of the tags 603 can have an associated level 705 on the happiness scale, which can correspond to a particular state 707. For example, Tag A may indicate that a wedding scene is happening in the movie, which may be associated with a high level of happiness that falls into the Very Happy state. Various methods can be used to associate a tag with a corresponding state. For example, tags that include
descriptive words could be analyzed to determine keywords associated with various states (e.g., various levels of happiness on a spectrum of sad-to-happy, as in FIG. 7). Tags that include information about the characteristics of the content, such as an amount of visual motion in a movie scene, can be analyzed to determine a
corresponding state, such as a level of action or pace in the scene. In some embodiments, the tags themselves may include an explicit state, for example, that was assigned when the tag was created. FIG. 7 shows an example of a movie that is happy throughout its entire timeline, because all of the happiness levels corresponding to the tags in Movie X correspond to the Happy state or the Very Happy state. FIG. 8 shows an example of a movie, i.e., Movie Y, that is sad throughout its entire timeline. Graph 801 shows that all of the tags 803 of Movie Y correspond to happiness levels 805 that correspond to the Sad state or the Very Sad state.
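A minimal sketch of mapping happiness levels to the named states of FIG. 7, and of reading off a movie's beginning and ending states from its tag levels, might look like this. The boundary handling at range edges and the sample tag values are assumptions made for the example.

# Hypothetical sketch: map a 0.0-1.0 happiness level to the named states of FIG. 7.
STATE_RANGES = [
    (0.1, "Extremely Sad"), (0.2, "Very Sad"), (0.3, "Sad"), (0.4, "A Little Sad"),
    (0.6, "Not Happy or Sad"), (0.7, "A Little Happy"), (0.8, "Happy"),
    (0.9, "Very Happy"), (1.0, "Extremely Happy"),
]

def state_for_level(level):
    """Return the named state whose range contains the given happiness level."""
    for upper_bound, name in STATE_RANGES:
        if level <= upper_bound:
            return name
    return "Extremely Happy"

def beginning_and_ending_states(tag_levels):
    """tag_levels: happiness levels ordered by time point along the timeline."""
    return state_for_level(tag_levels[0]), state_for_level(tag_levels[-1])

movie_x = [0.85, 0.75, 0.8, 0.9, 0.85]   # happy throughout, as in FIG. 7
movie_y = [0.25, 0.2, 0.15, 0.25, 0.2]   # sad throughout, as in FIG. 8
print(beginning_and_ending_states(movie_x))  # ('Very Happy', 'Very Happy')
print(beginning_and_ending_states(movie_y))  # ('Sad', 'Very Sad')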
In various embodiments, tags can be associated with different states along other spectrums, such as spectrums corresponding to levels of action (e.g., fast-paced vs. slow-paced), energy (e.g., high energy vs. calm), suspense (e.g., high suspense vs. low suspense), thought-provoking (e.g., intellectual vs. mindless), resolution (e.g., many unresolved issues in plot development, character development, unanswered questions, etc. vs. all issues resolved, questions answered, loose ends tied up, etc.), atmosphere (e.g., light vs. dark, open vs. stuffy, etc.), familiarity (e.g., exotic vs. familiar), intricacy (e.g., complicated plot vs. simple plot, many twists vs. straightforward), age-appropriateness (e.g., suitable for all viewers vs. X-rated), etc. In some embodiments, a state can be associated with a combination of spectrums. For example, the state of "Ready for Bed" may be associated with a combination of slow-paced state (e.g., the action spectrum), calm state (e.g., the energy spectrum), and mindless state (e.g., the thought-provoking spectrum).
Likewise, the state of "Ready for Interesting Conversation" may be associated with a combination of intellectual state (e.g., the thought-provoking spectrum), unanswered questions state (e.g., the resolution spectrum), and many twists state (e.g., intricacy spectrum).
In various embodiments, recommendation systems may determine if a user or users desires a state-inducing recommendation, e.g., the first state is a current state and the second state is the desired state. In this case, the recommendation system can generate a playlist to help the user or users move from the current state to the desired state. FIG. 9 illustrates an example representing a generated playlist 900 that includes two movies, Movie Q and Movie R. FIG. 9 shows a graph 901 showing the timeline of Movie Q including tags 903 with associated levels of happiness, and shows a graph 905 showing the timeline of Movie R including tags 907 with associated levels of happiness. A first state 909 obtained by the recommendation system is shown on graph 901. First state 909 corresponds to the Sad state. For example, the recommendation system may have provided the user with the question "How do you feel?", and the user may have responded by selecting the Sad state. A second state 911 obtained by the recommendation system is shown on graph 905. Second state 911 corresponds to the Very Happy state, which may have been obtained likewise by a similar question to the user, e.g., "How would you like to feel?" Additionally, the recommendation system may have asked the user how soon the user would like to reach the Very Happy state. The user can respond by indicating, for example, a desired amount of time, a desired number of movies, etc. In this example, the user indicated she desired to reach the Very Happy state by the end of two movies.
To generate playlist 900, the recommendation system can analyze the tags of movies in a movie database, for example, the movie database of an online movie provider such as Netflix®, M-Go®, etc. The analysis can include determining a beginning state and an ending state on the happiness spectrum for each movie. For the selection of the first movie in the playlist, the recommendation system can, for example, determine a first subset of movies in the movie database that have a beginning state that satisfies a closeness criteria with first state 909. The closeness criteria can include, for example, that the beginning state of the first movie in the playlist is within a predetermined threshold of first state 909. For example, the closeness criteria can require that all of the tags in the first 15 minutes of the first movie are associated with the first state, e.g., the Sad state in this example. Of course, one skilled in the art would readily understand that closeness can be determined based on other criteria. For example, in other embodiments, the closeness criteria may include an average happiness value of the tags in a particular range of time, a running average of happiness values, a threshold percentage of tags being associated with a particular state, etc.
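The strict all-tags-in-the-window criterion and the average-based alternative mentioned above might be sketched as follows, assuming tags are available as (minute, happiness level) pairs; the tag layout, the 15-minute window, and the 0.15 threshold are illustrative assumptions.

# Hypothetical sketch of two beginning-state criteria for the first movie.
def all_early_tags_in_state(tags, state_name, state_for_level, window_minutes=15):
    """Strict criterion: every tag in the first window maps to the given state."""
    early = [level for minute, level in tags if minute <= window_minutes]
    return bool(early) and all(state_for_level(level) == state_name for level in early)

def early_average_close(tags, target_level, threshold=0.15, window_minutes=15):
    """Alternative criterion: average happiness over the window is near a target."""
    early = [level for minute, level in tags if minute <= window_minutes]
    if not early:
        return False
    return abs(sum(early) / len(early) - target_level) <= threshold

movie_q_tags = [(5, 0.25), (12, 0.22), (30, 0.4), (80, 0.55)]
sad_or_other = lambda level: "Sad" if level <= 0.3 else "Other"  # toy mapping
print(all_early_tags_in_state(movie_q_tags, "Sad", sad_or_other))  # True
print(early_average_close(movie_q_tags, target_level=0.25))        # True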
For the selection of the last movie in the playlist, the recommendation system can determine a second subset of movies in the movie database that have an ending state that satisfies the closeness criteria with second state 911. The closeness criteria can include, for example, that the ending state of the last movie in the playlist is within a predetermined threshold of second state 911. For example, the closeness criteria can require that all of the tags in the last 15 minutes of the last movie are associated with the second state, e.g., the Very Happy state in this example.
The recommendation system can then determine which movies satisfy the closeness criteria for transitions within the playlist. For example, the closeness criteria may require that each transition between the ending state of one movie in the playlist and the beginning state of the next movie in the playlist is within a
predetermined threshold. If the recommendation system is generating a playlist of only two content items, the recommendation system can compare the ending states of the movies in the first subset with the beginning states of the movies in the second subset to determine which pairs of movies satisfy the closeness criteria for transitions, and these pairs of movies can completely satisfy all of the closeness criteria. If more than one pair of movies satisfies all of the closeness criteria, the recommendation system can apply further criteria to narrow down, e.g., further refine, the results to determine which playlist of two movies to recommend. For example, the recommendation system could determine which playlist of two movies best matches the first and second states and has a smooth transition by iteratively applying increasingly strict closeness criteria until a single playlist of two movies remains. If the recommendation system is generating a playlist of more than two content items, the recommendation system can analyze the beginning and ending states of movies in the remaining portion of the movie database, i.e., movies that are not in the first or second subsets, to determine combinations of movies that can be included in the playlist and satisfy the closeness criteria for transitions. In various embodiments, the recommendation system can also analyze the states in other portions of the movies in the movie database. For example, for each movie the recommendation system can determine an average state for every 10-minute increment in the movie's timeline, e.g., fine-grain data. The recommendation system can use this fine-grain data to aid in the selection of movies for the playlist. The analysis of fine-grain data can depend, for example, on whether the user requested a state-inducing playlist or a state-balancing playlist.
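A minimal sketch of computing such fine-grain data as 10-minute averages, together with one simple smoothness score that a state-inducing selection might use (a stand-in for the curve-fit analysis discussed below), could look like this. The tag layout and the squared-difference score are assumptions for the example.

# Hypothetical sketch: fine-grain data as 10-minute averages, plus one simple
# smoothness score for a candidate state-inducing playlist (smaller is smoother).
def ten_minute_averages(tags):
    """tags: (minute, happiness_level) pairs; returns averages per 10-minute bucket."""
    buckets = {}
    for minute, level in tags:
        buckets.setdefault(minute // 10, []).append(level)
    return [sum(levels) / len(levels) for _, levels in sorted(buckets.items())]

def smoothness_score(playlist_tag_lists):
    """Sum of squared jumps between consecutive fine-grain points across the playlist."""
    points = []
    for tags in playlist_tag_lists:
        points.extend(ten_minute_averages(tags))
    return sum((b - a) ** 2 for a, b in zip(points, points[1:]))

movie_q = [(5, 0.25), (15, 0.3), (25, 0.35), (35, 0.45), (45, 0.55)]
movie_r = [(5, 0.6), (15, 0.7), (25, 0.8), (35, 0.85), (45, 0.9)]
print(smoothness_score([movie_q, movie_r]))  # small value -> smooth progression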
Referring again to FIG. 9, playlist 900 is an example of a playlist generated in response to a request for a state-inducing playlist. In this case, the recommendation system can apply additional criteria regarding the fine-grain data. In particular, for a state-inducing playlist, it may be desirable that the level of happiness progresses smoothly from Sad to Very Happy through the timeline of the playlist. In this regard, the selection of movies can be further based on an analysis of the fine-grain data to determine which combination of movies progresses smoothly from Sad to Very Happy. For example, a curve-fit analysis of the happiness levels associated with the tags may be performed to determine which combination of movies results in the smoothest curve. As illustrated in FIG. 9, when placed adjacent to each other as in a playlist, the points on graphs 901 and 905 progress in a relatively smooth and constant way from first state 909 to second state 911. Accordingly, the pair of Movie Q and Movie R may be a good choice for a state-inducing playlist.

FIG. 10 illustrates an example representing a playlist 1000 generated based on a request for a state-balancing playlist. FIG. 10 includes a graph 1001 showing the timeline of a Movie S including tags 1003 with associated levels of happiness, and a graph 1005 showing the timeline of a Movie T including tags 1007 with associated levels of happiness. A first state 1009 and a second state 1011 obtained by the recommendation system are shown on graphs 1001 and 1005. First state 1009 corresponds to the Sad state, and second state 1011 corresponds to the Very Happy state. In this case, for example, the recommendation system may have received an indication that two users desire a playlist of two movies. The recommendation system may have provided the question "What kind of movie does the first user want to watch?" The first user may have responded by selecting the Sad state. The recommendation system may have next provided the question "What kind of movie does the second user want to watch?" The second user may have responded by selecting the Very Happy state. It should be noted that the first and second states do not correspond to starting and ending states, as in a state-inducing
recommendation. Rather, the first and second states in a state-balancing
recommendation can be constant throughout the time the users are playing the playlist, e.g., the first user wants to watch sad movies for the entire playlist. Thus, graphs 1001 and 1005 illustrate that first state 1009 and second state 1011 each span the entire playlist timeline.
The recommendation system can analyze the movie database to match the beginning and ending of the playlist and the transitions within the playlist according to closeness criteria, similarly to the example of FIG. 9. However, in contrast to a state-inducing playlist, the beginning of a state-balancing playlist may match either one of the first or second states, so long as the ending of the playlist matches the other one of the first or second states. In this regard, it should be appreciated that the closeness criteria are the same regardless of whether the first user's desired state is labeled the "first state" and the second user's desired state is labeled the "second state", or vice versa.
Furthermore, fine-grain data may be used differently in state-balancing playlist generation than in generation of state-inducing playlists. For example, in state-balancing it may be desirable that the movies stay at or near one of the desired states for as much time as possible, which may give the first and second users a better experience. In this regard, in addition to determining movie combinations that meet the beginning and ending closeness criteria and meet the transition closeness criteria, the recommendation system can analyze the fine-grain data to determine which combination of movies maximizes the amount of time spent at or near either the first state or the second state. For playlists that include more than two movies, the recommendation system may additionally determine which combination of movies best equalizes the time spent at or near the first and second states. As illustrated in FIG. 10, Movie S begins in the Sad state desired by the first user and remains in the Sad state through most of the movie timeline. Toward the end of Movie S, the state becomes happier until the state is Not Happy or Sad at the end. Movie T begins in a state of Not Happy or Sad, and then jumps into the Very Happy state where it remains for the remainder of the movie. In this way, for example, the first and second users can each watch a movie that remains in their desired state for the majority of the timeline, and the playlist can offer a smooth transition from Sad to Very Happy, which may help prepare both the first and second users to go to the Very Happy state of Movie T from the Sad state of Movie S.
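One way to score candidate state-balancing playlists along these lines is sketched below: the score rewards fine-grain points that sit near either desired state and, optionally, penalizes an uneven split between the two. The tolerance, the weighting, and the sample points are assumptions for the sketch.

# Hypothetical sketch: score a candidate state-balancing playlist by how much of
# its fine-grain timeline sits near either user's desired state (larger is better).
def balance_score(fine_grain_points, first_state, second_state, tolerance=0.1):
    near_first = sum(1 for p in fine_grain_points if abs(p - first_state) <= tolerance)
    near_second = sum(1 for p in fine_grain_points if abs(p - second_state) <= tolerance)
    coverage = (near_first + near_second) / len(fine_grain_points)
    # Optional equalization term: reward playlists that split the time evenly.
    imbalance = abs(near_first - near_second) / len(fine_grain_points)
    return coverage - 0.5 * imbalance

# Movie S stays Sad and then rises to neutral; Movie T jumps to Very Happy and stays.
points = [0.25, 0.25, 0.3, 0.3, 0.45, 0.5, 0.85, 0.85, 0.85, 0.9]
print(balance_score(points, first_state=0.25, second_state=0.9))  # 0.8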
In various embodiments, the recommendation system can receive more than two states. For example, a group of three friends may request a playlist of three movies, and the friends desire three different states. As one skilled in the art would readily understand, the same principles may be applied to generate state-balancing playlists for more than two states.
In some embodiments, the same closeness criteria may be used for the beginning, each transition, and the end of the playlist. In some embodiments, different closeness criteria can be used. For example, the recommendation system may use stricter closeness criteria for the beginning and ending of the playlist than the closeness criteria used for transitions. In some embodiments, each closeness criteria may be part of a larger, combined closeness criteria. For example, the closeness criteria may be that an average of differences of closeness (e.g., an average of the difference between the first state and the beginning state of the first content item, the difference between the ending/beginning states at each transition, and the difference between the second state and the ending state of the last content item) does not exceed a predetermined threshold. This case may allow, for example, some transitions within the playlist to be less close if the beginning and ending states of the playlist are very close to the first and second states, respectively.
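A minimal sketch of such a combined criteria, in which the average of the boundary and transition differences is compared against a single threshold, might be as follows; the field names, the sample playlist, and the threshold are illustrative only.

# Hypothetical sketch of the combined closeness criteria: the average of the
# boundary and transition differences must not exceed a predetermined threshold.
def combined_closeness_ok(first_state, second_state, playlist, threshold=0.15):
    """playlist: ordered items with beginning_state / ending_state values."""
    diffs = [abs(first_state - playlist[0]["beginning_state"])]
    for current, following in zip(playlist, playlist[1:]):
        diffs.append(abs(current["ending_state"] - following["beginning_state"]))
    diffs.append(abs(second_state - playlist[-1]["ending_state"]))
    return sum(diffs) / len(diffs) <= threshold

playlist = [
    {"beginning_state": 0.25, "ending_state": 0.5},
    {"beginning_state": 0.7, "ending_state": 0.9},   # a looser transition...
]
print(combined_closeness_ok(0.25, 0.9, playlist))    # ...still passes on average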
In various embodiments, the recommendation system can apply further constraints to the selection of content items for the playlist. For example, the recommendation system may provide the user with an interface to narrow down the available choices to content items of a particular genre, artist, language, series, etc. In various embodiments, the user may place constraints on the length of the playlist, for example, by specifying a range for total playlist time, by specifying a number of content items in the playlist, etc. For example, the recommendation request may specify that the user is in an excited state and wants to calm down with a playlist of four episodes from a particular TV series.
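Such constraints might be applied as a simple filter over candidate playlists, as sketched below; the genre, item-count, and running-time fields are assumptions made for the example.

# Hypothetical sketch of additional playlist constraints: genre, item count,
# and total running time (field names are illustrative only).
def satisfies_constraints(playlist, genre=None, max_items=None, total_minutes=None):
    if genre is not None and any(item["genre"] != genre for item in playlist):
        return False
    if max_items is not None and len(playlist) > max_items:
        return False
    if total_minutes is not None:
        low, high = total_minutes
        runtime = sum(item["minutes"] for item in playlist)
        if not (low <= runtime <= high):
            return False
    return True

episodes = [{"genre": "tv_series", "minutes": 45} for _ in range(4)]
print(satisfies_constraints(episodes, genre="tv_series", max_items=4,
                            total_minutes=(150, 200)))  # True: 180 minutes, 4 items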
It should be appreciated by those skilled in the art that the methods described above may be implemented, for example, by a computing system such as a general purpose computer through computer-executable instructions (e.g., software, firmware, etc.) stored on a computer-readable medium (e.g., storage disk, memory, etc.) and executed by a computer processor. Referring to FIG. 2, for example, software implementing one or more methods shown in the flowcharts could be stored in storage device 212 and executed by controller 214. It should be understood that various elements shown in the figures may be implemented in various forms of hardware, software or combinations thereof. That is, various elements may be implemented in a combination of hardware and software on one or more
appropriately programmed general-purpose devices, which may include a processor, memory and input/output interfaces.
It should also be appreciated that although various examples of various embodiments have been shown and described in detail herein, those skilled in the art can readily devise other varied embodiments that still remain within the scope of this disclosure. All examples and conditional language recited herein are intended for instructional purposes to aid the reader in understanding the principles of the disclosure and the concepts contributed by the inventor to furthering the art, and are to be construed as being without limitation to such specifically recited examples and conditions. Moreover, all statements herein reciting principles, aspects, and embodiments of the disclosure, as well as specific examples thereof, are intended to encompass both structural and functional equivalents thereof. Additionally, it is intended that such equivalents include both currently known equivalents as well as equivalents developed in the future, i.e., any elements developed that perform the same function, regardless of structure. Thus, for example, it will be appreciated by those skilled in the art that the block diagrams presented herein represent conceptual views of illustrative circuitry embodying the principles of the disclosure. Similarly, it will be appreciated that any flow charts, flow diagrams, state transition diagrams, pseudocode, and the like represent various processes which may be substantially represented in computer readable media and so executed by a computer or processor, whether or not such computer or processor is explicitly shown.
The functions of the various elements shown in the figures may be provided through the use of dedicated hardware as well as hardware capable of executing software in association with appropriate software. When provided by a processor, the functions may be provided by a single dedicated processor, by a single shared processor, or by a plurality of individual processors, some of which may be shared. Moreover, explicit use of the term "processor" or "controller" should not be construed to refer exclusively to hardware capable of executing software, and may implicitly include, without limitation, digital signal processor ("DSP") hardware, read only memory ("ROM") for storing software, random access memory ("RAM"), and nonvolatile storage.
Other hardware, conventional and/or custom, may also be included. Similarly, any switches shown in the figures are conceptual only. Their function may be carried out through the operation of program logic, through dedicated logic, through the interaction of program control and dedicated logic, or even manually, the particular technique being selectable by the implementer as more specifically understood from the context.
In the claims hereof, any element expressed as a means for performing a specified function is intended to encompass any way of performing that function including, for example, a combination of circuit elements that performs that function, software in any form, including, therefore, firmware, microcode or the like, combined with appropriate circuitry for executing that software to perform the function, etc. The disclosure as defined by such claims resides in the fact that the functionalities provided by the various recited means are combined and brought together in the manner which the claims call for. It is thus regarded that any means that can provide those functionalities are equivalent to those shown herein.

Claims

1. A recommendation system (200) comprising:
a processor (214); and
a memory (220) storing instructions configured to cause the processor to:
obtain a first state;
obtain a second state; and
generate a playlist of two or more content items, including a first content item in the playlist and a last content item in the playlist, wherein generating the playlist includes determining that the first state and a beginning state of the first content item satisfy a closeness criteria, determining that the second state and an ending state of the last content item satisfy the closeness criteria, and determining that each transition from an ending state to a beginning state in the playlist satisfies the closeness criteria.
2. The system of claim 1, wherein generating the playlist further includes obtaining additional states of the first and second content items, and determining that the additional states satisfy additional criteria.
3. The system of claim 2, wherein the additional criteria include smoothness criteria.
4. The system of claim 2, wherein the determining that the additional states satisfy additional criteria includes determining that the additional states satisfy an additional closeness criteria to at least the first state or the second state.
5. The system of claim 1, wherein the closeness criteria includes a predetermined threshold between at least the first state and the beginning state of the first content item, the second state and the ending state of the last content item, or at each transition.
6. The system of claim 1, wherein the first state includes a starting state and the second state includes an ending state.
7. The system of claim 1, wherein the first state includes a first desired state and the second state includes a second desired state.
8. A non-transitory computer-readable medium (212) storing computer-executable instructions executable to perform a method for providing a recommendation of a content item of a plurality of content items in a recommendation system, the method comprising:
obtaining a first state;
obtaining a second state;
generating a playlist of two or more content items, including a first content item in the playlist and a last content item in the playlist, wherein generating the playlist includes determining that the first state and a beginning state of the first content item satisfy a closeness criteria, determining that the second state and an ending state of the last content item satisfy the closeness criteria, and determining that each transition from an ending state to a beginning state in the playlist satisfies the closeness criteria.
9. The non-transitory computer-readable medium of claim 8, wherein generating the playlist further includes obtaining additional states of the first and second content items, and determining that the additional states satisfy additional criteria.
10. The non-transitory computer-readable medium of claim 9, wherein the additional criteria include smoothness criteria.
11. The non-transitory computer-readable medium of claim 9, wherein the determining that the additional states satisfy additional criteria includes determining that the additional states satisfy an additional closeness criteria to at least the first state or the second state.
12. The non-transitory computer-readable medium of claim 8, wherein the closeness criteria includes a predetermined threshold between at least the first state and the beginning state of the first content item, the second state and the ending state of the last content item, or at each transition.
13. The non-transitory computer-readable medium of claim 8, wherein the first state includes a starting state and the second state includes an ending state.
14. The non-transitory computer-readable medium of claim 8, wherein the first state includes a first desired state and the second state includes a second desired state.
15. A method for generating a playlist, the method comprising:
obtaining a first state;
obtaining a second state;
generating a playlist of two or more content items, including a first content item in the playlist and a last content item in the playlist, wherein generating the playlist includes determining that the first state and a beginning state of the first content item satisfy a closeness criteria, determining that the second state and an ending state of the last content item satisfy the closeness criteria, and determining that each transition from an ending state to a beginning state in the playlist satisfies the closeness criteria.
16. The method of claim 15, wherein generating the playlist further includes obtaining additional states of the first and second content items, and determining that the additional states satisfy additional criteria.
17. The method of claim 16, wherein the additional criteria include smoothness criteria.
18. The method of claim 16, wherein the determining that the additional states satisfy additional criteria includes determining that the additional states satisfy an additional closeness criteria to at least the first state or the second state.
19. The method of claim 15, wherein the closeness criteria includes a predetermined threshold between at least the first state and the beginning state of the first content item, the second state and the ending state of the last content item, or at each transition.
20. The method of claim 15, wherein the first state includes a starting state and the second state includes an ending state.
21. The method of claim 15, wherein the first state includes a first desired state and the second state includes a second desired state.
PCT/US2015/052888 2014-09-30 2015-09-29 Methods and systems for multi-state recommendations WO2016054006A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201462057896P 2014-09-30 2014-09-30
US62/057,896 2014-09-30

Publications (1)

Publication Number Publication Date
WO2016054006A1 true WO2016054006A1 (en) 2016-04-07

Family

ID=54266682

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2015/052888 WO2016054006A1 (en) 2014-09-30 2015-09-29 Methods and systems for multi-state recommendations

Country Status (1)

Country Link
WO (1) WO2016054006A1 (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2010027509A1 (en) * 2008-09-05 2010-03-11 Sourcetone, Llc Music classification system and method
US20110239137A1 (en) * 2004-12-30 2011-09-29 Aol Inc. Mood-Based Organization and Display of Instant Messenger Buddy Lists
WO2012019637A1 (en) * 2010-08-09 2012-02-16 Jadhav, Shubhangi Mahadeo Visual music playlist creation and visual music track exploration
US20140172431A1 (en) * 2012-12-13 2014-06-19 National Chiao Tung University Music playing system and music playing method based on speech emotion recognition
US20140282237A1 (en) * 2013-03-14 2014-09-18 Aperture Investments Llc Methods and apparatuses for assigning moods to content and searching for moods to select content


Similar Documents

Publication Publication Date Title
US11521608B2 (en) Methods and systems for correcting, based on speech, input generated using automatic speech recognition
US11456019B2 (en) Systems and methods for alerting users to differences between different media versions of a story
US20140150023A1 (en) Contextual user interface
US9043702B2 (en) Methods and systems for creating a shaped playlist
US20140282061A1 (en) Methods and systems for customizing user input interfaces
US11600304B2 (en) Systems and methods for determining playback points in media assets
JP7511720B2 Systems and methods for dynamically enabling and disabling biometric devices
US12008056B2 (en) Systems and methods for identifying a meaning of an ambiguous term in a natural language query
US20150177953A1 (en) User interface displaying scene dependent attributes
US20150301693A1 (en) Methods, systems, and media for presenting related content
TW201436543A (en) Method and system for content discovery
US20150012946A1 (en) Methods and systems for presenting tag lines associated with media assets
US10838538B2 (en) Method and apparatus for gesture-based searching
US20150281788A1 (en) Function execution based on data entry
EP3119094A1 (en) Methods and systems for clustering-based recommendations
US9782681B2 (en) Methods and systems for controlling media guidance application operations during video gaming applications
WO2016054006A1 (en) Methods and systems for multi-state recommendations
US20150033269A1 (en) System and method for displaying availability of a media asset
US20150007212A1 (en) Methods and systems for generating musical insignias for media providers
US20150339578A1 (en) A method and system for providing recommendations
WO2015191921A1 (en) Method and system for privacy-preserving recommendations
US20150301699A1 (en) Methods, systems, and media for media guidance
WO2015099745A1 (en) Multiple profile user interface
WO2015191919A1 (en) Method and system for privacy-preserving recommendations

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 15777835

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 15777835

Country of ref document: EP

Kind code of ref document: A1