WO2018208192A1 - System and methods for providing a recommendation relating to content items - Google Patents

System and methods for providing a recommendation relating to content items

Info

Publication number
WO2018208192A1
Authority
WO
WIPO (PCT)
Prior art keywords
communication device
signals
users
recommendation
derived
Prior art date
Application number
PCT/SE2017/050458
Other languages
English (en)
Inventor
Rafia Inam
Erlendur Karlsson
Lackis ELEFTHERIADIS
Original Assignee
Telefonaktiebolaget Lm Ericsson (Publ)
Priority date
Filing date
Publication date
Application filed by Telefonaktiebolaget Lm Ericsson (Publ) filed Critical Telefonaktiebolaget Lm Ericsson (Publ)
Priority to PCT/SE2017/050458 priority Critical patent/WO2018208192A1/fr
Publication of WO2018208192A1 publication Critical patent/WO2018208192A1/fr


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/40Information retrieval; Database structures therefor; File system structures therefor of multimedia data, e.g. slideshows comprising image and additional audio data
    • G06F16/48Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/489Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using time information
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/40Information retrieval; Database structures therefor; File system structures therefor of multimedia data, e.g. slideshows comprising image and additional audio data
    • G06F16/48Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/487Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using geographical or spatial information, e.g. location
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F16/73Querying
    • G06F16/735Filtering based on additional data, e.g. user or group profiles
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/0304Detection arrangements using opto-electronic means
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/16Sound input; Sound output
    • G06F3/167Audio in a user interface, e.g. using voice commands for navigating, audio feedback
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/451Execution arrangements for user interfaces
    • G06F9/453Help systems
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/443OS processes, e.g. booting an STB, implementing a Java virtual machine in an STB or power management in an STB
    • H04N21/4436Power management, e.g. shutting down unused components of the receiver
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/45Management operations performed by the client for facilitating the reception of or the interaction with the content or administrating data related to the end-user or to the client device itself, e.g. learning user preferences for recommending movies, resolving scheduling conflicts
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2203/00Indexing scheme relating to G06F3/00 - G06F3/048
    • G06F2203/01Indexing scheme relating to G06F3/01
    • G06F2203/011Emotion or mood input determined on the basis of sensed human body parameters such as pulse, heart rate or beat, temperature of skin, facial expressions, iris, voice pitch, brain activity patterns
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/41Structure of client; Structure of client peripherals
    • H04N21/422Input-only peripherals, i.e. input devices connected to specially adapted client devices, e.g. global positioning system [GPS]
    • H04N21/4223Cameras

Definitions

  • the present disclosure relates generally to a first communication device and methods performed thereby for handling a recommendation to a second communication device.
  • the present disclosure also relates generally to the second communication device, and methods performed thereby for handling the recommendation from the first communication device.
  • the present disclosure additionally relates generally to a fourth communication device, and methods performed thereby for handling a characterization of a context of a third communication device.
  • the present disclosure further relates generally to a computer program product, comprising instructions to carry out the actions described herein, as performed by the first communication device, the second communication device, and/or the fourth communication device.
  • the computer program product may be stored on a computer-readable storage medium.
  • Wireless devices within a telecommunications network may be e.g., stations (STAs), User Equipments (UEs), mobile terminals, wireless terminals, terminals, and/or Mobile Stations (MS).
  • Wireless devices are enabled to communicate wirelessly in a cellular communications network or wireless communication network, sometimes also referred to as a cellular radio system, cellular system, or cellular network.
  • the communication may be performed e.g. between two wireless devices, between a wireless device and a regular telephone, and/or between a wireless device and a server via a Radio Access Network (RAN) , and possibly one or more core networks, comprised within the
  • RAN Radio Access Network
  • Wireless devices may further be referred to as mobile telephones, cellular telephones, laptops, or tablets with wireless capability, just to mention some further examples.
  • the wireless devices in the present context may be, for example, portable, pocket-storable, hand-held, computer-comprised, or vehicle-mounted mobile devices, enabled to communicate voice and/or data, via the RAN, with another entity, such as another terminal or a server.
  • the telecommunications network covers a geographical area which may be divided into cell areas, each cell area being served by a network node or Transmission Point (TP), for example, an access node such as a Base Station (BS), e.g. a Radio Base Station (RBS), which sometimes may be referred to as e.g., evolved Node B ("eNB"), "eNodeB", “NodeB”, “B node”, or BTS (Base Transceiver Station), depending on the technology and terminology used.
  • BS Base Station
  • RBS Radio Base Station
  • eNB evolved Node B
  • eNodeB evolved Node B
  • NodeB NodeB
    • BTS Base Transceiver Station
  • the base stations may be of different classes such as e.g. Wide Area Base Stations, Medium Range Base Stations, Local Area Base Stations and Home Base Stations, based on transmission power and thereby also cell size.
  • a cell is the geographical area where radio coverage is provided by the network node or base station.
  • the telecommunications network may also be a non-cellular system, comprising network nodes which may serve receiving nodes, such as wireless devices, with serving beams.
  • the expression Downlink (DL) is used for the transmission path from the base station to the wireless device.
  • the expression Uplink (UL) is used for the transmission path in the opposite direction i.e., from the wireless device to the base station.
  • base stations which may be referred to as eNodeBs or even eNBs, may be directly connected to one or more core networks.
  • 3GPP LTE radio access standard has been written in order to support high bitrates and low latency both for uplink and downlink traffic. All data transmission in LTE is controlled by the radio base station.
  • a Recommender System may be understood as a system that may recommend items of potential interest to a user within a particular area or scope.
  • the Recommender systems of today may typically produce a list of recommendations in one of two ways: through collaborative and content-based filtering.
  • Collaborative filtering approaches may build a model from a user's past behavior -such as items previously purchased or selected and/or numerical ratings given to those items-, as well as similar decisions made by other users with similar taste and preferences to those of the user. This model may then be used to predict items, or ratings for items, that the user may have an interest in.
  • Content-based filtering approaches may be understood to utilize a series of discrete characteristics of an item in order to recommend additional items with similar properties. These approaches may often be combined in so called Hybrid Recommender Systems. These existing Recommender Systems use a relatively static sample set of preferences, likes and dislikes of a user, and may result in recommendations that do not adapt to the contextual setting of the user.
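  • By way of illustration only (not part of the original disclosure), the following minimal Python sketch shows the two classical approaches just mentioned, using invented ratings and item characteristics: user-based collaborative filtering predicts a rating from similar users, and content-based filtering scores items by their similarity to items the user previously liked.

```python
# Minimal sketch of collaborative and content-based filtering.
# All data below is hypothetical and only illustrates the two approaches.
import numpy as np

def cosine(a, b):
    """Cosine similarity between two vectors (0 if either is all-zero)."""
    na, nb = np.linalg.norm(a), np.linalg.norm(b)
    return float(a @ b / (na * nb)) if na and nb else 0.0

# Collaborative filtering: user-item rating matrix (0 = not rated).
ratings = np.array([[5, 4, 0, 1],
                    [4, 5, 1, 0],
                    [1, 0, 5, 4]], dtype=float)

def predict_rating(user, item):
    """Predict a rating as a similarity-weighted average over other users."""
    sims, vals = [], []
    for other in range(ratings.shape[0]):
        if other != user and ratings[other, item] > 0:
            sims.append(cosine(ratings[user], ratings[other]))
            vals.append(ratings[other, item])
    return np.average(vals, weights=sims) if sims else 0.0

# Content-based filtering: items described by discrete characteristics.
item_features = np.array([[1, 0, 1],   # e.g. [comedy, action, family]
                          [1, 0, 1],
                          [0, 1, 0],
                          [0, 1, 0]], dtype=float)

def content_score(user, item):
    """Score an item by its similarity to items the user rated highly."""
    liked = [i for i in range(ratings.shape[1]) if ratings[user, i] >= 4]
    profile = item_features[liked].mean(axis=0) if liked else np.zeros(3)
    return cosine(profile, item_features[item])

print(predict_rating(0, 2), content_score(0, 2))
```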
  • the object is achieved by a method performed by a first communication device.
  • the method is for handling a recommendation to a second communication device.
  • the first communication device and the second communication device operate in a telecommunications network.
  • the first communication device determines a recommendation for an item of content to be provided by a third communication device operating in the telecommunications network.
  • the determination of the recommendation is based on signals collected by a receiving device located in a space where one or more users of the third communication device are located, during a time period.
  • the signals are at least one of: audio signals and video signals.
  • the first communication device also initiates sending a first indication of the determined recommendation to the second communication device.
  • the object is achieved by a method performed by a fourth communication device.
  • the first communication device and the third communication device may operate in the telecommunications network.
  • the fourth communication device obtains signals collected by the receiving device located in the space where the one or more users of the third communication device are located, during the time period.
  • the signals are at least one of: audio signals and video signals.
  • the fourth communication device determines the characterization of the context of the third communication device by determining one or more factors.
  • the one or more factors are obtained from an analysis of the obtained signals.
  • the one or more factors comprise at least one of: a) one or more characteristics of the one or more users from: number, gender, age, and identity of the one or more users; b) a first mood derived from a tone of one or more voices detected in the audio signals; c) a second mood derived from a first semantic analysis of a language used by one or more voices detected in the audio signals; d) a topic of discussion derived from a second semantic analysis of a language used by one or more voices detected in the audio signals, and e) a third mood derived from at least one of: a body movement and a gesture detected in each of the one or more users.
  • the fourth communication device then initiates sending a second indication of the determined characterization of the context to the first communication device operating in the telecommunications network or another communication device operating in the telecommunications network.
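  • Purely as an illustrative sketch (the field names below are assumptions, not terms defined by the disclosure), the one or more factors a) to e) above could be carried in a record such as the following, which the fourth communication device might serialize into the second indication:

```python
# Hypothetical container for the context characterization (factors a-e above).
# Field names are illustrative assumptions, not defined by the disclosure.
from dataclasses import dataclass, field, asdict
from typing import List, Optional
import json, time

@dataclass
class ContextCharacterization:
    timestamp: float                      # when the observed time period ended
    num_users: Optional[int] = None       # a) number of users
    genders: List[str] = field(default_factory=list)    # a) gender mix
    ages: List[int] = field(default_factory=list)       # a) age mix
    identities: List[str] = field(default_factory=list) # a) enrolled users present
    tone_mood: Optional[str] = None       # b) first mood, from tone of voice
    language_mood: Optional[str] = None   # c) second mood, from semantic analysis
    topic: Optional[str] = None           # d) topic of discussion
    motion_mood: Optional[str] = None     # e) third mood, from movement/gesture

    def to_indication(self) -> str:
        """Serialize as the payload of the 'second indication' message."""
        return json.dumps(asdict(self))

example = ContextCharacterization(
    timestamp=time.time(), num_users=3, genders=["F", "M", "M"],
    ages=[34, 36, 7], tone_mood="cheerful", topic="holidays")
print(example.to_indication())
```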
  • the object is achieved by a method performed by the second communication device.
  • the method is for handling the recommendation from the first communication device.
  • the first communication device and the second communication device operate in the telecommunications network.
  • the second communication device receives the first indication for the recommendation for the item of content to be provided by the third communication device operating in the telecommunications network.
  • the recommendation is based on the signals collected by the receiving device located in a space where one or more users of the third communication device are located, during the time period.
  • the signals are at least one of: audio signals and video signals.
  • the second communication device also initiates providing, to the one or more users, a third indication of the received recommendation on an interface of the second communication device.
  • the object is achieved by the first communication device for handling the recommendation to the second communication device.
  • the first communication device and the second communication device are configured to operate in the telecommunications network.
  • the first communication device is further configured to determine the recommendation for the item of content to be provided by the third communication device configured to operate in the telecommunications network.
  • the determination of the recommendation is configured to be based on signals configured to be collected by the receiving device located in the space where the one or more users of the third communication device are located during the time period.
  • the signals are configured to be at least one of: audio signals and video signals.
  • the first communication device is also configured to initiate sending the first indication of the recommendation configured to be determined to the second communication device.
  • the object is achieved by the fourth communication device for handling the characterization of the context of a third communication device configured to have the one or more users.
  • the fourth communication device and the third communication device are configured to operate in the telecommunications network.
  • the fourth communication device is further configured to obtain signals configured to be collected by the receiving device located in the space where the one or more users of the third communication device are located, during the time period.
  • the signals are configured to be at least one of: audio signals and video signals.
  • the fourth communication device is also configured to determine the characterization of the context of the third communication device by determining the one or more factors obtained from an analysis of the signals configured to be obtained.
  • the fourth communication device is also configured to initiate sending the second indication of the characterization of the context configured to be determined to the first communication device configured to operate in the telecommunications network, or to another communication device configured to operate in the telecommunications network.
  • the object is achieved by the second communication device for handling the recommendation from the first communication device.
  • the first communication device and the second communication device are configured to operate in the telecommunications network.
  • the second communication device is further configured to receive the first indication for the recommendation for the item of content to be provided by the third communication device configured to operate in the telecommunications network.
  • the recommendation is based on signals collected by the receiving device configured to be located in the space where the one or more users of the third communication device are located, during the time period, the signals being configured to be at least one of: audio signals and video signals.
  • the second communication device is also configured to initiate providing, to the one or more users, the third indication of the recommendation configured to be received on the interface of the second communication device.
  • the object is achieved by a computer program.
  • the computer program comprises instructions which, when executed on at least one processor, cause the at least one processor to carry out the method performed by the first communication device.
  • the object is achieved by computer-readable storage medium.
  • the computer-readable storage medium has stored thereon a computer program comprising instructions which, when executed on at least one processor, cause the at least one processor to carry out the method performed by the first communication device.
  • the object is achieved by a computer program.
  • the computer program comprises instructions which, when executed on at least one processor, cause the at least one processor to carry out the method performed by the fourth communication device.
  • the object is achieved by computer-readable storage medium.
  • the computer-readable storage medium has stored thereon a computer program comprising instructions which, when executed on at least one processor, cause the at least one processor to carry out the method performed by the fourth communication device.
  • the object is achieved by a computer program.
  • the computer program comprises instructions which, when executed on at least one processor, cause the at least one processor to carry out the method performed by the second communication device.
  • the object is achieved by computer-readable storage medium.
  • the computer-readable storage medium has stored thereon a computer program comprising instructions which, when executed on at least one processor, cause the at least one processor to carry out the method performed by the second communication device.
  • By determining the recommendation for the item of content to be provided by the third communication device based on the signals collected in the space where the one or more users of the third communication device are located during the time period, the first communication device enables the second communication device to provide a recommendation to the one or more users which is optimally adapted to their current context and mood, and thus to provide recommendations that are much more relevant than those based purely on collaborative and content-based filtering methods. The process of selecting an item of content by the one or more users is therefore shortened, taking less capacity and processing resources from the network, reducing the power consumption of the devices involved, e.g., battery consumption in wireless devices, and overall enhancing the satisfaction of the one or more users.
  • the determination of the recommendation by the first communication device may be enabled by the fourth communication device determining the characterization of the context of the third communication device.
  • Figure 1 is a schematic diagram illustrating embodiments of a telecommunications network, according to embodiments herein.
  • Figure 2 is a flowchart depicting embodiments of a method in a first communication device, according to embodiments herein.
  • Figure 3 is a flowchart depicting embodiments of a method in a fourth communication device, according to embodiments herein.
  • Figure 4 is a flowchart depicting embodiments of a method in a second communication device, according to embodiments herein.
  • Figure 5 is a schematic diagram illustrating an example of the different components of the telecommunications network and their interactions, according to embodiments herein.
  • Figure 6 is a schematic diagram illustrating another example of the different components of the telecommunications network and their interactions, according to embodiments herein.
  • Figure 7 is a schematic diagram illustrating another example of the different components of the telecommunications network and their interactions, according to embodiments herein.
  • Figure 8 is a schematic diagram illustrating an example of a method performed by components of a telecommunications network, according to embodiments herein.
  • Figure 9 is a schematic diagram illustrating another example of a method performed by components of a telecommunications network, according to embodiments herein.
  • Figure 10 is a schematic block diagram illustrating embodiments of a first communication device, according to embodiments herein.
  • Figure 11 is a schematic block diagram illustrating embodiments of a fourth communication device, according to embodiments herein.
  • Figure 12 is a schematic block diagram illustrating embodiments of a second communication device, according to embodiments herein.
  • the existing Recommender Systems use a relatively static sample set of preferences, likes and dislikes of a user, and do not adapt to the contextual setting of the user, which may be characterized by time, type of place, the company the user is in, the mood of the user and the mood of the company the user is in, etc. Attempts have been made to incorporate contextual information into the recommendation function.
  • Contextual factors that may influence the preference of a user at any given place and time may be factors such as: the type of group the user may be in, such as rowdy men, young women, parents with their young children, parents with their teenage children, a romantic couple, etc., the mood and topic of a discussion that the group the user is in may be having, or the mood and tone of the voices of the people in the group.
  • ASA Audio Scene Analysis
  • VSA Video scene Analysis
  • Embodiments herein may be applicable, but not limited to, areas such as films, TV series, sports programs, music, news, books, online computer games and even placement of advertisements.
  • FIG. 1 depicts a non-limiting example of a telecommunications network 100, sometimes also referred to as a cellular radio system, cellular network or wireless communications system, in which embodiments herein may be implemented.
  • the telecommunications network 100 may for example be a network such as a Long-Term Evolution (LTE), e.g.
  • LTE Long-Term Evolution
  • LTE Frequency Division Duplex (FDD), LTE Time Division Duplex (TDD), LTE Half-Duplex Frequency Division Duplex (HD-FDD), LTE operating in an unlicensed band, WCDMA, Universal Terrestrial Radio Access (UTRA) TDD, GSM network, GERAN network, Ultra-Mobile Broadband (UMB), EDGE network, network comprising of any combination of Radio Access Technologies (RATs) such as e.g. Multi- Standard Radio (MSR) base stations, multi-RAT base stations etc., any 3rd Generation Partnership Project (3GPP) cellular network, Wireless Local Area Network/s (WLAN) or WiFi network/s, Worldwide Interoperability for Microwave Access (WMax), 5G system or any cellular network or system.
  • RATs Radio Access Technologies
  • the telecommunications network 100 may support Information-Centric Networking (ICN).
  • ICN Information-Centric Networking
  • connectivity between the different entities in the telecommunication network 100 of Figure 1 may be enabled as an over-the-top (OTT) connection, using an access network, one or more core networks, and/or one or more intermediate networks, which are not depicted in the Figure, to simplify it.
  • the OTT connection may be transparent in the sense that the participating communication devices through which the OTT connection may pass may be unaware of routing of uplink and downlink communications.
  • the telecommunications network 100 comprises a plurality of communication devices, whereof a first communication device 101 , a second communication device 102, a third communication device 103 and a fourth communication device 104 are depicted in Figure 1.
  • the first communication device 101 may be understood as a first computer system, which may be implemented as a standalone server in e.g., a host computer in the cloud, as depicted in the non-limiting example of Figure 1.
  • the first communication device 101 may in some examples be a distributed node or distributed server, with some of its functions being implemented locally, e.g., by a client manager on a TV set-top-box, and some of its functions implemented in the cloud, by e.g., a server manager.
  • the first communication device 101 may also be implemented as processing resources in a server farm.
  • the first communication device 101 may be under the ownership or control of a service provider, or may be operated by the service provider or on behalf of the service provider.
  • the second communication device 102 may be understood as a second computer system, e.g., a client computer, in the telecommunications network 100.
  • the second communication device 102 may be for example, as depicted in the example of Figure 1 , a wireless device as described below, e.g., a UE. In other examples which are not depicted in Figure 1 , the second communication device 102 may be locally located, e.g., on a TV set-top-box.
  • the second communication device 102 may also be a network node in the telecommunications network 100, as described below, or a computer, e.g., at a content provider of e.g., media content, in the telecommunications network 100.
  • the second communication device 102 may even be the same device as the third communication device 103.
  • the second communication device may have an interface 110, such as e.g., a touch-screen, a button, a remote control, etc...
  • the third communication device 103 may also be understood as a third computer system, which may be implemented as a standalone client computer in e.g., a TV set-top- box in the telecommunications network 100, as depicted in the non-limiting example of Figure 1 , a smart TV, or a wireless device as described below, such as a tablet or a smartphone.
  • the third communication device 103 may in some examples be a distributed node or distributed server, with some of its functions being implemented locally, e.g., by a client manager on a TV set-top-box, and some of its functions implemented in the cloud, by e.g., a server manager on e.g., a media server.
  • the third communication device 103 may be located in or co-located with a device to reproduce media for the one or more users, or reproducing device 120, such as a TV or a media player.
  • the fourth communication device 104 may be understood as a fourth computer system, which may be implemented as a standalone client computer in e.g., a TV set-top- box in the telecommunications network 100, as depicted in the non-limiting example of Figure 1 , a smart TV, or a wireless device as described below, such as a tablet or a smartphone.
  • the fourth communication device 104 may in some examples be a distributed node or distributed server, with some of its functions being implemented locally, e.g., by a client manager on a TV set-top-box, and some of its functions implemented in the cloud, by e.g., a server manager on e.g., a media server.
  • the fourth communication device 104 is co-located with the third communication device 103, on a TV set-top-box, as a client.
  • the fourth communication device 104 may also be located in or co-located with the reproducing device 120.
  • the fourth communication device 104 may be understood as a communication device managing or controlling audio or video signals, e.g., an Audio Scene Analyzer (ASA) and/or a Video Scene Analyzer (VSA).
  • ASA Audio Scene Analyzer
  • VSA Video Scene Analyzer
  • some of the first communication device 101 , the second communication device 102, the third communication device 103 and the fourth communication device 104 may be co-located or be the same device.
  • any of the second communication device 102, the third communication device 103 and the fourth communication device 104 may be the same communication device, or may be co-located.
  • In some examples, such as the one depicted in Figure 1, the third communication device 103 and the fourth communication device 104 are co-located. Not all possible combinations are depicted in Figure 1, to simplify the Figure. Additional non-limiting examples of the telecommunications network 100 are presented below, in Figures 5, 6 and 7.
  • Any of network nodes comprised in the telecommunications network 100 may be a radio network node, that is, a transmission point such as a radio base station, for example an eNB, an eNodeB, or an Home Node B, an Home eNode B or any other network node capable to serve a wireless device, such as a user equipment or a machine type communication device in the telecommunications network 100.
  • a network node may also be a Remote Radio Unit (RRU), a Remote Radio Head (RRH), a multi-standard BS (MSR BS), or a core network node, e.g., a Mobility Management Entity (MME), Self-Organizing Network (SON) node, a coordinating node, positioning node, Minimization of Driving Test (MDT) node, etc...
  • RRU Remote Radio Unit
  • RRH Remote Radio Head
  • MSR BS multi-standard BS
  • MME Mobility Management Entity
  • SON Self-Organizing Network
  • MDT Minimization of Driving Test
  • the telecommunications network 100 covers a geographical area which, in some embodiments, may be divided into cell areas, wherein each cell area may be served by a radio network node, although one radio network node may serve one or several cells.
  • Any of the radio network nodes that may be comprised in the telecommunications network 100 may be of different classes, such as, e.g., macro eNodeB, home eNodeB or pico base station, based on transmission power and thereby also cell size.
  • any of the radio network nodes comprised in the telecommunications network 100 may serve receiving nodes with serving beams.
  • Any of the radio network nodes that may be comprised in the telecommunications network 100 may support one or several communication technologies, and its name may depend on the technology and terminology used.
  • any of the radio network nodes that may be comprised in the telecommunications network 100 may be directly connected to one or more core networks.
  • a plurality of wireless devices may be located in the wireless communication network 100. Any of the wireless devices comprised in the telecommunications network 100 may be a wireless communication device such as a UE, which may also be known as e.g., mobile terminal, wireless terminal and/or mobile station, a mobile telephone, cellular telephone, or laptop with wireless capability, just to mention some further examples.
  • Any of the wireless devices comprised in the telecommunications network 100 may be, for example, portable, pocket-storable, hand-held, computer-comprised, or a vehicle-mounted mobile device, enabled to communicate voice and/or data with another entity, such as another terminal or a server.
  • M2M Machine-to-Machine
  • Any of the wireless devices comprised in the telecommunications network 100 may also be, e.g., a Machine-to-Machine (M2M) device, or a device equipped with a wireless interface, such as a printer or a file storage device, a modem, or any other radio network unit capable of communicating over a radio link.
  • Any of the wireless devices comprised in the telecommunications network 100 is enabled to communicate wirelessly in the telecommunications network 100.
  • the communication may be performed e.g., via a RAN and possibly one or more core networks, comprised within the telecommunications network 100.
  • the telecommunications network also comprises a receiving device 130.
  • the receiving device 130 may be understood as a device capable of detecting and collecting audio signals, such as a microphone, or video signals, such as a camera.
  • the receiving device 130 may typically be collocated with either one of the second communication device 102 or the third communication device 103. In the non-limiting example depicted in Figure 1, the receiving device 130 is co-located with the third communication device 103.
  • the first communication device 101 is configured to communicate within the telecommunications network 100 with the second communication device 102 over a first link 141, e.g., a radio link or a wired link.
  • the first communication device 101 is also configured to communicate within the telecommunications network 100 with the fourth communication device 104 over a second link 142, e.g., a radio link or a wired link.
  • each of the first communication device 101 , the second communication device 102, the third communication device 103 and the fourth communication device 104 when implemented as separate devices, may communicate with each other with a respective link, which may be a wired or a wireless link.
  • Any of the first link 141, the second link 142 and the third link 143 may be a direct link, or it may go via one or more core networks in the telecommunications network 100, which are not depicted in Figure 1, or via an optional intermediate network.
  • the intermediate network may be one of, or a combination of more than one of, a public, private or hosted network; the intermediate network, if any, may be a backbone network or the Internet; in particular, the intermediate network may comprise two or more sub-networks (not shown).
  • The terms “first”, “second”, “third”, “fourth” and “fifth” herein may be understood to be an arbitrary way to denote different elements or entities, and may be understood to not confer a cumulative or chronological character to the nouns they modify.
  • Embodiments of a method performed by the first communication device 101, the method being for handling a recommendation to the second communication device 102, will now be described with reference to the flowchart depicted in Figure 2.
  • the first communication device 101 and the second communication device 102 operate in the telecommunications network 100.
  • the method may comprise one or more of the following actions. In some embodiments all the actions may be performed. In some embodiments, one or more actions may be performed. One or more embodiments may be combined, where applicable. All possible combinations are not described to simplify the description. In Figure 2, optional actions are indicated with dashed lines.
  • the first communication device 101 may be understood as a recommender system, performing a recommendation function, that is, as a device determining recommendations.
  • the content may be a media content, such as a movie or a song
  • the third communication device 103 may be a media streaming device, e.g., a TV set-top-box, as depicted in Figure 1.
  • the first communication device 101 may determine the recommendation based on contextual factors that may influence a preference of one or more users of the third communication device 103 in any given place and time.
  • the contextual factors may be derived from audio signals and/or video signals.
  • the determination of the recommendation may be based on audio signals and/or video signals collected in a space where the one or more users of the third communication device 103 are located during a time period.
  • the time period may be understood as a certain period of time preceding when the one or more users of the third communication device 103 are to be provided with the recommendation for content, e.g., a few minutes before a family is about to receive a recommendation to watch a movie.
  • the audio signals and/or video signals may be collected by the receiving device 130 located in the space.
  • the space may be defined by, e.g., an operator of the telecommunications network 100, and may cover a certain three-dimensional zone around the receiving device 130, which may cover for example a room where the third communication device 103 may be located.
  • the first communication device 101 may, in this Action, obtain a characterization of a context of the third communication device 103.
  • the characterization may be understood as a series of characteristics defining the particular context for the one or more users of the third communication device 103 in the time period, as described above.
  • Obtaining may be understood as comprising determining, calculating, or receiving the obtained characterization from another communication device in the telecommunications network 100, such as from the fourth communication device 104, e.g., via the second link 142.
  • the obtaining of the characterization is summarized here, but it is described in further detail for the fourth communication device 104, in relation to Figure 3.
  • the description provided for the fourth communication device 104 should be understood to apply to the first communication device 101 as well, for the examples wherein the first communication device 101 may determine the characterization itself.
  • the characterization may be obtained by determining, based on a processing of the signals collected, at least one of: one or more first factors, and one or more second factors, as follows.
  • the signals may be audio signals.
  • the recommendation may be further based on one or more first factors obtained from a first analysis of the audio signals collected.
  • the one or more first factors may comprise at least one of: a) one or more characteristics of the one or more users from: number, gender, age, and identity of the one or more users; b) a first mood derived from a tone of one or more voices detected in the audio signals, wherein the first mood may be understood to correspond to that of at least one of the one or more users; c) a second mood derived from a first semantic analysis of a language used by the one or more voices detected in the audio signals, wherein the second mood may be understood to correspond to that of at least one of the one or more users; and d) a topic of discussion derived from a second semantic analysis of the language used by the one or more voices detected in the audio signals.
  • the one or more factors may be obtained by at least one of the following options.
  • the one or more characteristics may be derived by segmenting the audio signals collected during the time period, into single speaker segments.
  • the first mood may be derived based on a natural language processing of a transcript of the single speaker segments, obtained by Automatic Speech Recognition.
  • the first semantic analysis may be based on one or more first language models.
  • the one or more first language models may be, for example, bag-of-words models or Vector Space Models of sentences, which may enable mood classification through different weighting schemes and similarity measures, where the models may have been trained on mood-labelled training data.
  • the second semantic analysis may be based on one or more second language models.
  • the one or more second language models may be, for example, bag-of-words models or Vector Space Models of sentences that may enable topic classification through different weighting schemes and similarity measures, where the models may have been trained on topic-labelled training data.
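  • A minimal sketch, assuming scikit-learn is available and using an invented mood-labelled training set, of the kind of bag-of-words weighting scheme mentioned above; topic classification would follow the same pattern with topic-labelled training data:

```python
# Sketch of mood classification with a bag-of-words model, assuming scikit-learn
# is installed. The tiny mood-labelled training set below is purely illustrative.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_sentences = [
    "this is wonderful, I am so happy tonight",
    "what a great evening, let's celebrate",
    "I am exhausted and just want something calm",
    "such a long stressful day, I feel worn out",
]
train_moods = ["upbeat", "upbeat", "tired", "tired"]

# Bag-of-words weighting (TF-IDF) plus a linear classifier over the weights.
mood_model = make_pipeline(TfidfVectorizer(), LogisticRegression())
mood_model.fit(train_sentences, train_moods)

# Transcript of a single-speaker segment obtained from ASR (hypothetical text).
segment_transcript = "we are all so tired after this stressful week"
print(mood_model.predict([segment_transcript])[0])
```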
  • the signals may be video signals.
  • the recommendation may be further based on one or more second factors obtained from a second analysis of the video signals collected, wherein the one or more second factors comprise at least one of: a) the one or more characteristics of the one or more users from: number, gender, age, and identity of the one or more users; and b) a third mood derived from at least one of: a body movement and a gesture detected in each of the one or more users distinguished in the video signals.
  • the first communication device 101 determines the recommendation for the item of content to be provided by the third communication device 103 operating in the telecommunications network 100.
  • the determination of the recommendation is based on the signals collected by the receiving device 130 located in the space where the one or more users of the third communication device 103 are located, during the time period, the signals being at least one of: audio signals and video signals.
  • the determining 202 of the recommendation may be further based on the obtained characterization of the context in Action 201.
  • the characterization of the context used by the first communication device 101 may be, e.g., the latest one available in a Client User Database, if it is not too old. If no up-to-date characterization is available, the first communication device 101 may request mood tips from the user, via e.g., a user interface, or use an average characterization available in the Client User Database.
  • the first communication device 101 may base the determination of the recommendation on a preference profile and a content access profile of the user.
  • the first communication device 101 may have compiled a list of recommended content, e.g., a list of recommended movies.
  • the preference profile of the user may comprise information on the preferences of the user, likes and dislikes, and previous choices in different contexts, a list of other users with similar taste and preferences, and other related information, while the content access profile of the user may have information about what content the user has access to, based on the type of subscription the user may have. Given the user context, a recommendation selection may be made from the content available to the user, as stipulated by e.g., the subscription of the user, which may best correlate with the preferences of the user in that context. This may be achieved through different means. Collaborative and content-based filtering may be used on the contextually conditioned user preferences. As more data becomes available, another approach may be to compute posterior probabilities for each available content item, conditioned on the profile information of the user in that context, as sketched below.
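  • As a toy illustration of the posterior-probability alternative (the context labels, item names and counts below are invented), each item accessible under the user's subscription can be scored conditioned on the current context:

```python
# Toy Bayesian scoring of content items conditioned on a context label.
# The selection counts are hypothetical stand-ins for a user's history.

# counts[context][item] = how often the user picked `item` in `context`.
counts = {
    "family_evening": {"animated_movie": 8, "nature_doc": 3, "action_movie": 1},
    "couple_late":    {"animated_movie": 1, "nature_doc": 2, "action_movie": 6},
}

def posterior(context, accessible_items, alpha=1.0):
    """P(item | context) with add-alpha smoothing over the accessible catalogue."""
    ctx = counts.get(context, {})
    total = sum(ctx.get(i, 0) for i in accessible_items) + alpha * len(accessible_items)
    return {i: (ctx.get(i, 0) + alpha) / total for i in accessible_items}

# Items the subscription gives access to (content access profile).
catalogue = ["animated_movie", "nature_doc", "action_movie"]
scores = posterior("family_evening", catalogue)
recommendation = max(scores, key=scores.get)
print(scores, "->", recommendation)
```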
  • the first communication device 101 initiates sending a first indication of the determined recommendation to the second communication device 102.
  • the first indication may be, for example, the list of recommended content mentioned in the previous Action, e.g., a list of recommended movies, or a marking, e.g., an asterisk, next to an already existing list of movies, pointing to those that are recommended, or a streamed preview of one or more recommended movies, for example.
  • the first indication may be presented to the user on the screen through the user interface 110.
  • the sending may be performed via the first link 141.
  • the second communication device 102 may be the third communication device 103. That is, the recommendation may be directly provided on the device where the one or more users may eventually obtain the content, e.g., watch a movie, whether or not it may be the one recommended by the first communication device 101.
  • Embodiments of a method performed by the fourth communication device 104, the method being for handling the characterization of the context of the third communication device 103 having the one or more users will now be described with reference to the flowchart depicted in Figure 3. As stated earlier, the fourth communication device 104 and the third communication device 103 operate in the telecommunications network 100.
  • the content may be a media content
  • the third communication device 103 may be a media streaming device.
  • the method may comprise one or more of the following actions. In some embodiments all the actions may be performed. In some embodiments, one or more actions may be performed. One or more embodiments may be combined, where applicable. All possible combinations are not described to simplify the description. In Figure 3, an optional action is indicated with dashed lines.
  • the fourth communication device 104 may be understood as an Audio Scene Analysis (ASA) System or audio signal analyser, and/or as a Video Scene Analyzer (VSA) system, which may determine the characterization of the context of the third communication device 103.
  • ASA Audio Scene Analysis
  • VSA Video Scene Analyzer
  • the fourth communication device 104 may initially obtain the signals collected by the receiving device 130 located in the space where the one or more users of the third communication device 103 are located, during the time period.
  • the signals may be at least one of: audio signals and video signals.
  • Obtaining may be understood as comprising collecting, in examples wherein the receiving device 130 may be co-located with the fourth communication device 104, or as receiving from another device in the telecommunications network 100, such as from the receiving device 130, e.g., via a wired or wireless link.
  • the one or more factors may be obtained from an analysis of the obtained signals.
  • the one or more factors may comprise at least one of: a) the one or more characteristics of the one or more users from: number, gender, age, and identity of the one or more users; b) the first mood derived from a tone of one or more voices detected in the audio signals; c) the second mood derived from the first semantic analysis of a language used by one or more voices detected in the audio signals; d) the topic of discussion derived from the second semantic analysis of the language used by the one or more voices detected in the audio signals; and e) the third mood derived from at least one of: the body movement and the gesture detected in each of the one or more users.
  • the analysis of the obtained signals may require a general set of language models, gender models and age models, speaker and acoustic models for the enrolled speakers, visual recognition models, such as spatial gesture models, which may be three dimensional (3D) volumetric models or two-dimensional (2D) appearance based models, and facial expression models which may be implemented with convolutional neural nets in a similar way that a general image object classification may be performed.
  • the fourth communication device 104 may obtain a model for each one of the one or more factors, based on repeatedly performing the obtaining 301 of the signals and the determining of the one or more factors over a plurality of time periods.
  • obtaining may be understood in this Action 302 as comprising determining, calculating, or building, with for example, machine learning methods.
  • Obtaining may also comprise receiving from another communication device in the telecommunications network 100, e.g., via a wired or wireless link or as retrieving, e.g., from a database, in examples wherein the model may have previously been calculated by the fourth communication device 104 or another communication device in the telecommunications network 100.
  • the fourth communication device 104 may be a distributed node, and some of its functionality may be performed on, e.g., a Media Server, and some of its functionality may be performed locally, e.g., on a client in a TV set-top-box.
  • the Media Server may have, in some examples, e.g., a Server Database with ASA and/or VSA models for all users in the system, and the client may always check with the server if its models are up to date and do an update when needed.
  • the speaker, acoustic, and visual models for each enrolled speaker may be continually updated as more training data may become available for training those models.
  • the training may be performed in the media server.
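  • One possible shape of such a client-side freshness check is sketched below; the endpoint, payload format and version numbers are assumptions for illustration and are not specified by the disclosure:

```python
# Sketch of a client checking whether its locally cached ASA/VSA models are
# up to date with the Server Database; the URL and payload are hypothetical.
import json
import urllib.request

LOCAL_MODEL_VERSIONS = {"speaker": 12, "gesture": 7, "facial_expression": 4}

def outdated_models(server_url="http://media-server.example/models/versions"):
    """Return the model names whose server-side version is newer than ours."""
    with urllib.request.urlopen(server_url, timeout=5) as resp:
        server_versions = json.load(resp)       # e.g. {"speaker": 13, ...}
    return [name for name, local in LOCAL_MODEL_VERSIONS.items()
            if server_versions.get(name, 0) > local]

# A client manager could call outdated_models() periodically and then
# download only the models whose versions have changed.
```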
  • the fourth communication device 104 determines the characterization of the context of the third communication device 103 by determining the one or more factors obtained from the analysis of the obtained signals.
  • the one or more factors may comprise any of the one or more first factors, and the one or more second factors, as described earlier.
  • the signals may be audio signals.
  • the fourth communication device 104 is an Audio Scene Analysis (ASA) System
  • the fourth communication device 104 may use state of the art speech analysis and recognition tools to segment the signals collected by the receiving device 130, for example, to segment an audio signal into single speaker segments which may be classified by language, gender, age, speaker identifier, and speech mood and tone. From the different single speaker segments, an estimate may be made of the number of people, gender mix, age mix, which enrolled speakers are present, and the overall speech mood and tone.
  • Automatic Speech Recognition (ASR) may also be used on these segments to obtain a transcript of what may be being said. Natural language processing may be used on those transcripts to estimate the sentiment in the language and a characterization of the discussion.
  • the signals may be audio signals
  • at least one of the following may apply: a) the one or more characteristics may be derived by segmenting the audio signals collected during the time period into single speaker segments; b) the first mood may be derived based on a natural language processing of a transcript of the single speaker segments, obtained by Automatic Speech Recognition; c) the first semantic analysis may be based on the one or more first language models; and d) the second semantic analysis may be based on the one or more second language models.
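  • The audio analysis chain described above could be organised roughly as in the following skeleton; every helper here is a stub, since the actual segmentation, ASR and classification components are not specified at this level:

```python
# Skeleton of the Audio Scene Analysis flow described above. segment_speakers()
# is a placeholder: real diarization, ASR and classifiers would replace the
# stubbed return values below.
from collections import Counter

def segment_speakers(audio):
    """Split audio into single-speaker segments (stubbed)."""
    return [{"speaker": "A", "gender": "F", "age": 35, "tone_mood": "calm",
             "transcript": "shall we watch something relaxing"},
            {"speaker": "B", "gender": "M", "age": 7, "tone_mood": "excited",
             "transcript": "I want cartoons"}]

def analyse_audio_scene(audio):
    segments = segment_speakers(audio)
    speakers = {s["speaker"] for s in segments}
    overall_tone = Counter(s["tone_mood"] for s in segments).most_common(1)[0][0]
    return {
        "num_users": len(speakers),
        "gender_mix": sorted({s["gender"] for s in segments}),
        "age_mix": sorted({s["age"] for s in segments}),
        "tone_mood": overall_tone,
        # The transcripts would feed the language models sketched earlier
        # to obtain the language mood and the topic of discussion.
        "transcripts": [s["transcript"] for s in segments],
    }

print(analyse_audio_scene(audio=None))
```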
  • the signals may be video signals.
  • the fourth communication device 104 is a Video Scene Analysis (VSA) System
  • the fourth communication device 104 may use, for example, a) facial detection algorithms to identify the people at the location, which may be used to identify the people in the group and thus classify which group it may be; b) facial expression classification algorithms to estimate the facial-mood of the people in the group; and c) movement and gesture analysis methods to estimate the motion-mood of the people in the group.
  • VSA Video Scene Analysis
  • the one or more factors which in these embodiments may be understood to correspond to the one or more second factors, may comprise at least one of : a) the one or more characteristics of the one or more users from: number, gender, age, and identity of the one or more users; and b) the third mood derived from at least one of: the body movement and the gesture detected in each of the one or more users.
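  • A minimal sketch of the first VSA step (estimating the number of users via face detection), assuming the OpenCV library is installed; facial-expression and gesture classification would operate on the detected regions and are only indicated by comments:

```python
# Sketch of estimating the number of users in a video frame with a standard
# OpenCV Haar-cascade face detector; assumes opencv-python is installed.
import cv2

face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def count_faces(frame_bgr):
    """Return bounding boxes of detected faces in one video frame."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    faces = face_detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    # Each box could then be cropped and passed to a facial-expression
    # classifier (e.g. a convolutional net) to estimate the third mood.
    return faces

# Example usage: read one frame from a camera or file before calling count_faces.
# cap = cv2.VideoCapture(0); ok, frame = cap.read(); print(len(count_faces(frame)))
```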
  • the time period over which the scene analysis may be performed may typically be a few minutes, and the context characterization for that time period may be sent to a client controller or client Control Function (CLIENT-CTRL) in the fourth communication device 104, e.g., in the set-top-box, which may time-stamp and store the results in a Client User Database.
  • CLIENT-CTRL client Control Function
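  • For illustration, a small sketch of the time-stamping and storage step, together with the freshness check used when the latest characterization is looked up in the Client User Database; the table layout and the staleness threshold are assumptions:

```python
# Sketch of a Client User Database storing time-stamped context
# characterizations; schema and freshness window are illustrative assumptions.
import json, sqlite3, time

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE context (ts REAL, characterization TEXT)")

def store_characterization(factors: dict):
    """Time-stamp and store one context characterization."""
    db.execute("INSERT INTO context VALUES (?, ?)",
               (time.time(), json.dumps(factors)))
    db.commit()

def latest_characterization(max_age_s=600):
    """Return the latest characterization if it is not older than max_age_s."""
    row = db.execute("SELECT ts, characterization FROM context "
                     "ORDER BY ts DESC LIMIT 1").fetchone()
    if row and time.time() - row[0] <= max_age_s:
        return json.loads(row[1])
    return None   # caller may fall back to asking the user or to an average

store_characterization({"num_users": 3, "tone_mood": "cheerful"})
print(latest_characterization())
```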
  • the fourth communication device 104 initiates sending a second indication of the determined characterization of the context to the first communication device 101 operating in the telecommunications network 100 or another communication device 102, 103, 104 operating in the telecommunications network 100.
  • to initiate sending may comprise the fourth communication device 104 sending itself, e.g., via the second link 142, or triggering another network node to send.
  • the second indication may be, for example, a message comprising a code for the determined characterization, or a comprehensive list of the determined one or more factors.
  • the second indication may further indicate the obtained model for each one of the one or more factors.
  • Embodiments of a method performed by a second communication device 102, the method being for handling a recommendation from a first communication device 101 will now be described with reference to the flowchart depicted in Figure 4. As stated earlier, the first communication device 101 and the second communication device 102 operate in a telecommunications network 100.
  • the content may be a media content
  • the third communication device 103 may be a media streaming device.
  • the method may comprise one or more of the following actions. In some embodiments all the actions may be performed. In some embodiments, one or more actions may be performed. One or more embodiments may be combined, where applicable. All possible combinations are not described to simplify the description. Note that in some embodiments, the order of the Actions may be changed. In Figure 4 an optional action is indicated with dashed lines.
  • the second communication device 102 may be understood as a communication device receiving the recommendation determined by the first communication device 101.
  • the second communication device 102 receives the first indication for the recommendation for the item of content to be provided by the third communication device 103 operating in the telecommunications network 100.
  • the recommendation is based on signals collected by a receiving device 130 located in a space where one or more users of the third communication device 103 are located, during the time period.
  • the signals are at least one of: audio signals and video signals.
  • the receiving may be performed, e.g., via the first link 141.
  • the second communication device 102 initiates providing, to the one or more users, a third indication of the received recommendation on an interface 110 of the second communication device 102.
  • the third indication, similarly to the first indication, may be, for example, a list of recommended movies, or a marking, e.g., an asterisk, next to an already existing list of movies, pointing to those that are recommended, or a streamed preview of one or more recommended movies.
  • the second communication device 102 may, in this Action, initiate providing the item of content on the third communication device 103, based on a selection received from the one or more users on the interface 110, the selection being further based on the provided third indication.
  • to initiate providing may comprise the second communication device 102 providing itself, e.g., via the third link 143, or triggering another network node to provide.
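  • A minimal sketch of these two actions of the second communication device 102 is given below; the interface and streaming objects are simple stand-ins, and all names are assumptions made for illustration.

```python
class Interface:
    """Stand-in for the interface 110 of the second communication device 102."""
    def display_list(self, items, marker="*"):
        for item in items:                 # third indication: mark recommended items
            print(f"{marker} {item}")
    def wait_for_selection(self):
        return "movie-42"                  # a real UI would return the user's choice

class StreamingDevice:
    """Stand-in for the third communication device 103."""
    def play(self, item):
        print(f"starting playback of {item}")

def handle_recommendation(first_indication, interface, streaming_device):
    recommended = first_indication["items"]     # receive the first indication
    interface.display_list(recommended)         # provide the third indication on the interface
    selection = interface.wait_for_selection()  # selection based on the third indication
    if selection is not None:
        streaming_device.play(selection)        # initiate providing the item of content

handle_recommendation({"items": ["movie-42", "series-7"]}, Interface(), StreamingDevice())
```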
  • particular examples herein may relate to providing a recommendation based on a current context characterization, e.g., number of people, gender mix, age mix, which persons are comprised in the one or more users, speech tone and mood, language mix, discussion mood and topic characterization, etc., and applying machine-learning techniques to train one or more models for context characterization.
  • One advantage of embodiments herein is that the methods described enable providing recommendations that may be optimally adapted to the current context and mood of the one or more users, and thus provide the one or more users with much more relevant recommendations than those based on collaborative and content-based filtering methods.
  • Figure 5 is a schematic block diagram illustrating an interaction between the fourth communication device 104, which in this example is an Audio Scene Analyzer, and the first communication device 101 , referred to in the Figure as a Recommender System.
  • the fourth communication device 104, the Audio Scene Analyzer, may receive, according to Action 301, audio signals from the receiving device 130, e.g., microphones, in the space where the one or more users are located.
  • In Action 303, it may use speech and natural language analysis tools to characterize current context factors in the space over the time period, such as: number of people, gender mix, age mix, which known persons are present, mood and tone of voices, language mix, and discussion mood and topic of the discussion.
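  • Purely as a sketch of what such a characterization could look like, the snippet below aggregates per-speaker analysis results into the context factors listed above; the per-segment fields stand in for the output of real speech and natural language analysis tools and are assumptions.

```python
def characterize_audio_scene(speaker_segments):
    """speaker_segments: one dict per single-speaker segment, e.g.
    {"speaker": "anna", "gender": "female", "age": 34,
     "mood": "cheerful", "language": "en"}."""
    speakers = {s["speaker"] for s in speaker_segments}
    return {
        "number_of_people": len(speakers),
        "gender_mix": sorted({s["gender"] for s in speaker_segments}),
        "age_mix": sorted({s["age"] for s in speaker_segments}),
        "known_persons": sorted(speakers),
        "mood_and_tone": [s["mood"] for s in speaker_segments],
        "language_mix": sorted({s["language"] for s in speaker_segments}),
        # discussion mood and topic would come from semantic analysis of the
        # transcripts, omitted in this sketch
    }

print(characterize_audio_scene([
    {"speaker": "anna", "gender": "female", "age": 34, "mood": "cheerful", "language": "en"},
    {"speaker": "ben", "gender": "male", "age": 36, "mood": "calm", "language": "en"},
]))
```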
  • the second indication of the determined characterization of the context may then be sent to the first communication device 101 , according to Action 304.
  • the first communication device 101 obtains the second indication according to Action 201.
  • According to Action 202, the Recommender System may retrieve the context dependent preferences of a user that may be available in the user preference profile in e.g., a user database 501, which may then be used to arrive at a context dependent recommendation of the assets that may be available to the user, as stipulated by e.g., a subscription of the user, and which may be available at e.g., an asset database 502.
  • An asset may be understood as an item of content or content item, such as a movie, a TV series, etc.
  • the preference profile of a user may comprise information relating to the preferences of the user, such as likes and dislikes, and previous choices in different contexts, a list of other users with similar taste and preferences and other related information.
  • the Recommender System here may therefore be understood as being a context and user profile based Recommender System.
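  • The scoring below is a deliberately simple stand-in for such a context and user profile based recommendation: assets from the asset database are ranked by overlap with the general preferences and the context dependent preferences of the user. It is a sketch under assumed data shapes, not the claimed method.

```python
def recommend(context, preference_profile, assets, top_n=5):
    liked = set(preference_profile.get("liked_genres", []))
    # context dependent preferences keyed, for example, by discussion mood
    context_bias = set(preference_profile.get("per_context", {})
                       .get(context.get("discussion_mood", ""), []))
    def score(asset):
        genres = set(asset["genres"])
        return 2 * len(genres & context_bias) + len(genres & liked)
    return sorted(assets, key=score, reverse=True)[:top_n]

assets = [{"id": "movie-42", "genres": ["comedy"]},
          {"id": "doc-7", "genres": ["documentary", "travel"]}]
profile = {"liked_genres": ["comedy"], "per_context": {"calm": ["documentary"]}}
print(recommend({"discussion_mood": "calm"}, profile, assets))
```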
  • the first communication device 101 may be co-located, and integrated with the fourth communication device 104 in a same system.
  • a recommender system which may be a TV recommender system.
  • This recommender system may typically be a part of an Internet Protocol Television (IPTV) service solution with some of the functions implemented locally in a set-top-box, and others on a media server in the cloud.
  • a simplified schematic block diagram of such a system is illustrated in the non-limiting example of Figure 6, depicting function blocks and databases at the client and server side.
  • the fourth communication device 104 is an Audio Scene Analysis system. Audio signals may be obtained, according to Action 301 , from microphones in the space and over the time period.
  • the microphones may, for example, be directional microphones.
  • the Audio Scene Analysis is performed by the fourth communication device 104 in the client to provide a characterization of the audio scene according to Action 303, which is then used as part of the input to the first communication device 101 , the recommendation system, on the server.
  • the Recommendation System may, according to Action 203, send the first indication as a list of recommended content items to the second communication device 102 on the client, which is presented to the user, according to Action 402, on the screen of the reproducing device 120, here a TV, through the user interface 110.
  • the user may then be able to inspect the recommended content items through the user interface 110, and choose a content item to watch.
  • the user input may be provided through a remote control interface 601 implemented on a smart phone or a remote controller device.
  • a video renderer 602 may be used to, according to Action 402, initiate providing the item of content on the third communication device 103, based on the selection received from the one or more users.
  • the selected content may be retrieved from a server content database 603.
  • the fourth communication device 104 may have access to a Client User Database (Client User DB) 604 and a Client Audio Scene Analysis Database (Client ASA DB) 605.
  • the ASA models mentioned in Action 302 may be stored locally in the Client ASA DB 605.
  • the Client ASA DB 605 in, for example, Figure 6 may contain all language-, speaker-, gender- and acoustic-models that may be required by the fourth communication device 104 to perform its audio scene analysis, as well as the short term storage of audio data that may be used to train and update these models as more audio may become available.
  • the fourth communication device 104 may have an Application Program Interface (API) for managing the retrieval, training and updating of these models on or from the server.
  • On the Media server, the fourth communication device 104 may have access to the Server ASA Database (DB) 502 with ASA models for all users in the system, and the client may always check with the server whether its models are up to date and do an update when needed, as described in Action 302.
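  • A hedged sketch of that up-to-date check is shown below: the client compares its local model versions in the Client ASA DB 605 against the versions reported by the server and downloads whatever is newer. The version numbers and the fetch callback are assumptions.

```python
def sync_asa_models(client_models, server_models, fetch_model):
    """client_models / server_models: dicts mapping a model id to a version number."""
    updated = dict(client_models)
    for model_id, server_version in server_models.items():
        if client_models.get(model_id, -1) < server_version:
            fetch_model(model_id, server_version)   # download into the Client ASA DB
            updated[model_id] = server_version
    return updated

client = {"speaker-model": 3, "acoustic-model": 1}
server = {"speaker-model": 3, "acoustic-model": 2, "language-model-en": 1}
print(sync_asa_models(client, server, lambda m, v: print(f"fetching {m} v{v}")))
```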
  • the user database on the client may contain only user profiles of enrolled users of a particular account, while on the server side, the Server User DB 501 may contain profiles of all users in all accounts with information on the subscribed services and packages and other relevant account information.
  • the context characterization along with the user identification may be sent by the fourth communication device 104 from the client, via a client controller (Client CTRL) 606 to a server controller (Server CTRL) 607 in the media server.
  • Client CTRL 606 and the Server CTRL 607 are described in relation to Figure 7.
  • the first communication device 101 may compile the list of recommended content items from the available content in the Server Content Database (Server Content DB) 603, based on the context characterization and the preference profile of the user. This list may then be sent to the client and presented to the user on the screen of the reproducing device 120 through the user interface 110.
  • Figure 7 gives a more detailed description of the different control functions in a recommender system according to a non-limiting example herein, wherein functions of the first communication device 101 , the second communication device 102, the third communication device 103, and the fourth communication device 104 are integrated into a same system, with some functions implemented locally on a Media Client, and some of the functions implemented on a Media Server, e.g., in the cloud.
  • Figure 7 depicts some of the main function managers and databases at the client and server side, as well as the interfaces between the paired control managers in each control management layer on the device and the server for each one of the communication devices.
  • the fourth communication device 104 is also an Audio Scene Analysis system.
  • the fourth communication device 104 may comprise a Client ASA Manager 701 and a Server (Serv) ASA Manager 702, which may manage the Audio Scene Analysis functionality of the system, as explained in the section above for the fourth communication device 104 in relation to Figure 3.
  • the first communication device 101 may comprise a Client Recommender (Recm) Manager 703 and a Serv Recm Manager 704, which may manage the recommendation functionality of the system, as explained in the section above for the first communication device 101 in relation to Figure 2.
  • the third communication device 103 may comprise a Client Streaming (Strm) Manager 705 and a Serv Strm Manager 706, which may manage the streaming functionality of the system, which would control the streaming of the chosen content item from the server to the client. This may include functionality such as continuously optimizing the user experience for the available bandwidth and capabilities of rendering devices in the client.
  • the second communication device 102 may comprise a Client User (Usr) Manager 707 and a Serv Usr Manager 708, which may manage the user accounts functionality as has been explained in the sections above. This entails authenticating the users, verifying what functionality they may be entitled to through their subscription account, enabling that functionality in the system, and keeping all account and user information secure and updated.
  • One function in particular that may be managed by these managers may be the User Enrolment into the system, which may be understood to have the purpose of a) initializing some of the user profile information that may be used to enable personalized recommendations, and b) providing audio data that may be used to adapt ASA models to the user.
  • Each account may have several users, and each user may need to be enrolled into the system by providing, for example, a) an initial set of TV programs and films they like, and, b) one or more speech samples of a set of predefined sentences.
  • the data from a) may be used to initialize the preference profile of the user and the data from b) may be used to adapt the Speaker Models (SM) and Acoustic Models (AM) that may be used in the Audio Scene Analyzer.
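  • For illustration, the two enrolment inputs could be captured as below; the record layout is an assumption, with (a) initialising the preference profile and (b) collected for later Speaker Model and Acoustic Model adaptation.

```python
def enrol_user(user_id, liked_titles, speech_samples):
    """liked_titles: initial set of TV programs and films the user likes (a).
    speech_samples: recordings of a set of predefined sentences (b)."""
    preference_profile = {
        "user_id": user_id,
        "liked_titles": list(liked_titles),   # initialises the preference profile
        "contextual_choices": [],             # filled in as the user makes choices
    }
    adaptation_data = {
        "user_id": user_id,
        "samples": list(speech_samples),      # used to adapt the SM and AM models
    }
    return preference_profile, adaptation_data

profile, adaptation = enrol_user("user-1", ["Some Series", "Some Film"], ["sample_01.wav"])
print(profile["liked_titles"], len(adaptation["samples"]))
```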
  • Security may be a concern for the user.
  • the user may not feel comfortable allowing the system to store the audio captured in their environment on remote servers, even if only temporarily, until enough data has been stored to perform updates of an ASA model and/or a VSA model for the user.
  • two alternative solutions may be provided.
  • In a first solution, there may be no storage of audio on a remote server and no training of ASA and/or VSA models, where the captured speech and/or video signals may only be used to generate the current context in the ASA and/or VSA module on the client, and then deleted. Since this solution prohibits full training of ASA and/or VSA models on the server with machine learning algorithms, the user specific ASA and/or VSA models may not be optimal, resulting in poorer recommendation results.
  • In a second solution, temporary storage of audio and/or video data on the server may be allowed and used for the training of ASA and/or VSA models, where the captured speech and/or video may be used to generate the current context, and then sent to the cloud for temporary storage. There it may be used to train the ASA and/or VSA models for better recommendations in the future.
  • This speech and/or video data may only be used for the purpose of training ASA and/or VSA models of speech and/or video, and a user- agreement may cover this use of the data.
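  • The two alternatives could be expressed as a simple per-account policy switch, as sketched below; the policy names and callbacks are illustrative assumptions.

```python
NO_REMOTE_STORAGE = "no_remote_storage"          # first solution: analyse locally, then delete
TEMPORARY_REMOTE_STORAGE = "temporary_storage"   # second solution: store temporarily for training

def handle_captured_signals(signals, policy, analyse_locally, upload_for_training):
    context = analyse_locally(signals)   # the current context is always generated on the client
    if policy == TEMPORARY_REMOTE_STORAGE:
        upload_for_training(signals)     # only permitted under the user agreement
    # under NO_REMOTE_STORAGE the raw audio/video is simply discarded here
    return context

ctx = handle_captured_signals(
    signals=b"...raw audio...",
    policy=NO_REMOTE_STORAGE,
    analyse_locally=lambda s: {"number_of_people": 2},
    upload_for_training=lambda s: print("uploading for model training"))
print(ctx)
```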
  • the second communication device 102 may comprise the interface 110, a Client Usr Interface (Intf), which may manage the interface between the one or more users and the system, where user input may be acquired through a remote control interface implemented on a smart phone or a remote control device 601 and graphical objects displayed on the TV screen of a reproducing device 120 with the assistance of the video renderer 602.
  • Figure 8 depicts a non-limiting example of a process of context characterization generation, for an integrated system according to embodiments herein.
  • the fourth communication device 104 is also an Audio Scene Analysis system.
  • Figure 8 describes a sequence diagram of the execution of a scene analysis cycle to perform online speech analysis to generate a current Context Characterisation. Please note that, for both sequence diagrams of Figures 8 and 9, for the purpose of simplicity, component blocks are depicted at a higher level of abstraction than those presented in Figure 7 and the details are considered implicitly. For example, the Client_CTRL 606 and Server_CTRL 607 are depicted, while the internal components of both of these controls are not depicted.
  • the fourth communication device 104 in the system automatically checks for the updated Models to be used for the Context Characterization process in Action 302.
  • Action 302 may comprise steps 2-4 and 6-7 of Figure 8. Note that this checking, and the consequent downloading of the models to the Client_ASA database 605, may be performed on different occasions: for example, models may be downloaded when the user starts a session, as shown in this figure, or they may be downloaded after a particular time, e.g., once a day or once a week, based on the user settings that the user may set during the user enrolment process (a sketch of this check follows the steps below).
  • the Client_CTRL 606 asks the Server_CTRL 607 for the new Models.
  • the Server_CTRL 607 checks for the new Models in the Server_ASA database 502 that stores the Models.
  • a continuous scanning is done in Action 301 , and audio is sent to the Client_CTRL 606 for the time period.
  • the time period may be based on a user setting and may vary.
  • the Client_CTRL 606 asks for the Models from Client_ASA database 605.
  • the Client_ASA database 605 sends Models back.
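  • The download timing choice mentioned above for Action 302 (at session start, or once a day or once a week per the user settings) can be sketched as below; the setting names are assumptions.

```python
import time

def should_refresh_models(settings, last_download_ts, session_start=False):
    """Decide whether the client should ask the server for new Models."""
    if session_start and settings.get("refresh_on_session_start", True):
        return True
    interval = settings.get("refresh_interval_seconds", 7 * 24 * 3600)  # default: weekly
    return time.time() - last_download_ts >= interval

# Example: with a daily interval and a two-day-old download, a refresh is due.
print(should_refresh_models({"refresh_interval_seconds": 24 * 3600},
                            last_download_ts=time.time() - 2 * 24 * 3600))
```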
  • Action 303 may start when the Client_CTRL 606 of the fourth communication device 104 provides the collected audio and the Models to the ASA MGR 701 for analysis.
  • Action 303 may comprise steps 8-10 of Figure 8.
  • the ASA MGR 701 performs the analysis.
  • Action 304 may start when the ASA 104 sends this Context Characterization to the Client_CTRL 606.
  • Action 304 may comprise steps 11-14 of Figure 8.
  • the Client_CTRL 606 stores this Characterization in the Client_User database 604 to be used further
  • the Client_CTRL 606 forwards this Characterization along with User identification to the Server_CTRL 607, according to Action 304.
  • the Server_CTRL 607 stores this Characterization along with User identification in the Server_User database 501 to be used further.
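  • A minimal sketch of steps 11-14 follows, with the two user databases represented as plain dictionaries and the client-to-server message replaced by a direct call; everything here is illustrative.

```python
import time

def handle_characterization(user_id, characterization, client_user_db, server_user_db):
    record = {"user_id": user_id,
              "timestamp": time.time(),
              "characterization": characterization}
    client_user_db.setdefault(user_id, []).append(record)   # store in the Client_User database
    # forward the characterization along with the user identification to the server
    server_user_db.setdefault(user_id, []).append(record)   # store in the Server_User database
    return record

client_db, server_db = {}, {}
handle_characterization("user-1", {"number_of_people": 2, "discussion_mood": "calm"},
                        client_db, server_db)
print(len(client_db["user-1"]), len(server_db["user-1"]))
```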
  • Figure 9 illustrates a non-limiting example of a sequence diagram describing the execution of the recommendation function of a recommendation system such as the first communication device 101, according to embodiments herein.
  • Figure 9 describes a sequence diagram to perform an online recommendation based on the current Context Characterisation, as provided for example by the fourth communication device 104.
  • a user of the one or more users asks for the recommendation.
  • the Client_CTRL 606 of the first communication device 101 forwards the request to the Server_CTRL 607.
  • the Server_CTRL 607 sends the User ID to the Server_User database 501.
  • the Server_User database 501 sends back the preferences of the user. Note that these preferences are set by the user during the initial user enrolment and are also based on the details of the user's subscription.
  • the Server_CTRL 607, which may have obtained the characterization of the context as described in steps 11-12 of Figure 8, asks the Server_Content database 603 to provide a list of recommendations.
  • Action 202 may comprise steps 5-7 of Figure 9.
  • the Server_Content database 603 sends back an initial list of Recommendations.
  • the Server_CTRL 607 chooses from the initial list based on the user's preferences and creates a Recommendation list.
  • Action 203 may start when the Server_CTRL 607 sends the Recommendation list to the Client_CTRL 606.
  • the Client_CTRL 606 displays the list on the TV/screen of the reproducing device 120 using the User Interface modules.
  • the user selects an item from the displayed list.
  • the selected item is sent to the Client_CTRL 606.
  • the selected item is sent to the Server_CTRL 607.
  • the selected item is sent to the Server_Content database 603.
  • the rendering starts from the Server_Content database 603 on the TV/Screen of the reproducing device 120, represented as TV through User Interface (TVthrouUI) in the Figure.
  • the latest data to train the Models is sent from the Server_CTRL 607 to the Server_ASA database 502 whenever needed/set by the user.
  • the first communication device 101 for handling the recommendation to the second communication device 102 may comprise the following arrangement depicted in Figure 10. As stated earlier, the first communication device 101 and the second communication device 102 are configured to operate in the telecommunications network 100.
  • the content may be a media content
  • the third communication device 103 may be a media streaming device.
  • the first communication device 101 is further configured to, e.g., by means of a determining module 1001 configured to, determine the recommendation for the item of content to be provided by the third communication device 103 configured to operate in the telecommunications network 100; the determination of the recommendation is configured to be based on signals configured to be collected by the receiving device 130 located in the space where the one or more users of the third communication device 103 are located, during the time period; the signals are configured to be at least one of: the audio signals and the video signals.
  • the recommendation may be further configured to be based on one or more first factors configured to be obtained, e.g., by means of the determining module 1001 further configured to, from a first analysis of the audio signals configured to be collected, wherein the one or more first factors may comprise at least one of:
  • a second mood configured to be derived from a first semantic analysis of a language used by the one or more voices configured to be detected in the audio signals
  • a topic of discussion configured to be derived from a second semantic analysis of a language used by the one or more voices configured to be detected in the audio signals.
  • the signals may be configured to be audio signals, and, e.g., by means of the determining module 1001 further configured to, at least one of the following (see the sketch after this list):
  • the one or more characteristics may be configured to be derived by segmenting the audio signals which may be configured to be collected during the time period, into single speaker segments;
  • the first mood may be configured to be derived based on a natural language processing of a transcript of the single speaker segments, which may be configured to be obtained by Automatic Speech Recognition;
  • the first semantic analysis may be configured to be based on one or more first language models
  • the second semantic analysis may be configured to be based on one or more second language models.
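  • The pipeline implied by the items above can be sketched as follows; it is a sketch, not the claimed method, and each stage (speaker segmentation, Automatic Speech Recognition, natural language mood analysis, semantic analysis against the first and second language models) is a placeholder callable that a real system would supply.

```python
def derive_first_factors(audio, segmenter, asr, mood_from_text, semantic_analysis,
                         first_language_models, second_language_models):
    segments = segmenter(audio)                         # single speaker segments
    characteristics = [s["speaker_info"] for s in segments]
    transcripts = [asr(s["audio"]) for s in segments]   # Automatic Speech Recognition
    return {
        "characteristics": characteristics,
        "first_mood": mood_from_text(transcripts),      # NLP on the transcripts
        "second_mood": semantic_analysis(transcripts, first_language_models),
        "topic_of_discussion": semantic_analysis(transcripts, second_language_models),
    }

# Example with trivial placeholder callables standing in for the real tools.
factors = derive_first_factors(
    audio=b"...",
    segmenter=lambda a: [{"speaker_info": {"gender": "female", "age": 34}, "audio": a}],
    asr=lambda seg_audio: "let's watch something relaxing tonight",
    mood_from_text=lambda ts: "relaxed",
    semantic_analysis=lambda ts, models: "calm" if models == "mood-models" else "evening plans",
    first_language_models="mood-models",
    second_language_models="topic-models")
print(factors)
```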
  • the recommendation may be further based on one or more second factors configured to be obtained, e.g., by means of the determining module 1001 further configured to, from a second analysis of the video signals configured to be collected, wherein the one or more second factors may comprise at least one of:
  • the one or more characteristics of the one or more users from: number, gender, age, and identity of the one or more users;
  • a third mood configured to be derived from at least one of: a body movement and a gesture configured to be detected in each of the one or more users configured to be distinguished in the video signals.
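  • One simple way to turn detected body movements and gestures into such a third mood is a lookup over gesture labels, as sketched below; the labels and the mapping table are purely illustrative assumptions, not part of the embodiments.

```python
GESTURE_TO_MOOD = {
    "slumped_posture": "tired",
    "frequent_gesturing": "excited",
    "leaning_forward": "engaged",
}

def derive_third_mood(gestures_per_user):
    """gestures_per_user: dict mapping a user id to the gesture labels detected
    for that user in the video signals during the time period."""
    moods = {}
    for user, gestures in gestures_per_user.items():
        labels = [GESTURE_TO_MOOD[g] for g in gestures if g in GESTURE_TO_MOOD]
        moods[user] = max(set(labels), key=labels.count) if labels else "neutral"
    return moods

print(derive_third_mood({"user-1": ["leaning_forward", "frequent_gesturing", "leaning_forward"]}))
```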
  • the first communication device 101 is further configured to, e.g., by means of an initiating module 1002 configured to, initiate sending the first indication of the recommendation configured to be determined to the second communication device 102 configured to operate in the telecommunications network 100.
  • the first communication device 101 may be further configured to, e.g., by means of an obtaining module 1003 configured to, obtain the characterization of the context of the third communication device 103 by determining, based on the processing of the signals configured to be collected, at least one of: the one or more first factors, and the one or more second factors.
  • the determining of the recommendation may be configured to be further based on the characterization of the context configured to be obtained.
  • the second communication device 102 may be the third communication device 103.
  • the embodiments herein may be implemented through one or more processors, such as a processor 1004 in the first communication device 101 depicted in Figure 10, together with computer program code for performing the functions and actions of the embodiments herein.
  • the program code mentioned above may also be provided as a computer program product, for instance in the form of a data carrier carrying computer program code for performing the embodiments herein when being loaded into the first communication device 101.
  • One such carrier may be in the form of a CD ROM disc. It is however feasible with other data carriers such as a memory stick.
  • the computer program code may furthermore be provided as pure program code on a server and downloaded to the first communication device 101.
  • the first communication device 101 may further comprise a memory 1005
  • the memory 1005 is arranged to be used to store obtained information, store data, configurations, and applications etc. to perform the methods herein when being executed in the first communication device 101.
  • the first communication device 101 may receive information from the second communication device 102, the third communication device 103, the fourth communication device 104, and/or any of the pertinent databases described above, through a receiving port 1006.
  • the receiving port 1006 may be, for example, connected to one or more antennas in the first communication device 101.
  • the first communication device 101 may receive information from another structure in the telecommunications network 100 through the receiving port 1006. Since the receiving port 1006 may be in communication with the processor 1004, the receiving port 1006 may then send the received information to the processor 1004.
  • the receiving port 1006 may also be configured to receive other information from other communication devices or structures in the telecommunications network 100.
  • the processor 1004 in the first communication device 101 may be further configured to transmit or send information to e.g., the second communication device 102, the third communication device 103, the fourth communication device 104, and/or any of the pertinent databases described above, through a sending port 1007, which may be in communication with the processor 1004, and the memory 1005.
  • the determining module 1001 , the initiating module 1002, and the obtaining module 1003 described above may refer to a combination of analog and digital modules, and/or one or more processors configured with software and/or firmware, e.g., stored in memory, that, when executed by the one or more processors, such as the processor 1004, perform as described above.
  • These processors, as well as the other digital hardware, may be included in a single Application-Specific Integrated Circuit (ASIC), or several processors and various digital hardware may be distributed among several separate components, whether individually packaged or assembled into a System-on-a-Chip (SoC).
  • the processor 1004 may be implemented as several processors, such as the Client CTRL 606, or more specifically, the Client RECM MGR 703, and the SERV CTRL 607, or more specifically, the SERV RECM MGR 704, distributed among several separate components, such as the Media Client and the Media Server.
  • the different modules 1001-1003 described above may be implemented as one or more applications running on one or more processors such as the processor 1004.
  • the methods according to the embodiments described herein for the first communication device 101 may be respectively implemented by means of a computer program 1008 product, comprising instructions, i.e., software code portions, which, when executed on at least one processor 1004, cause the at least one processor 1004 to carry out the action described herein, as performed by the first communication device 101.
  • the computer program 1008 product may be stored on a computer-readable storage medium 1009.
  • the computer-readable storage medium 1009, having stored thereon the computer program 1008, may comprise instructions which, when executed on at least one processor 1004, cause the at least one processor 1004 to carry out the action described herein, as performed by the first communication device 101.
  • the computer-readable storage medium 1009 may be a non-transitory computer-readable storage medium, such as a CD ROM disc, or a memory stick.
  • the computer program 1008 product may be stored on a carrier containing the computer program 1008 just described, wherein the carrier is one of an electronic signal, optical signal, radio signal, or the computer-readable storage medium 1009, as described above.
  • the fourth communication device 104 for handling the characterization of the context of the third communication device 103 configured to have one or more users may comprise the following arrangement depicted in Figure 11. As stated earlier, the fourth communication device 104 and the third communication device 103 are configured to operate in the telecommunications network 100.
  • the content may be a media content
  • the third communication device 103 may be a media streaming device.
  • the fourth communication device 104 is configured to, e.g., by means of an obtaining module 1101 configured to, obtain the signals configured to be collected by the receiving device 130 located in the space where the one or more users of the third communication device 103 are located, during the time period.
  • the signals are configured to be at least one of: audio signals and video signals.
  • the fourth communication device 104 is further configured to, e.g., by means of a determining module 1102 configured to, determine the characterization of the context of the third communication device 103 by determining one or more factors configured to be obtained from an analysis of the signals configured to be obtained.
  • the one or more factors may comprise at least one of: a) the one or more characteristics of the one or more users from: number, gender, age, and identity of the one or more users; b) the first mood configured to be derived from the tone of one or more voices configured to be detected in the audio signals; c) the second mood configured to be derived from the first semantic analysis of a language used by one or more voices configured to be detected in the audio signals; d) the topic of discussion configured to be derived from the second semantic analysis of a language used by the one or more voices configured to be detected in the audio signals, e) the third mood configured to be derived from at least one of: the body movement and the gesture configured to be detected in each of the one or more users.
  • the signals may be configured to be audio signals, and, e.g., by means of the determining module 1102 further configured to, at least one of: a) the one or more characteristics may be configured to be derived by segmenting the audio signals configured to be collected during the time period, into single speaker segments; b) the first mood may be configured to be derived based on the natural language processing of the transcript of the single speaker segments, configured to be obtained by Automatic Speech Recognition; c) the first semantic analysis may be configured to be based on the one or more first language models; and d) the second semantic analysis may be configured to be based on the one or more second language models.
  • the signals are configured to be video signals and the one or more factors may comprise at least one of: a) the one or more characteristics of the one or more users from: number, gender, age, and identity of the one or more users; and b) the third mood configured to be derived from at least one of: the body movement and the gesture configured to be detected in each of the one or more users.
  • the fourth communication device 104 is further configured to, e.g., by means of an initiating module 1103 configured to, initiate sending the second indication of the characterization of the context configured to be determined to a first communication device 101 configured to operate in the telecommunications network 100 or another communication device 102, 103, 104 configured to operate in the telecommunications network 100.
  • the fourth communication device 104 may be further configured to, e.g., by means of an obtaining module 1104 configured to, obtain the model for each one of the one or more factors, based on repeatedly performing the obtaining of the signals and the determining of the characterization of the context.
  • the second indication may be configured to further indicate the obtained model for each one of the one or more factors.
  • the embodiments herein may be implemented through one or more processors, such as a processor 1105 in the fourth communication device 104 depicted in Figure 11, together with computer program code for performing the functions and actions of the embodiments herein.
  • the program code mentioned above may also be provided as a computer program product, for instance in the form of a data carrier carrying computer program code for performing the embodiments herein when being loaded into the fourth communication device 104.
  • One such carrier may be in the form of a CD ROM disc.
  • the computer program code may furthermore be provided as pure program code on a server and downloaded to the fourth communication device 104.
  • the fourth communication device 104 may further comprise a memory 1106 comprising one or more memory units.
  • the memory 1106 is arranged to be used to store obtained information, store data, configurations, and applications etc. to perform the methods herein when being executed in the fourth communication device 104.
  • the fourth communication device 104 may receive information from the first communication device 101, the second communication device 102, the third communication device 103, and/or any of the pertinent databases described above, through a receiving port 1107.
  • the receiving port 1107 may be, for example, connected to one or more antennas in the fourth communication device 104.
  • the fourth communication device 104 may receive information from another structure in the telecommunications network 100 through the receiving port 1107. Since the receiving port 1107 may be in communication with the processor 1105, the receiving port 1107 may then send the received information to the processor 1105.
  • the receiving port 1107 may also be configured to receive other information.
  • the processor 1105 in the fourth communication device 104 may be further configured to transmit or send information to e.g., the first communication device 101, the second communication device 102, the third communication device 103, and/or any of the pertinent databases described above, through a sending port 1108, which may be in communication with the processor 1105, and the memory 1106.
  • the obtaining module 1101, the determining module 1102, the initiating module 1103, and the obtaining module 1104 described above may refer to a combination of analog and digital modules, and/or one or more processors configured with software and/or firmware, e.g., stored in memory, that, when executed by the one or more processors such as the processor 1105, perform as described above.
  • These processors, as well as the other digital hardware, may be included in a single Application-Specific Integrated Circuit (ASIC), or several processors and various digital hardware may be distributed among several separate components, whether individually packaged or assembled into a System-on-a-Chip (SoC).
  • the processor 1105 may be implemented as several processors, such as the Client CTRL 606, or more specifically, the Client ASA MGR 701, and the SERV CTRL 607, or more specifically, the SERV ASA MGR 702, distributed among several separate components, such as the Media Client and the Media Server.
  • the different modules 1101-1104 described above may be implemented as one or more applications running on one or more processors such as the processor 1105.
  • the methods according to the embodiments described herein for the fourth communication device 104 may be respectively implemented by means of a computer program 1109 product, comprising instructions, i.e., software code portions, which, when executed on at least one processor 1105, cause the at least one processor 1105 to carry out the action described herein, as performed by the fourth communication device 104.
  • the computer program 1109 product may be stored on a computer-readable storage medium 1110.
  • the computer-readable storage medium 1110, having stored thereon the computer program 1109, may comprise instructions which, when executed on at least one processor 1105, cause the at least one processor 1105 to carry out the action described herein, as performed by the fourth communication device 104.
  • the computer-readable storage medium 1110 may be a non-transitory computer-readable storage medium 1110, such as a CD ROM disc, or a memory stick.
  • the computer program 1109 product may be stored on a carrier containing the computer program 1109 just described, wherein the carrier is one of an electronic signal, optical signal, radio signal, or the computer-readable storage medium 1110, as described above.
  • the second communication device 102 for handling the recommendation from the first communication device 101 may comprise the following arrangement depicted in Figure 12. As stated earlier, the first communication device 101 and the second communication device 102 may be configured to operate in the telecommunications network 100.
  • the content may be a media content
  • the third communication device 103 may be a media streaming device.
  • the second communication device 102 is configured to, e.g., by means of a receiving module 1201 configured to, receive the first indication for the recommendation for the item of content to be provided by the third communication device 103 configured to operate in the telecommunications network 100.
  • the recommendation is based on the signals collected by the receiving device 130 configured to be located in the space where one or more users of the third communication device 103 are located, during the time period.
  • the signals are configured to be at least one of: audio signals and video signals.
  • the second communication device 102 is further configured to, e.g., by means of an initiating module 1202 configured to, initiate providing, to the one or more users, the third indication of the recommendation configured to be received on the interface 110 of the second communication device 102.
  • the second communication device 102 may be further configured to, e.g., by means of the initiating module 1202 configured to, initiate providing the item of content on the third communication device 103, based on a selection configured to be received from the one or more users on the interface 110.
  • the selection may be further configured to be based on the third indication configured to be provided.
  • the embodiments herein may be implemented through one or more processors, such as a processor 1203 in the second communication device 102 depicted in Figure 12, together with computer program code for performing the functions and actions of the embodiments herein.
  • the program code mentioned above may also be provided as a computer program product, for instance in the form of a data carrier carrying computer program code for performing the embodiments herein when being loaded into the second communication device 102.
  • One such carrier may be in the form of a CD ROM disc. It is however feasible with other data carriers such as a memory stick.
  • the computer program code may furthermore be provided as pure program code on a server and downloaded to the second communication device 102.
  • the second communication device 102 may further comprise a memory 1204 comprising one or more memory units.
  • the memory 1204 is arranged to be used to store obtained information, store data, configurations, schedulings, and applications etc. to perform the methods herein when being executed in the second communication device 102.
  • the second communication device 102 may receive information from the first communication device 101 , the third communication device 103, and/or the fourth communication device 104, through a receiving port 1205.
  • the receiving port 1205 may be, for example, connected to one or more antennas in the second communication device 102.
  • the second communication device 102 may receive information from another structure in the telecommunications network 100 through the receiving port 1205. Since the receiving port 1205 may be in communication with the processor 1203, the receiving port 1205 may then send the received information to the processor 1203.
  • the receiving port 1205 may also be configured to receive other information.
  • the processor 1203 in the second communication device 102 may be further configured to transmit or send information to e.g., the first communication device 101 , the third communication device 103, and/or the fourth communication device 104, through a sending port 1206, which may be in communication with the processor 1203, and the memory 1204.
  • the receiving module 1201 , and the initiating module 1202 described above may refer to a combination of analog and digital modules, and/or one or more processors configured with software and/or firmware, e.g., stored in memory, that, when executed by the one or more processors such as the processor 1203, perform as described above.
  • These processors, as well as the other digital hardware, may be included in a single Application-Specific Integrated Circuit (ASIC), or several processors and various digital hardware may be distributed among several separate components, whether individually packaged or assembled into a System-on-a-Chip (SoC).
  • the processor 1203 may be implemented as several processors, distributed among several separate components, such as the Media Client and the Media Server.
  • the different modules 1201-1202 described above may be implemented as one or more applications running on one or more processors such as the processor 1203.
  • the methods according to the embodiments described herein for the second communication device 102 may be respectively implemented by means of a computer program 1207 product, comprising instructions, i.e., software code portions, which, when executed on at least one processor 1203, cause the at least one processor 1203 to carry out the action described herein, as performed by the second communication device 102.
  • the computer program 1207 product may be stored on a computer-readable storage medium 1208.
  • the computer-readable storage medium 1208, having stored thereon the computer program 1207, may comprise instructions which, when executed on at least one processor 1203, cause the at least one processor 1203 to carry out the actions described herein, as performed by the second communication device 102.
  • the computer-readable storage medium 1208 may be a non-transitory computer-readable storage medium 1208, such as a CD ROM disc, or a memory stick.
  • the computer program 1207 product may be stored on a carrier containing the computer program 1207 just described, wherein the carrier is one of an electronic signal, optical signal, radio signal, or the computer-readable storage medium 1208, as described above.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Software Systems (AREA)
  • Databases & Information Systems (AREA)
  • Human Computer Interaction (AREA)
  • Data Mining & Analysis (AREA)
  • Library & Information Science (AREA)
  • Signal Processing (AREA)
  • General Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Telephonic Communication Services (AREA)

Abstract

This disclosure relates to a method performed by a first communication device (101), the method being for handling a recommendation to a second communication device (102). The first (101) and second (102) communication devices operate in a telecommunications network (100). The first communication device (101) determines (202) a recommendation for an item of content to be provided by a third communication device (103) operating in the telecommunications network (100). The determination of the recommendation is based on signals collected, during a time period, by a receiving device (130) located in a space where one or more users of the third communication device (103) are located. The signals are audio signals and/or video signals. The first communication device (101) initiates (203) sending a first indication of the determined recommendation to the second communication device (102).
PCT/SE2017/050458 2017-05-08 2017-05-08 Système et procédés de fourniture d'une recommandation portant sur des éléments de contenu WO2018208192A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/SE2017/050458 WO2018208192A1 (fr) 2017-05-08 2017-05-08 Système et procédés de fourniture d'une recommandation portant sur des éléments de contenu

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/SE2017/050458 WO2018208192A1 (fr) 2017-05-08 2017-05-08 Système et procédés de fourniture d'une recommandation portant sur des éléments de contenu

Publications (1)

Publication Number Publication Date
WO2018208192A1 true WO2018208192A1 (fr) 2018-11-15

Family

ID=64105476

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/SE2017/050458 WO2018208192A1 (fr) 2017-05-08 2017-05-08 Système et procédés de fourniture d'une recommandation portant sur des éléments de contenu

Country Status (1)

Country Link
WO (1) WO2018208192A1 (fr)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140359647A1 (en) * 2012-12-14 2014-12-04 Biscotti Inc. Monitoring, Trend Estimation, and User Recommendations
US20150271571A1 (en) * 2014-03-18 2015-09-24 Vixs Systems, Inc. Audio/video system with interest-based recommendations and methods for use therewith
US20160337701A1 (en) * 2015-05-17 2016-11-17 Surewaves Mediatech Private Limited System and method for automatic content recognition and audience measurement for television channels and advertisements
US20160345044A1 (en) * 2015-05-19 2016-11-24 Rovi Guides, Inc. Methods and systems for recommending a display device for media consumption


Similar Documents

Publication Publication Date Title
US11336938B2 (en) Pre-positioning of streaming content onto communication devices for future content recommendations
JP6133369B2 (ja) 要求を検出し要求ベースのマルチメディアブロードキャストマルチキャストサービスを確立するための方法および装置
CN103596065B (zh) 装置定向能力交换信令和多媒体内容的服务器适应性修改
US11483727B2 (en) Subscriber data analysis and graphical rendering
CN111149387A (zh) 小区重选方法及装置、通信设备
US11689771B2 (en) Customized recommendations of multimedia content streams
CN106575343B (zh) 基于客户端所确定的邻近客户端设备之间的关系来触发通信动作
US20230254713A1 (en) Antenna farm intelligent software defined networking enabled dynamic resource controller in advanced networks
CN113132344A (zh) 广播和管理呼叫参与
WO2022248118A1 (fr) Autorisation de fonctions de réseau de consommateur
WO2017147273A1 (fr) Système de gestion de messagerie personnalisée destiné à renforcer l'implication d'un utilisateur dans un réseau auquel il est abonné
US20210211978A1 (en) Integrated access and backhaul link selection
CN103168439B (zh) 用于调整在多播网络上传输的内容安排的方法和装置
WO2018208192A1 (fr) Système et procédés de fourniture d'une recommandation portant sur des éléments de contenu
JP2024518705A (ja) ネットワークマニフェストに基づくai/mlモデル配信
US10003659B2 (en) Efficient group communications leveraging LTE-D discovery for application layer contextual communication
US20230179816A1 (en) Personal media content insertion
US11810595B2 (en) Identification of life events for virtual reality data and content collection
US20220172417A1 (en) Allocating and extrapolating data for augmented reality for 6g or other next generation network
US20230177416A1 (en) Participant attendance management at events including virtual reality events
WO2023186301A1 (fr) Appareil et procédés en son sein, dans un réseau de communication
WO2023057849A1 (fr) Ré- entraînement de modèle d'apprentissage automatique (ml) dans un réseau cœur 5g

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17909045

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17909045

Country of ref document: EP

Kind code of ref document: A1