WO2016153865A1 - Method and apparatus for providing content recommendation - Google Patents


Info

Publication number
WO2016153865A1
Authority
WO
WIPO (PCT)
Prior art keywords
picture element
video
search
recommendation
frame
Prior art date
Application number
PCT/US2016/022534
Other languages
French (fr)
Inventor
Krystle SWAVING
Adam BALEST
Samir Ahmed
Original Assignee
Thomson Licensing
Priority date
Filing date
Publication date
Application filed by Thomson Licensing filed Critical Thomson Licensing
Publication of WO2016153865A1 publication Critical patent/WO2016153865A1/en


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/482End-user interface for program selection
    • H04N21/4826End-user interface for program selection using recommendation lists, e.g. of programs or channels sorted out according to their score
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/433Content storage operation, e.g. storage operation in response to a pause request, caching operations
    • H04N21/4333Processing operations in response to a pause request
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/45Management operations performed by the client for facilitating the reception of or the interaction with the content or administrating data related to the end-user or to the client device itself, e.g. learning user preferences for recommending movies, resolving scheduling conflicts
    • H04N21/466Learning process for intelligent management, e.g. learning user preferences for recommending movies
    • H04N21/4668Learning process for intelligent management, e.g. learning user preferences for recommending movies for recommending content, e.g. movies

Definitions

  • the present principles generally relate to an apparatus and a method for providing content recommendation.
  • a video is played on a device.
  • the user may pause the video and choose a frame of the video.
  • the user may then select and extract one or more picture elements from the frame or even the whole frame.
  • a recommendation is then provided in response to the extracted one or more picture elements.
  • Other picture elements not from the extracted frame and representing different search criteria may also be added to the search to derive the recommendation.
  • Different streaming media sites (e.g., Hulu, Netflix, M-GO, etc.) currently provide various user interfaces for users to search different media assets for view and/or purchase.
  • The users may perform the media search on these websites typically by typing in a query string related to, e.g., movie or show titles, or other keywords, using a keyboard on their PCs, laptops, cellphones, or various other user devices. Once found, such media can be downloaded and/or streamed to the user's consumption device.
  • a user may use the M-Go service to receive recommendations about the different media assets that are available from M-Go.
  • recommendations may be made, e.g., in response to a user's profile of consumption and/or purchases of media assets, and/or criteria that a user specifies using textual input (e.g., entering "sports", "adventure", or "Tom Cruise") as noted before.
  • the profile/query information is then inputted into a recommendation engine whereby algorithms are used to develop and output media asset recommendations.
  • the present invention recognizes that the existing systems and methods, however, do not provide an easy and intuitive way for a user to enter relevant search items directly from a video stream being watched. Hence, there is a need to improve the existing systems and methods for processing input video information to provide content recommendations.
  • an apparatus comprising:
  • a user input configured to receive a first user command, a second user command, and a third user command
  • a processor configured to play a video in response to the first user command and pause the video in response to the second user command for displaying a frame of the video, wherein the frame comprises picture elements; the processor extracts at least one picture element from the frame in response to the third user command; and the processor provides a recommendation in response to the extracted at least one picture element.
  • a method comprising: playing a video on a device;
  • a computer program product stored in a non-transitory computer-readable storage medium comprising computer-executable instructions for:
  • FIG. 1 shows an exemplary process according to the present principles
  • FIG. 2 shows an example system according to the present principles
  • FIG. 3 shows an exemplary user interface screen and its functions according to the present principles
  • FIG. 4 shows another exemplary user interface screen and its functions according to the present principles.
  • FIG. 5 shows another exemplary user interface screen and its functions according to the present principles.
  • the examples set out herein illustrate exemplary embodiments of the present principles. Such examples are not to be construed as limiting the scope of the invention in any manner.
  • FIG. 2 shows an exemplary system according to the present principles.
  • FIG. 2 represents a system capable of receiving and processing user inputs and providing, in response, various media assets for streaming or downloading.
  • various user devices 260-1 to 260-n in FIG. 2 may communicate with an exemplary server 205 over a communication network 250 such as the internet, a wide area network (WAN), and/or a local area network (LAN).
  • Server 205 may communicate with user devices 260-1 to 260-n in order to provide relevant information such as data, webpages, media contents, etc., not available on the local user devices 260-1 to 260-n.
  • Server 205 may also provide additional processing of information when the processing is not available and/or capable of being conducted on the local user devices 260-1 to 260-n.
  • server 205 may be a computer having a processor 210 such as, e.g., an Intel processor, running an appropriate operating system such as, e.g., Windows 2008 R2, Windows Server 2012 R2, a Linux operating system, etc.
  • Devices 260-1 to 260-n may access different media assets, web pages, services or databases provided by server 205 using, e.g., HTTP protocol.
  • a well-known web server software application which may be run by server 205 to provide web pages is Apache HTTP Server software available from http://www.apache.org.
  • server 205 may also provide media content services similar to, e.g., Amazon.com, Netflix, or M-GO.
  • Server 205 may use a streaming protocol such as, e.g., Apple HTTP Live Streaming (HLS) protocol, Adobe Real-Time Messaging Protocol (RTMP), Microsoft Silverlight Smooth Streaming Transport Protocol, etc., to transmit various media assets such as, e.g., video programs, audio programs, movies, TV shows, software, games, electronic books, electronic magazines, electronic articles, etc., to an end-user device 260-1 for purchase and/or view via streaming.
  • a server administrator may interact with and configure server 205 to run different applications using user Input/Output (I/O) devices 215 (e.g., a keyboard and/or a display) as well known in the art.
  • various webpages, data, media assets and their associated metadata may be stored in a database 225 and accessed by processor 210 as needed.
  • Database 225 may reside in appropriate non-transitory storage media, such as, e.g., one or more hard drives and/or other suitable memory devices, as well known in the art.
  • computer program products for the server 205 may also be stored in such non-transitory storage media.
  • element 225 of server 205 may also represent a search engine so that media recommendations may be made, e.g., in response to a user's profile of consumption and/or purchases of media assets, and/or criteria that a user specifies using textual input (e.g., entering "sports", "adventure", or "Tom Cruise").
  • the profile/query information is then inputted into the recommendation engine 225 whereby search algorithms are used to develop and output media asset recommendations.
  • Server 205 is connected to network 250 through a communication interface 220 for communicating with other servers or web sites (not shown) and to one or more user devices 260-1 to 260-n, as shown in FIG. 2.
  • Server 205 may also comprise other typical server components such as, e.g., ROM, RAM, a power supply, cooling fans, etc.
  • User devices 260-1 to 260-n shown in FIG. 2 may comprise one or more of, e.g., a PC, a laptop, a tablet, a cellphone, etc.
  • One of such devices may be, e.g., a Microsoft Windows 7, Windows 8, or Windows 10 computer/tablet, an Android phone/tablet (e.g., Samsung S4, S5, S6, or Google Nexus 7 tablet), or an Apple iOS phone/tablet (e.g., iPhone 6 or iPad).
  • An exemplary user device 260-1 in FIG. 2 comprises a processor 265 for processing various data and for controlling various functions and components of the device 260-1, including video decoding and processing to play and display a downloaded or streamed video.
  • the video processing capabilities of processor 265 also perform the pausing of the video and the extraction of one or more picture elements from the paused video frame in response to user input, as described in more detail below.
  • device 260-1 also comprises user Input/Output (I/O) devices 280 which may comprise, e.g., a touch and/or a physical keyboard for inputting user data, and/or a display, and/or a speaker for outputting visual and/or audio user data and feedback.
  • Device 260-1 also comprises memory 285, which may represent either a transitory memory such as RAM or a non-transitory memory such as a ROM, a hard drive, or a flash memory, for processing and storing different files and information as necessary, including computer program products, webpages, user interface information, and/or user profiles as shown in FIG. 3 to FIG. 5, to be described later.
  • Device 260-1 also comprises a communication interface 270 for connecting and communicating to/from server 205 and/or other devices, via, e.g., network 250, using, e.g., a connection through a cable network, a FIOS network, a Wi-Fi network, and/or a cellphone network (e.g., 3G, 4G, LTE), etc.
  • FIG. 3 to FIG. 5 illustrate exemplary user interface screens 300, 400 and 500, respectively, and their functions according to the present principles. These user interface screens and functions may be controlled and/or provided by, e.g., processor 265 in device 260-1 of FIG. 2 and/or processor 210 in web server 205 remotely.
  • FIG. 3 to FIG. 5 will be described in detail below in connection with the exemplary process shown in FIG. 1.
  • FIG. 1 represents a flow diagram of an exemplary process 100 according to the present principles.
  • Process 100 may be implemented as a computer program product comprising computer-executable instructions which may be executed by, e.g., a processor 265 in device 260-1 in FIG. 2 and/or a processor 210 in server 205 of FIG. 2.
  • the computer program product having the computer-executable instructions may be stored in non-transitory computer-readable storage media of the respective device 260-1 and/or server 205.
  • the exemplary control program shown in FIG. 1, when executed, facilitates processing and displaying of user interface screens shown, for example, in FIG. 3 to FIG. 5, and controls their respective functions and interactions with a user.
  • the exemplary process shown in FIG. 1 may also be implemented using a combination of hardware and software (e.g., a firmware implementation), and/or executed using logic arrays or an ASIC.
  • a user may request, using one of the user I/O devices 280 (e.g., a mouse) shown in device 260-1 of FIG. 2, to download a video from a website 205 of FIG. 2 and play the video on a user device 260-1 of FIG. 2.
  • a user can select to play a video 310 by moving a selector icon 305 using, e.g., a mouse, and selecting the play icon 330 on screen 300 of the user device 260-1.
  • the movie "The Outsiders" is now being played on a display area 350 of FIG. 3.
  • the notification area 340 of FIG. 3 indicates to the user the status of the video, confirming that the video "The Outsiders" 310 is being played.
  • a user may pause the video 310 of FIG. 3 which was being played as described at step 110 of FIG. 1 above.
  • the user may pause the video 310 of FIG. 3 by selecting a pause icon 410 of a user interface screen 400 shown in FIG. 4.
  • pausing the video 310 will cause a frame 410 of the video to be displayed in a display area 450 as shown in FIG. 4.
  • Paused video frame 410 shown in FIG. 4 comprises a plurality of exemplary picture elements 420-1 to 420-n.
  • a picture element may be an actor or actress appearing on the frame of the video being paused.
  • picture element 420-1 in video frame 410 represents the actor Tom Cruise, who appears in the frame of the video from the movie "The Outsiders" being paused.
  • Another example of a picture element in the video frame 410 is a car 420-2 which may represent, e.g., search criteria of car-related or racing-related search terms.
  • Picture element 420-n of a cloud in video frame 410 may represent, e.g., weather or storm related search terms or criteria.
  • one or more of the picture elements 420-1 to 420-n shown on video frame 410 in FIG. 4 may be extracted to a search area 440 of screen 400 of FIG. 4.
  • the picture element 420-1 representing the actor Tom Cruise has been extracted and moved to the search area 440 using, e.g., a selector icon 405 via a mouse.
  • search area 440 displays the one or more extracted picture elements and uses them for a search, as described in more detail below.
  • the whole frame may be selected and moved into the search area 440, and all of the picture elements on the video frame 410 may therefore be used as search criteria or terms.
  • a recommendation is provided in response to the one or more picture elements extracted from the paused video frame 410 at step 130 above.
  • an actors/actresses recommendation box 460 appears on screen 400 of FIG. 4.
  • the recommendation box may list different actors/actresses suggested by a search engine and/or recommendation algorithm in response to the input of the picture element Tom Cruise 420-1.
  • the search engine and/or recommendation algorithm may produce a list of persons who have co-starred with Tom Cruise in a movie.
  • the recommendation may be based on another algorithm which determines that people who like Tom Cruise the actor also like the listed actors/actresses, or that people who like movies by Tom Cruise also like the movies starring the listed persons. For example, in the recommendation box 460 of FIG. 4, actress Michelle Monaghan 461-n is listed since she co-starred with Tom Cruise in the movie Mission: Impossible.
  • A movies or media content recommendation box 470 also appears on screen 400 which displays, e.g., a list of movies in which Tom Cruise has starred as the media recommendations to the user, in response to the picture element of Tom Cruise 420-1 being extracted into the search area 440.
  • the recommendations which substantially or exactly match one or more of the selected picture elements in search area 440 will be highlighted in the respective recommendation boxes 460 and 470.
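The co-star suggestion logic described above can be sketched as a simple catalog lookup. The catalog data and function name below are illustrative assumptions, not part of the patent:

```python
# Hypothetical sketch of the co-star recommendation described above.
# CATALOG maps titles to cast lists; both the data and the function
# name are illustrative, not from the patent.
CATALOG = {
    "Mission: Impossible III": ["Tom Cruise", "Michelle Monaghan"],
    "Top Gun": ["Tom Cruise", "Kelly McGillis"],
    "Gone Baby Gone": ["Casey Affleck", "Michelle Monaghan"],
}

def co_stars(actor, catalog):
    """Return every performer who shares at least one title with `actor`."""
    suggestions = set()
    for cast in catalog.values():
        if actor in cast:
            suggestions.update(c for c in cast if c != actor)
    return sorted(suggestions)
```

With the hypothetical catalog above, `co_stars("Tom Cruise", CATALOG)` yields `["Kelly McGillis", "Michelle Monaghan"]`, mirroring how Michelle Monaghan is surfaced in recommendation box 460.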
  • facial and object recognition processing of a video frame may be utilized to convert different picture elements of a video frame into associated search strings or criteria to be used by, e.g., a search engine 225 in server 205 of FIG. 2, in order to return the recommended results to a user.
  • examples of facial recognition software include FotoTiger for Facebook by Applied Recognition, the Face Recognition-FastAccess app from Sensible Vision, Inc., the Face Recognition SDK Demo from Vinisoft, etc.
  • examples of object or image recognition software include TapTapSee, available on the Apple App Store, the Android Image Recognition SDK by Catchoom, etc.
  • facial and object recognition apps or other software may be used to provide the facial and image recognition functionalities in accordance with the present principles to identify actors, actresses and/or different objects on a paused video frame in order to provide corresponding search strings or search criteria for performing a search in a search engine.
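The conversion of recognized picture elements into search strings might look like the following sketch; `recognize` is a hypothetical stand-in for one of the recognition libraries named above, and its output schema is an assumption:

```python
# Illustrative data flow: recognizer output -> search strings.
# `recognize` is a placeholder, not a real library call; a real
# implementation would run face/object detection on the frame pixels.
def recognize(frame_pixels):
    # Canned labels to illustrate the shape of a detection result.
    return [
        {"kind": "face", "label": "Tom Cruise"},
        {"kind": "object", "label": "car"},
    ]

def to_search_terms(detections):
    """Convert detections into query strings for a search engine."""
    terms = []
    for d in detections:
        if d["kind"] == "face":
            terms.append(d["label"])              # actor name used as-is
        else:
            terms.append(d["label"] + " movies")  # broaden object labels
    return terms
```

Here `to_search_terms(recognize(frame))` would produce `["Tom Cruise", "car movies"]`, which could then be fed to search engine 225.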
  • Metadata which describe the video frame may be used to indicate the different picture elements which may be selectable and extractable by a user and also to indicate the corresponding search representations of the different picture elements.
  • For example, Exchangeable Image File Format (EXIF) metadata, or other types of metadata for use in a picture or a video, may be used.
  • EXIF is a standard that specifies the formats for images, sound, and ancillary tags used by digital cameras (including smartphones), scanners, and other systems handling image and sound files recorded by digital cameras. Therefore, the metadata corresponding to one or more of the extracted picture elements in search area 440 may be used by, e.g., a search engine 225 shown in server 205 of FIG. 2 to return the recommended results to a user.
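A minimal sketch of how per-frame metadata could flag selectable picture elements and their search representations; the schema, field names, and element IDs below are hypothetical, loosely following the reference numerals in FIG. 4:

```python
# Assumed per-frame metadata schema: each selectable element carries
# a bounding box and the search string it represents. Illustrative only.
FRAME_METADATA = {
    "timestamp": "00:42:10",
    "elements": [
        {"id": "420-1", "bbox": [120, 40, 260, 300], "search": "Tom Cruise"},
        {"id": "420-2", "bbox": [400, 220, 620, 330], "search": "car racing"},
    ],
}

def search_terms_for(element_ids, metadata):
    """Look up the search representation of each selected element."""
    by_id = {e["id"]: e["search"] for e in metadata["elements"]}
    return [by_id[i] for i in element_ids if i in by_id]
```

Selecting element 420-1 would thus resolve to the search term "Tom Cruise" without running any image recognition on the client.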
  • one or more additional picture elements which are outside of the paused video frame 410 may be added to the search area by a user.
  • This example is illustrated further in a user interface display screen 500 of FIG. 5. As described before, all of the picture elements added to and shown in search area 544 of FIG. 5, including one or more picture elements extracted from the paused video frame 510, may be used together as search criteria in a search engine 225 to provide resulting recommendations to the user, as shown at steps 130 to 160 of FIG. 1.
  • an equipment field 590 on screen 500 lists additional different picture elements which may be added by a user to search area 544, such as a basketball 591-1 which may represent, e.g., a sports genre search criterion, a cupcake 591-2 which may represent a food show genre search criterion, a gun 591-3 which may represent an action genre search criterion, etc.
  • movies with action genre such as Mission Impossible 571-n may be added to user recommendations (e.g., highlighted) in movies recommendation box 570 as shown in FIG. 5.
  • a clothing search criteria field 580 may also be presented to a user on device 260-1 of FIG. 2. Different pieces of clothing may be used to represent different additional search criteria.
  • For example, a track outfit 581-1 may represent, e.g., a sports genre, a mobster outfit 581-2 may represent a crime genre, and a business shoe 581-3 may represent a business-related genre.
  • an emotion criteria field 595 may be used to select different emotion icons representing different search criteria. For example, a happy face icon 596-1 may represent a comedy genre, a sad face icon 596-2 may represent a dramatic tearjerker genre, etc.
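The icon-to-criterion mappings of the equipment, clothing, and emotion fields above amount to a lookup table combined with the elements extracted from the frame. The table entries follow the examples in the text; the code itself is an illustrative assumption:

```python
# Lookup table mirroring the equipment/clothing/emotion examples above.
# The dictionary keys and the query format are illustrative assumptions.
ICON_CRITERIA = {
    "basketball": "sports",
    "cupcake": "food show",
    "gun": "action",
    "track outfit": "sports",
    "mobster outfit": "crime",
    "business shoe": "business",
    "happy face": "comedy",
    "sad face": "tearjerker drama",
}

def build_query(extracted_elements, added_icons):
    """Combine frame-extracted elements with added icon criteria."""
    genres = [ICON_CRITERIA[i] for i in added_icons if i in ICON_CRITERIA]
    return {"terms": list(extracted_elements), "genres": genres}
```

For instance, extracting Tom Cruise from the frame and dragging the gun icon 591-3 into search area 544 would yield a query with the term "Tom Cruise" and the genre "action".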
  • a user may also select and move one of the actor/actresses icons 560-1 to 560-n in the actors/actresses recommendation box 560 into the search area 544 as shown in FIG. 5.
  • FIG. 5 shows that a user has added Michelle Monaghan 560-n, the female co-star of the Mission: Impossible movie, into the search area 544.
  • Mission: Impossible 571-n is highlighted and recommended to the user since, e.g., it is the movie which stars both Tom Cruise and Michelle Monaghan.
  • media services may be deployed in an environment such as virtual reality (VR) where the ability to use textual input becomes much more difficult.
  • a user wearing a headset such as the Oculus Rift manipulates their environment by using their head to look around, their arms/hands to manipulate objects, and their legs for mobility.
  • body movements can be captured using sensors.
  • the VR headset changes the perceived environment in response to such body movements.
  • a user may select a specific representation of an actor Tom Cruise and model the body into a specific position and the face into a different position using 3D VR.
  • the representation of the actor can also have various pieces of equipment such as a guitar corresponding to music genre, a baseball bat corresponding to sports genre, weapons corresponding to adventure or action genre and the like.
  • the equipment may be placed within the hands of the actor using VR, where the equipment is used for specifying a specific genre of content.
  • the representations of body position, facial, and clothing may be pre-defined by a user, a media service, and the like such that examples of what each body position/facial/clothing representation means may be provided to a search engine and/or a search algorithm in order to obtain the corresponding recommendations.
  • the body position of the various actors and/or facial modifications may be used for selecting the emotional aspect of content.
  • This information about emotion may be provided by a technique such as e.g., described in WO2014051644A1 titled “Context-Based Content Recommendations.”
  • the equipment/clothing may be used for selecting different genres as described above.
  • the suggestion search engine uses such input to have recommendations made to a user.
  • a user may pick up the Tom Cruise representation using 3D virtual reality and model his face so that he is smiling.
  • the recommendation engine would use Tom Cruise as a seed in the actor field and "happy" in the emotion field.
  • Media assets that may be recommended may be e.g., Risky Business, Cocktail, Tropic Thunder, and the like.
  • Tom Cruise is modeled to wear adventure gear using VR.
  • the recommendation engine would use Tom Cruise as a seed in the actor field and adventure in the genre field.
  • media assets such as, e.g., Minority Report, Mission: Impossible I-IV, and the like may be recommended to a user in response to the provided representation of the actor.
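The VR examples above amount to mapping a modeled representation onto seed fields for the recommendation engine. A hedged sketch, with assumed field names and input vocabulary:

```python
# Illustrative mapping from a modeled VR representation to the seed
# fields described above. Field names ("actor", "emotion", "genre")
# and the expression/equipment vocabulary are assumptions.
def seeds_from_representation(actor, expression=None, equipment=None):
    """Map a modeled actor representation to recommendation seed fields."""
    seed = {"actor": actor}
    if expression == "smiling":
        seed["emotion"] = "happy"      # smiling face -> happy emotion seed
    if equipment == "adventure gear":
        seed["genre"] = "adventure"    # equipment -> genre seed
    return seed
```

A smiling Tom Cruise representation would then seed the actor and emotion fields, while one modeled with adventure gear would seed the actor and genre fields instead.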
  • representations of Tom Cruise and Nicole Kidman are placed within the search area using VR.
  • The media assets recommended will be movies that have both Tom Cruise and Nicole Kidman in them, such as, e.g., Eyes Wide Shut, Days of Thunder, Far and Away, and the like.
  • a representation of Steven Spielberg may be used to have movies recommended that have him listed as a director for such assets.
  • the embodiments described herein may be implemented in, for example, a method or a process, an apparatus, a software program, a data stream, or a signal. Even if only discussed in the context of a single form of implementation (for example, discussed only as a method), the implementation of features discussed above may also be implemented in other forms (for example, an apparatus or program).
  • An apparatus may be implemented in, for example, appropriate hardware, software, and firmware.
  • the methods may be implemented in, for example, an apparatus such as, for example, a processor, which refers to processing devices in general, including, for example, a computer, a microprocessor, an integrated circuit, or a programmable logic device. Processors also include communication devices, such as, for example, computers, cell phones, portable/personal digital assistants ("PDAs”), and other devices that facilitate communication of information between end-users.
  • the appearances of the phrase “in one embodiment” or “an exemplary embodiment” or “in an embodiment” or “in one implementation” or “in an implementation”, as well any other variations, appearing in various places throughout the specification are not necessarily all referring to the same embodiment.
  • Determining the information may include one or more of, for example, estimating the information, calculating the information, predicting the information, or retrieving the information from memory.
  • Accessing the information may include one or more of, for example, receiving the information, retrieving the information (for example, from memory), storing the information, processing the information, transmitting the information, moving the information, copying the information, erasing the information, calculating the information, determining the information, predicting the information, or estimating the information.
  • Receiving is, as with “accessing”, intended to be a broad term.
  • Receiving the information may include one or more of, for example, accessing the information, or retrieving the information (for example, from memory).
  • “receiving” is typically involved, in one way or another, during operations such as, for example, storing the information, processing the information, transmitting the information, moving the information, copying the information, erasing the information, calculating the information, determining the information, predicting the information, or estimating the information.
  • implementations may produce a variety of signals formatted to carry information that may be, for example, stored or transmitted.
  • the information may include, for example, instructions for performing a method, or data produced by one of the described embodiments.
  • a signal may be formatted to carry the bitstream of a described embodiment.
  • Such a signal may be formatted, for example, as an electromagnetic wave (for example, using a radio frequency portion of spectrum) or as a baseband signal.
  • the formatting may include, for example, encoding a data stream and modulating a carrier with the encoded data stream.
  • the information that the signal carries may be, for example, analog or digital information.
  • the signal may be transmitted over a variety of different wired and/or wireless links, as is known.
  • the signal may be stored on a processor-readable medium.

Abstract

The present principles generally relate to an apparatus and a method for providing content recommendation. In one exemplary embodiment, a video is played on a device. The user may pause the video and choose a frame of the video. The user may then select and extract one or more picture elements from the frame or even the whole frame. A recommendation is then provided in response to the extracted one or more picture elements. Other picture elements not from the extracted frame and representing different search criteria may also be added to the search to derive the recommendation.

Description

METHOD AND APPARATUS FOR PROVIDING CONTENT
RECOMMENDATION
BACKGROUND OF THE INVENTION
Field of the Invention
The present principles generally relate to an apparatus and a method for providing content recommendation. In one exemplary embodiment, a video is played on a device. The user may pause the video and choose a frame of the video. The user may then select and extract one or more picture elements from the frame or even the whole frame. A recommendation is then provided in response to the extracted one or more picture elements. Other picture elements not from the extracted frame and representing different search criteria may also be added to the search to derive the recommendation.
Background Information
Different streaming media sites (e.g., Hulu, Netflix, M-GO, etc.) currently provide various user interfaces for users to search different media assets such as television shows, movies, music, etc., for view and/or purchase. The users may perform the media search on these websites typically by typing in a query string related to, e.g., movie or show titles, or other keywords, using a keyboard on their PCs, laptops, cellphones, or various other user devices. Once found, such media can be downloaded and/or streamed to the user's consumption device.
In addition, for example, a user may use the M-Go service to receive recommendations about the different media assets that are available from M-Go. Such recommendations may be made, e.g., in response to a user's profile of consumption and/or purchases of media assets, and/or criteria that a user specifies using textual input (e.g., entering "sports", "adventure" or "Tom Cruise") as noted before. The profile/query information is then inputted into a recommendation engine whereby algorithms are used to develop and output media asset recommendations.
SUMMARY OF THE INVENTION
The present invention recognizes that the existing systems and methods, however, do not provide an easy and intuitive way for a user to enter relevant search items directly from a video stream being watched. Hence, there is a need to improve the existing systems and methods for processing input video information to provide content recommendations.
In accordance with an aspect of the present principles, an apparatus is presented, comprising:
a user input configured to receive a first user command, a second user command, and a third user command;
a processor configured to play a video in response to the first user command and pause the video in response to the second user command for displaying a frame of the video, wherein the frame comprises picture elements; the processor extracts at least one picture element from the frame in response to the third user command; and the processor provides a recommendation in response to the extracted at least one picture element.
In another exemplary embodiment, a method is presented, comprising: playing a video on a device;
pausing the video for displaying a frame of the video, the frame comprising picture elements;
extracting at least one picture element from the frame; and
providing a recommendation in response to the extracted at least one picture element.
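The four claimed steps (play, pause, extract, recommend) can be sketched as follows. The player and engine interfaces are hypothetical stand-ins, not the claimed implementation:

```python
# Minimal sketch of the claimed method. The Player class and the
# frame/element dictionaries are assumed interfaces for illustration.
class Player:
    def __init__(self, video):
        self.video, self.playing, self.frame = video, False, None

    def play(self):                  # first user command
        self.playing = True

    def pause(self, frame):          # second user command: displays a frame
        self.playing, self.frame = False, frame

    def extract(self, element_ids):  # third user command
        return [e for e in self.frame["elements"] if e["id"] in element_ids]

def recommend(elements, engine):
    """Feed the search terms of the extracted elements to an engine."""
    return engine([e["search"] for e in elements])
```

Here `engine` could be any callable wrapping a search or recommendation service, e.g. the search engine 225 described later.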
In another exemplary embodiment, a computer program product stored in a non-transitory computer-readable storage medium is presented, comprising computer-executable instructions for:
playing a video on a device;
pausing the video for displaying a frame of the video, the frame comprising picture elements;
extracting at least one picture element from the frame; and providing a recommendation in response to the extracted at least one picture element.
DETAILED DESCRIPTION OF THE DRAWINGS
The above-mentioned and other features and advantages of the present invention, and the manner of attaining them, will become more apparent and the invention will be better understood by reference to the following description of embodiments of the invention taken in conjunction with the accompanying drawings, wherein:
FIG. 1 shows an exemplary process according to the present principles;
FIG. 2 shows an example system according to the present principles;
FIG. 3 shows an exemplary user interface screen and its functions according to the present principles;
FIG. 4 shows another exemplary user interface screen and its functions according to the present principles; and
FIG. 5 shows another exemplary user interface screen and its functions according to the present principles.

The examples set out herein illustrate exemplary embodiments of the present principles. Such examples are not to be construed as limiting the scope of the invention in any manner.
DETAILED DESCRIPTION

FIG. 2 shows an exemplary system according to the present principles.
As illustrated, FIG. 2 represents a system capable of receiving and processing user inputs and providing, in response, various media assets for streaming or downloading. For example, various user devices 260-1 to 260-n in FIG. 2 may communicate with an exemplary server 205 over a communication network 250 such as the internet, a wide area network (WAN), and/or a local area network (LAN). Server 205 may communicate with user devices 260-1 to 260-n in order to provide relevant information such as data, webpages, media contents, etc., not available on the local user devices 260-1 to 260-n. Server 205 may also provide additional processing of information when the processing cannot be conducted on the local user devices 260-1 to 260-n. As an example, server 205 may be a computer having a processor 210 such as, e.g., an Intel processor, running an appropriate operating system such as, e.g., Windows Server 2008 R2, Windows Server 2012 R2, or a Linux operating system.
Devices 260-1 to 260-n may access different media assets, web pages, services or databases provided by server 205 using, e.g., HTTP protocol. A well-known web server software application which may be run by server 205 to provide web pages is Apache HTTP Server software available from http://www.apache.org.
Likewise, examples of well-known media server software applications include Adobe Media Server and Apple HTTP Live Streaming (HLS) Server. Using media server software as mentioned above and/or other open or proprietary server software, server 205 may also provide media content services similar to, e.g., Amazon.com, Netflix, or M-GO. Server 205 may use a streaming protocol such as, e.g., the Apple HTTP Live Streaming (HLS) protocol, Adobe Real-Time Messaging Protocol (RTMP), or Microsoft Silverlight Smooth Streaming Transport Protocol, to transmit various media assets such as, e.g., video programs, audio programs, movies, TV shows, software, games, electronic books, electronic magazines, electronic articles, etc., to an end-user device 260-1 for purchase and/or view via streaming. In addition, a server administrator may interact with and configure server 205 to run different applications using user Input/Output (I/O) devices 215 (e.g., a keyboard and/or a display) as well known in the art. Furthermore, various webpages, data, media assets and their associated metadata may be stored in a database 225 and accessed by processor 210 as needed. Database 225 may reside in appropriate non-transitory storage media, such as, e.g., one or more hard drives and/or other suitable memory devices, as well known in the art. Similarly, computer program products for the server 205 may also be stored in such non-transitory storage media. As mentioned before, element 225 of server 205 may also represent a search engine so that media recommendations may be made, e.g., in response to a user's profile of consumption and/or purchases of media assets, and/or criteria that a user specifies using textual input (e.g., entering "sports", "adventure" or "Tom Cruise"). The profile/query information is then inputted into the recommendation engine 225 whereby search algorithms are used to develop and output media asset recommendations.
Server 205 is connected to network 250 through a communication interface 220 for communicating with other servers or web sites (not shown) and to one or more user devices 260-1 to 260-n, as shown in FIG. 2. In addition, one skilled in the art would readily appreciate that other server components, such as, e.g., ROM, RAM, power supply, cooling fans, etc., may also be needed, but are not shown in FIG. 2 to simplify the drawing. User devices 260-1 to 260-n shown in FIG. 2 may comprise one or more of, e.g., a PC, a laptop, a tablet, a cellphone, etc. One of such devices may be, e.g., a Microsoft Windows 7, Windows 8, or Windows 10 computer/tablet, an Android phone/tablet (e.g., Samsung S4, S5, S6, or Google Nexus 7 tablet), or an Apple iOS phone/tablet (e.g., iPhone 6 or iPad). A detailed block diagram of an exemplary user device according to the present principles is illustrated in block 260-1 of FIG. 2 as Device 1.
An exemplary user device 260-1 in FIG. 2 comprises a processor 265 for processing various data and for controlling various functions and components of the device 260-1, including video decoding and processing to play and display a downloaded or streamed video. The video processing capabilities of processor 265 also perform the pausing of the video and extract one or more picture elements from the paused video frame in response to user input, as described in more detail below. In addition, device 260-1 also comprises user Input/Output (I/O) devices 280 which may comprise, e.g., a touch and/or a physical keyboard for inputting user data, and/or a display, and/or a speaker for outputting visual and/or audio user data and feedback. Device 260-1 also comprises memory 285 which may represent both a transitory memory such as RAM, and a non-transitory memory such as a ROM, a hard drive or a flash memory, for processing and storing different files and information as necessary, including computer program products, webpages, user interface information, and/or user profiles as shown in FIG. 3 to FIG. 5 to be described later. Device 260-1 also comprises a communication interface 270 for connecting and communicating to/from server 205 and/or other devices, via, e.g., network 250 using, e.g., a connection through a cable network, a FIOS network, a Wi-Fi network, and/or a cellphone network (e.g., 3G, 4G, LTE), etc.

FIG. 3 to FIG. 5 illustrate exemplary user interface screens 300, 400 and 500 respectively, and their functions according to the present principles. These user interface screens and functions may be controlled and/or provided by, e.g., processor 265 in device 260-1 of FIG. 2 and/or processor 210 in web server 205 remotely. FIG. 3 to FIG. 5 will be described in detail below in connection with the exemplary process shown in FIG. 1.
FIG. 1 represents a flow diagram of an exemplary process 100 according to the present principles. Process 100 may be implemented as a computer program product comprising computer-executable instructions which may be executed by, e.g., a processor 265 in device 260-1 of FIG. 2 and/or a processor 210 in server 205 of FIG. 2. The computer program product having the computer-executable instructions may be stored in non-transitory computer-readable storage media of the respective device 260-1 and/or server 205. The exemplary control program shown in FIG. 1, when executed, facilitates processing and displaying of the user interface screens shown, for example, in FIG. 3 to FIG. 5, and controls their respective functions and interactions with a user. One skilled in the art can readily recognize that the exemplary process shown in FIG. 1 may also be implemented using a combination of hardware and software (e.g., a firmware implementation), and/or executed using logic arrays or an ASIC.
In an exemplary embodiment shown in FIG. 1, at step 110, a user may request, using one of the user I/O devices 280 (e.g., a mouse) shown in device 260-1 of FIG. 2, to download a video from a website 205 of FIG. 2 and play the video on a user device 260-1 of FIG. 2. This is also illustrated on an exemplary display screen 300 of FIG. 3. As shown in FIG. 3, a user can select to play a video 310 by moving a selector icon 305 using, e.g., a mouse, and selecting the play icon 330 on screen 300 of the user device 260-1. Once the play icon 330 is selected, the movie "The Outsiders" is played on a display area 350 of FIG. 3 by the device 260-1. The notification area 340 of FIG. 3 indicates to the user the status of the video, confirming that the video The Outsiders 310 is being played. At step 120 of FIG. 1, a user may pause the video 310 of FIG. 3 which was being played as described at step 110 of FIG. 1 above. The user may pause the video 310 of FIG. 3 by selecting a pause icon 410 of a user interface screen 400 shown in FIG. 4. As shown in FIG. 4, pausing the video 310 will cause a frame of the video 410 to be displayed in a display area 450 as shown in FIG. 4. In addition, in a non-limiting embodiment, the video display area 350 in FIG. 3, which occupies a larger area of the display screen 300 of a user device 260-1 when the video is being played, will be resized into a smaller display area 450 as shown in FIG. 4, so that other user interface elements 460 to 490 may also be displayed on the device's screen as will be explained in more detail below.
Paused video frame 410 shown in FIG. 4 comprises a plurality of exemplary picture elements 420-1 to 420-n. One example of a picture element may be an actor or actress appearing in the frame of the video being paused. For example, picture element 420-1 in video frame 410 represents the actor Tom Cruise, who appears in the paused frame of the video from the movie "The Outsiders". Another example of a picture element in the video frame 410 is a car 420-2 which may represent, e.g., car-related or racing-related search terms. Picture element 420-n of a cloud in video frame 410 may represent, e.g., weather- or storm-related search terms or criteria.
At step 130 of FIG. 1, one or more of the picture elements 420-1 to 420-n shown on video frame 410 in FIG. 4 may be extracted to a search area 440 of screen 400 of FIG. 4. In the example of FIG. 4, the picture element 420-1 representing the actor Tom Cruise has been extracted and moved to the search area 440 using, e.g., a selector icon 405 via a mouse. At step 140, search area 440 displays the one or more extracted picture elements and uses them for a search, as described in more detail below. In one non-limiting example, not shown, instead of selecting individual picture elements 420-1 to 420-n of a paused video frame 410, the whole frame may be selected and moved into the search area 440, and all of the picture elements on the video frame 410 may therefore be used as search criteria or terms.
At step 150, a recommendation is provided in response to the one or more picture elements extracted from the paused video frame 410 at step 130 above. For example, when a user extracts and moves the Tom Cruise picture element 420-1 into the search area 440, an actors/actresses recommendation box 460 appears on screen 400 of FIG. 4. The recommendation box may list different actors/actresses suggested by a search engine and/or recommendation algorithm in response to the input of the picture element Tom Cruise 420-1. For example, the search engine and/or recommendation algorithm may produce a list of persons who have co-starred with Tom Cruise in a movie. As another example, the recommendation may be based on another algorithm which determines that people who like Tom Cruise the actor also like the listed actors/actresses, or that people who like movies by Tom Cruise also like the movies starring the listed persons. For example, in the recommendation box 460 of FIG. 4, actress Michelle Monaghan 461-n is listed since she co-starred with Tom Cruise in the movie Mission: Impossible.
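The co-star suggestion described above could be computed, as one hypothetical sketch, by scanning a filmography for titles containing the seed actor. The `costar_recommendations` function name and the cast data below are illustrative only, not part of the described system.

```python
def costar_recommendations(seed_actor, filmography):
    """Return actors who have appeared in a movie with seed_actor.

    filmography maps movie title -> list of cast members.
    The data here is a toy example for demonstration.
    """
    costars = set()
    for title, cast in filmography.items():
        if seed_actor in cast:
            # Collect everyone else credited in the same title.
            costars.update(c for c in cast if c != seed_actor)
    return sorted(costars)


films = {
    "Mission: Impossible III": ["Tom Cruise", "Michelle Monaghan"],
    "The Outsiders": ["Tom Cruise", "Patrick Swayze"],
    "Ghostbusters": ["Bill Murray"],
}
```

With this toy data, seeding on "Tom Cruise" yields his co-stars from the two matching titles while ignoring the unrelated one.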
Similarly, a movies or media content recommendation box 470 also appears on screen 400 which displays, e.g., a list of movies in which Tom Cruise has starred as the media recommendations to the user, in response to the picture element of Tom Cruise 420-1 being extracted into the search area 440. In a non-limiting embodiment according to the present principles, the recommendations which substantially or exactly match one or more of the selected picture elements in search area 440 will be highlighted in the respective recommendation boxes 460 and 470. That is, for example, since the selected picture element 420-1 is Tom Cruise from the movie The Outsiders, element 461-1 "Tom Cruise" in the actors/actresses recommendation box 460 and element 471-1 "The Outsiders" in the movies and media recommendation box 470 are both highlighted to emphasize that the recommendations exactly match the search criteria corresponding to one or more of the extracted picture elements from video frame 410.
One skilled in the art can readily appreciate that, for example, facial and object recognition processing of a video frame may be utilized to convert different picture elements of a video frame into associated search strings or criteria to be used by, e.g., a search engine 225 in server 205 of FIG. 2, in order to return the recommended results to a user. Examples of facial recognition software include FotoTiger for Facebook by Applied Recognition, the Face Recognition-FastAccess app from Sensible Vision, Inc., and the Face Recognition SDK Demo from Vinisoft. Examples of object or image recognition software include TapTapSee, available on the Apple App Store, and the Android Image Recognition SDK by Catchoom. One skilled in the art may readily appreciate that one or more of these exemplary facial and object recognition apps or other software may be used to provide the facial and image recognition functionalities in accordance with the present principles to identify actors, actresses and/or different objects in a paused video frame in order to provide corresponding search strings or search criteria for performing a search in a search engine.
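The recognition SDKs named above would do the heavy lifting of detecting faces and objects; the remaining step of converting recognizer output into search criteria might look like the following sketch. The label format and mapping table are assumptions for illustration, not the output of any particular SDK.

```python
def labels_to_search_criteria(labels):
    """Map hypothetical recognizer labels to search terms.

    Labels are assumed to be strings like "face:<name>" or "object:<thing>";
    the mapping table below is illustrative only.
    """
    term_map = {
        "face:Tom Cruise": "Tom Cruise",
        "object:car": "car racing",
        "object:cloud": "weather storm",
    }
    # Unrecognized labels are silently dropped rather than guessed at.
    return [term_map[label] for label in labels if label in term_map]
```

A detector that reports the actor's face and the cloud in the paused frame would thus produce the query terms "Tom Cruise" and "weather storm" for the search engine.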
Also, in another exemplary embodiment, metadata which describe the video frame may be used to indicate the different picture elements which may be selectable and extractable by a user, and also to indicate the corresponding search representations of the different picture elements. For example, EXIF (Exchangeable Image File Format) information and other types of metadata used in a picture or a video may be employed. As is well known in the art, EXIF is a standard that specifies the formats for images, sound, and ancillary tags used by digital cameras (including smartphones), scanners and other systems handling image and sound files recorded by digital cameras. Therefore, the metadata corresponding to one or more of the extracted picture elements in search area 440 may be used by, e.g., a search engine 225 shown in server 205 of FIG. 2 to return the recommended results to a user.
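Reading such per-frame metadata to discover which picture elements are selectable, and what each one searches for, could be sketched as below. The metadata schema (keys `elements`, `id`, `selectable`, `search_terms`) is hypothetical; neither EXIF nor the present principles prescribe this layout.

```python
def selectable_elements(frame_metadata):
    """Return a mapping of selectable picture-element ids to their search terms.

    frame_metadata uses an assumed schema:
    {"elements": [{"id": ..., "selectable": bool, "search_terms": [...]}, ...]}
    """
    return {
        el["id"]: el["search_terms"]
        for el in frame_metadata.get("elements", [])
        if el.get("selectable", False)  # skip non-extractable elements
    }


# Toy metadata for the paused frame of FIG. 4.
meta = {"elements": [
    {"id": "420-1", "selectable": True, "search_terms": ["Tom Cruise"]},
    {"id": "420-2", "selectable": True, "search_terms": ["car", "racing"]},
    {"id": "background", "selectable": False, "search_terms": []},
]}
```

Only the two selectable elements survive the lookup; the background is excluded from what the user may drag into the search area.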
At step 160 of FIG. 1, one or more additional picture elements which are outside of the paused video frame 410 (that is, they are not extracted from the paused video frame) may be added to the search area by a user. This example is illustrated further in a user interface display screen 500 of FIG. 5. As described before, all of the picture elements added to and shown in search area 544 of FIG. 5, including one or more picture elements extracted from the paused video frame 510, may be used together as search criteria in a search engine 225 to provide resulting recommendations to the user, as shown at steps 130 to 160 of FIG. 1.
For example, an equipment field 590 on screen 500 lists additional picture elements which may be added by a user to search area 544, such as a basketball 591-1 which may represent, e.g., a sports genre search criterion, a cupcake 591-2 which may represent a food show genre search criterion, a gun 591-3 which may represent an action genre search criterion, etc. Accordingly, when a user selects and moves, e.g., gun icon 591-3 into search area 544, using selector icon 505 in FIG. 5, movies in the action genre such as Mission: Impossible 571-n may be added to the user recommendations (e.g., highlighted) in movies recommendation box 570 as shown in FIG. 5.
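Combining an extracted element (the actor) with an added element (the gun, representing the action genre) narrows the result set. A minimal sketch of such conjunctive matching, with an illustrative tag-based catalog that is not drawn from the patent:

```python
def combined_recommendations(criteria, catalog):
    """Recommend titles whose tags satisfy ALL search criteria.

    catalog maps title -> set of tags (actor names, genre labels).
    Both the tagging scheme and the data are toy assumptions.
    """
    wanted = set(criteria)
    # A title qualifies only if every requested criterion is among its tags.
    return sorted(t for t, tags in catalog.items() if wanted <= tags)


catalog = {
    "Mission: Impossible": {"Tom Cruise", "action"},
    "The Outsiders": {"Tom Cruise", "drama"},
}
```

Searching on the actor alone returns both titles; adding the action-genre criterion from the equipment field narrows the result to the one action title.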
Likewise, a clothing search criteria field 580 may also be presented to a user on device 260-1 of FIG. 2. Different pieces of clothing may be used to represent different additional search criteria. For example, a track outfit 581-1 may represent, e.g., the sports genre, a mobster outfit 581-2 may represent the crime genre, a business shoe 581-3 may represent a business-related genre, etc. Similarly, an emotion criteria field 595 may be used to select different emotion icons representing different search criteria. For example, a happy face icon 596-1 may represent the comedy genre, a sad face icon 596-2 may represent a dramatic tearjerker genre, etc.
In another exemplary embodiment, a user may also select and move one of the actors/actresses icons 560-1 to 560-n in the actors/actresses recommendation box 560 into the search area 544 as shown in FIG. 5. For example, FIG. 5 shows that a user has added Michelle Monaghan 560-n, the female co-star of the Mission: Impossible movie, into the search area 544. In response, Mission: Impossible 571-n is highlighted and recommended to the user since, e.g., it is the movie which stars both Tom Cruise and Michelle Monaghan.
Additionally, according to the present principles, media services may be deployed in an environment such as virtual reality (VR) where the ability to use textual input becomes much more difficult. In a VR space, a user wearing a headset such as the Oculus Rift manipulates their environment by using their head to look around, their arms/hands to manipulate objects, and their legs for mobility. Such body movements can be captured using sensors. The VR headset changes the perceived environment in response to such body movements.
The present principles described above are well suited for use in the virtual reality environment to provide recommendations for media assets. For example, a user may select a specific representation of an actor Tom Cruise and model the body into a specific position and the face into a different position using 3D VR. Similar to what has been described above, the representation of the actor can also have various pieces of equipment such as a guitar corresponding to music genre, a baseball bat corresponding to sports genre, weapons corresponding to adventure or action genre and the like. The equipment may be placed within the hands of the actor using VR, where the equipment is used for specifying a specific genre of content. The representations of body position, facial, and clothing may be pre-defined by a user, a media service, and the like such that examples of what each body position/facial/clothing representation means may be provided to a search engine and/or a search algorithm in order to obtain the corresponding recommendations.
Similar to what has been described above, the body positions of the various actors and/or facial modifications (smiling for a happy movie, frowning for a sad movie) may be used for selecting the emotional aspect of content. This information about emotion may be provided by a technique such as, e.g., that described in WO2014051644A1, titled "Context-Based Content Recommendations." The equipment/clothing may be used for selecting different genres as described above. The suggestion search engine uses such input to make recommendations to a user.
For example, a user may pick up, using 3D virtual reality, the Tom Cruise representation and model his face so that he is smiling. The recommendation engine would use Tom Cruise as a seed in the actor field and "happy" in the emotion field. Media assets that may be recommended include, e.g., Risky Business, Cocktail, Tropic Thunder, and the like. In another example, Tom Cruise is modeled to wear adventure gear using VR. The recommendation engine would use Tom Cruise as a seed in the actor field and adventure in the genre field. Media assets such as, e.g., Minority Report, Mission: Impossible I-IV, and the like may be recommended to a user in response to the provided representation of the actor. In yet another example, representations of Tom Cruise and Nicole Kidman are placed within the search area using VR. The media assets recommended will be movies that feature both Tom Cruise and Nicole Kidman, such as, e.g., Eyes Wide Shut, Days of Thunder, Far and Away, and the like.
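Translating a modeled VR representation into seed fields for the recommendation engine could be sketched as follows. The field names (`actor`, `emotion`, `genre`) and the face/equipment mappings are hypothetical illustrations of the idea, not a defined interface.

```python
def vr_seeds(representation):
    """Translate a modeled VR representation into recommendation seed fields.

    representation is an assumed dict such as
    {"actor": "Tom Cruise", "face": "smiling", "equipment": "adventure gear"}.
    """
    seeds = {"actor": representation["actor"]}

    # Facial modeling selects the emotional aspect of content.
    face = representation.get("face")
    if face == "smiling":
        seeds["emotion"] = "happy"
    elif face == "frowning":
        seeds["emotion"] = "sad"

    # Equipment placed in the actor's hands selects a genre.
    if representation.get("equipment") == "adventure gear":
        seeds["genre"] = "adventure"

    return seeds
```

A smiling Tom Cruise representation thus seeds the actor and emotion fields, while the same representation in adventure gear seeds the genre field instead.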
It is noted that other representations of objects and/or people may be used in accordance with the described principles. For example, a representation of Steven Spielberg may be used to have movies recommended that have him listed as a director for such assets.
The foregoing has provided by way of exemplary embodiments and non-limiting examples a description of the method and systems contemplated by the inventor. It is clear that various modifications and adaptations may become apparent to those skilled in the art in view of the description. However, such various modifications and adaptations fall within the scope of the teachings of the various embodiments described above.
The embodiments described herein may be implemented in, for example, a method or a process, an apparatus, a software program, a data stream, or a signal. Even if only discussed in the context of a single form of implementation (for example, discussed only as a method), the implementation of features discussed above may also be implemented in other forms (for example, an apparatus or program). An apparatus may be implemented in, for example, appropriate hardware, software, and firmware. The methods may be implemented in, for example, an apparatus such as, for example, a processor, which refers to processing devices in general, including, for example, a computer, a microprocessor, an integrated circuit, or a programmable logic device. Processors also include communication devices, such as, for example, computers, cell phones, portable/personal digital assistants ("PDAs"), and other devices that facilitate communication of information between end-users.
Reference to "one embodiment" or "an embodiment" or "an exemplary embodiment" or "one implementation" or "an implementation" of the present principles, as well as other variations thereof, mean that a particular feature, structure, characteristic, and so forth described in connection with the embodiment is included in at least one embodiment of the present principles. Thus, the appearances of the phrase "in one embodiment" or "an exemplary embodiment" or "in an embodiment" or "in one implementation" or "in an implementation", as well any other variations, appearing in various places throughout the specification are not necessarily all referring to the same embodiment.
Additionally, this application or its claims may refer to "determining" various pieces of information. Determining the information may include one or more of, for example, estimating the information, calculating the information, predicting the information, or retrieving the information from memory.
Further, this application or its claims may refer to "accessing" various pieces of information. Accessing the information may include one or more of, for example, receiving the information, retrieving the information (for example, from memory), storing the information, processing the information, transmitting the information, moving the information, copying the information, erasing the information, calculating the information, determining the information, predicting the information, or estimating the information.
Additionally, this application or its claims may refer to "receiving" various pieces of information. Receiving is, as with "accessing", intended to be a broad term. Receiving the information may include one or more of, for example, accessing the information, or retrieving the information (for example, from memory). Further, "receiving" is typically involved, in one way or another, during operations such as, for example, storing the information, processing the information, transmitting the information, moving the information, copying the information, erasing the information, calculating the information, determining the information, predicting the information, or estimating the information.
As will be evident to one of skill in the art, implementations may produce a variety of signals formatted to carry information that may be, for example, stored or transmitted. The information may include, for example, instructions for performing a method, or data produced by one of the described embodiments. For example, a signal may be formatted to carry the bitstream of a described embodiment. Such a signal may be formatted, for example, as an electromagnetic wave (for example, using a radio frequency portion of spectrum) or as a baseband signal. The formatting may include, for example, encoding a data stream and modulating a carrier with the encoded data stream. The information that the signal carries may be, for example, analog or digital information. The signal may be transmitted over a variety of different wired and/or wireless links, as is known. The signal may be stored on a processor-readable medium.
While several embodiments have been described and illustrated herein, those of ordinary skill in the art will readily envision a variety of other means and/or structures for performing the functions and/or obtaining the results and/or one or more of the advantages described herein, and each of such variations and/or modifications is deemed to be within the scope of the present embodiments. More generally, those skilled in the art will readily appreciate that all parameters, dimensions, materials, and configurations described herein are meant to be exemplary and that the actual parameters, dimensions, materials, and/or configurations will depend upon the specific application or applications for which the teachings herein is/are used. Those skilled in the art will recognize, or be able to ascertain using no more than routine experimentation, many equivalents to the specific embodiments described herein. It is, therefore, to be understood that the foregoing embodiments are presented by way of example only and that, within the scope of the appended claims and equivalents thereof, the embodiments disclosed may be practiced otherwise than as specifically described and claimed. The present embodiments are directed to each individual feature, system, article, material and/or method described herein. In addition, any combination of two or more such features, systems, articles, materials and/or methods, if such features, systems, articles, materials and/or methods are not mutually inconsistent, is included within the scope of the present embodiment.

Claims

1. A method for providing content recommendation, the method comprising:
playing a video on a device;
pausing the video for displaying a frame of the video, the frame comprising picture elements;
extracting at least one picture element from the frame; and
providing a recommendation in response to the extracted at least one picture element.
2. The method of claim 1, wherein the recommendation is based on a search from a search engine.
3. The method of claim 2, further comprising:
displaying the extracted at least one picture element in a search area for use in the search.
4. The method of claim 2, further comprising:
adding an additional picture element to the search and providing the recommendation from the search engine based additionally on the additional picture element.
5. The method of claim 4, wherein the additional picture element is provided from information outside of the frame of the video.
6. The method of claim 1 , wherein the extracted at least one picture element represents at least one of: a person, an emotion, a piece of clothing, and a piece of equipment.
7. The method of claim 5, wherein the additional picture element represents at least one of: a person, an emotion, a piece of clothing, and a piece of equipment.
8. The method of claim 7, wherein the piece of equipment is sports related and the recommendation is sports related content.
9. An apparatus for providing content recommendation, comprising:
a user input configured to receive a first user command, a second user command, and a third user command;
a processor configured to play a video in response to the first user command and pause the video in response to the second user command for displaying a frame of the video, the frame comprising picture elements; the processor extracts at least one picture element from the frame in response to the third user command; and the processor provides a recommendation in response to the extracted at least one picture element.
10. The apparatus of claim 9, wherein the recommendation is based on a search from a search engine.
11. The apparatus of claim 10, comprising:
the processor further configured to display the extracted at least one picture element in a search area for use in the search.
12. The apparatus of claim 10, comprising:
the processor further configured to add an additional picture element to the search in response to another user command and provide the recommendation from the search engine based additionally on the additional picture element.
13. The apparatus of claim 12, wherein the additional picture element is provided from information outside of the frame of the video.
14. The apparatus of claim 9, wherein the extracted at least one picture element represents at least one of: a person, an emotion, a piece of clothing, and a piece of equipment.
15. The apparatus of claim 13, wherein the additional picture element represents one of: a person, an emotion, a piece of clothing, and a piece of equipment.
16. The apparatus of claim 15, wherein the piece of equipment is sports related and the recommendation is sports related content.
17. A computer program product stored in non-transitory computer-readable storage media comprising computer-executable instructions for:
playing a video on a device;
pausing the video for displaying a frame of the video, the frame comprising picture elements;
extracting at least one picture element from the frame; and
providing a recommendation in response to the extracted at least one picture element.
18. The computer program product of claim 17, wherein the recommendation is based on a search from a search engine.
19. The computer program product of claim 17, further comprising:
displaying the extracted at least one picture element in a search area for use in the search.
20. The computer program product of claim 17, further comprising:
adding an additional picture element to the search and providing the recommendation from the search engine based additionally on the additional picture element.
PCT/US2016/022534 2015-03-25 2016-03-16 Method and apparatus for providing content recommendation WO2016153865A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201562138007P 2015-03-25 2015-03-25
US62/138,007 2015-03-25

Publications (1)

Publication Number Publication Date
WO2016153865A1 true WO2016153865A1 (en) 2016-09-29

Family

ID=55646886

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2016/022534 WO2016153865A1 (en) 2015-03-25 2016-03-16 Method and apparatus for providing content recommendation

Country Status (1)

Country Link
WO (1) WO2016153865A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112699274A (en) * 2020-12-25 2021-04-23 北京达佳互联信息技术有限公司 Object searching method and device and computer storage medium

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2680164A1 (en) * 2012-06-28 2014-01-01 Alcatel-Lucent Content data interaction
WO2014051644A1 (en) 2012-09-28 2014-04-03 Thomson Licensing Context-based content recommendations

Similar Documents

Publication Publication Date Title
JP6677781B2 (en) Content display method, device and storage medium
US10324940B2 (en) Approximate template matching for natural language queries
US20200014979A1 (en) Methods and systems for providing relevant supplemental content to a user device
KR102148339B1 (en) Digital media content management system and method
CN107087224B (en) Content synchronization apparatus and method
US9424471B2 (en) Enhanced information for viewer-selected video object
US20130124551A1 (en) Obtaining keywords for searching
WO2019231559A1 (en) Interactive video content delivery
EP2728859B1 (en) Method of providing information-of-users' interest when video call is made, and electronic apparatus thereof
EP2575372A2 (en) Method of managing contents and image display device using the same
US11630862B2 (en) Multimedia focalization
WO2016064670A1 (en) Systems and methods for generating media asset recommendations using a neural network generated based on consumption information
US20200021872A1 (en) Method and system for switching to dynamically assembled video during streaming of live video
US20140372424A1 (en) Method and system for searching video scenes
EP3525475A1 (en) Electronic device and method for generating summary image of electronic device
US9733795B2 (en) Generating interactive menu for contents search based on user inputs
CN109076265B (en) Display device providing crawling function and operation method thereof
JP2020520489A (en) Video display device and video display method
WO2016153865A1 (en) Method and apparatus for providing content recommendation
US11843829B1 (en) Systems and methods for recommending content items based on an identified posture
Wilkinson Media of things: supporting the production and consumption of object-based media with the internet of things
WO2016094206A1 (en) Method and apparatus for processing information
KR20220066724A (en) An electronic apparatus and a method of operating the electronic apparatus
KR20240060207A (en) Method and apparatus for scene analysis in contents streaming system

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16713684

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 16713684

Country of ref document: EP

Kind code of ref document: A1