WO2018055727A1 - Server, control method therefor, and computer program - Google Patents

Server, control method therefor, and computer program

Info

Publication number
WO2018055727A1
Authority
WO
WIPO (PCT)
Prior art keywords
content
user
information
evaluation information
scene
Prior art date
Application number
PCT/JP2016/078012
Other languages
English (en)
Japanese (ja)
Inventor
Adiyan Mujibiya
Original Assignee
Rakuten, Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Rakuten, Inc.
Priority to JP2018540558A (JP6501241B2)
Priority to PCT/JP2016/078012 (WO2018055727A1)
Publication of WO2018055727A1

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/25Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication, learning user preferences for recommending movies
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/25Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication, learning user preferences for recommending movies
    • H04N21/258Client or end-user data management, e.g. managing client capabilities, user preferences or demographics, processing of multiple end-users preferences to derive collaborative data

Definitions

  • the present invention relates to a server, a control method thereof, and a computer program.
  • Patent Document 1 discloses a technique of analyzing a moving image, specifying a keyword, and creating a database.
  • the present invention reduces the number of searches performed by the user by suggesting content that the user has not yet viewed and that matches the user's preference, based on evaluations the user has made in the past. This makes it possible to save communication resources and to reduce the processing load on the server.
  • a server corresponding to one embodiment of the invention for solving the above problem is a server that provides content suggestions to a user terminal, comprising: storage means for storing, for each of a plurality of users, evaluation information of content that the user has viewed in the past; processing means for executing processing that determines, based on the evaluation information, first content to be suggested to a first user among the plurality of users; and communication means for transmitting information related to the first content to the user terminal of the first user. The processing means generates preference information representing the preference of the first user based on the evaluation information of content evaluated by the first user in the past, searches the storage means, based on the preference information, for content having evaluation information corresponding to the preference of the first user, and determines the searched content as the first content.
  • a server corresponding to another embodiment of the invention for solving the above problem is a server that provides content to a user terminal, comprising: storage means for storing, for each of a plurality of users, evaluation information of content that the user has viewed in the past; processing means for executing processing that determines, based on the evaluation information, first content to be suggested to a first user among the plurality of users; and communication means for transmitting information related to the first content to the user terminal of the first user. The processing means specifies, among the content evaluated by the first user in the past, second content that serves as a reference for the suggestion; identifies, among the plurality of users, second users other than the first user who have evaluated the second content; determines the degree of commonality of preference between the first user and each second user based on the evaluation information of content viewed by the first user in the past and the evaluation information of content viewed by the second user; and determines, among the content viewed by a third user determined to have a high degree of commonality among the second users, content that the first user has not yet viewed as the first content.
  • according to the present invention, by suggesting content that the user has not yet viewed and that matches the user's preference based on evaluations the user has made in the past, the number of searches performed by the user can be reduced, communication resources can be saved, and the processing load on the server can be reduced.
  • a block diagram showing an example of the hardware configuration of the server 103 corresponding to the embodiment of the invention.
  • a block diagram showing the device configuration of the client 101 corresponding to the embodiment of the invention.
  • a flowchart showing an example of the content providing process in the server 103 corresponding to the embodiment of the invention.
  • a diagram showing an example of the data structure of information registered in the user information database 104 corresponding to the embodiment of the invention.
  • a flowchart showing an example of the flow of suggestion generation and content selection screen transmission processing corresponding to the embodiment of the invention.
  • FIG. 1 is a block diagram showing an overall configuration of a content providing system corresponding to the present embodiment.
  • the content providing system is configured by connecting a user terminal and a server to a network.
  • the server manages user information in addition to content.
  • the server may provide a portal site, and the user may be a registered user of the portal site.
  • Clients 101a, 101b, and 101c are user terminals that are operated by users and that, after receiving user authentication from the server, can view and evaluate content managed by the server.
  • the server 103 is a device that authenticates the user of the client 101 and provides content to the client 101 used by the authenticated user.
  • the client 101 and the server 103 are each connected to the network 102 and can communicate with each other.
  • the network 102 can be constructed as, for example, the Internet, a local area network (LAN), or a wide area network (WAN).
  • the Internet is a network in which networks all over the world are connected to each other. However, the network 102 may be a network that can be connected only within a specific organization, such as an intranet.
  • a user information database 104 and a content database 105 are connected to the server 103.
  • the client 101 can view the content provided by the server 103 by a user operation.
  • the viewing of the content can be executed by streaming that is simultaneously reproduced while being downloaded, for example.
  • the client 101 is a user terminal, an information processing apparatus, or a communication apparatus, and includes, for example, a notebook personal computer, a desktop personal computer, a portable information terminal, a mobile phone, a smartphone, and a tablet terminal. It is assumed that the client 101 is installed with so-called Internet browser software (streaming content can be played back with a plug-in) or a player application for playing back streaming content.
  • the client 101 is connected to the network 102 by wireless data communication means such as a wireless LAN or LTE.
  • the network 102 may be further configured to be accessible by a LAN including a network cable such as Ethernet (registered trademark).
  • the server 103 manages the user information database 104, holds registration information for each user of the client 101, and, when a user wants to view content, can determine whether or not that user has the authority to receive the streaming service.
  • the server 103 also manages content data stored in the content database 105, and provides (sends) specified content to the client in response to a streaming request from the client 101.
  • the evaluation for the content obtained from the user is accumulated in the content database 105 and managed as content evaluation information.
  • the server 103 can generate a content suggestion (also referred to as a proposal or recommendation) for the user of the client 101 and transmit it to the client 101.
  • the server 103 is connected to the user information database 104 and the content database 105 via, for example, a LAN.
  • Each of the user information database 104 and the content database 105 is an information processing apparatus in which predetermined database software is installed, and manages various data.
  • the user information database 104 manages registration information for each user. Specifically, a user identifier (user ID) for uniquely identifying each user, user registration information for determining whether or not the user is a registered user (for example, a set of a user name and a password), and content identification information (content IDs) identifying the content the user has already viewed and the content the user has evaluated are stored in association with one another.
  • the content database 105 stores and manages content data provided from the server 103 to the client 101.
  • the content data includes data such as images, moving images, and sounds. These content data are assigned content IDs for uniquely identifying the content data.
  • evaluation information such as ratings, comments, and attributes assigned by users who viewed the corresponding content is also stored in association with the content ID. Details of the evaluation information will be described later with reference to FIG. 6.
  • the server 103, the user information database 104, and the content database 105 are described as being realized by physically independent information processing apparatuses, but the embodiment of the present invention is not limited thereto. For example, these may be realized by a single information processing apparatus.
  • each device such as the server 103 may be configured redundantly or distributedly by a plurality of information processing devices.
  • although the user information database 104 has been described as being connected to the server 103 via a LAN or the like, the user information database 104 may, for example, be configured to communicate with the server 103 via the network 102 or an intranet (not shown). The same applies to the relationship between the server 103 and the content database 105.
  • the user information managed by the user information database 104 and the content-related data managed by the content database 105 may be managed integrally.
  • FIG. 2 is a block diagram illustrating a device configuration of the server 103.
  • a CPU 200 executes application programs, an operating system (OS), control programs, and the like stored in a hard disk device (hereinafter referred to as HD) 205, and controls the temporary storage in a RAM 202 of information, files, and the like necessary for program execution. The CPU 200 also controls data transmission and reception with external devices via the interface 208, analyzes and processes data received from external devices, and generates data (including processing requests and data requests) to be transmitted to external devices.
  • ROM 201 stores various data such as a basic I / O program and an application program for executing streaming.
  • the RAM 202 temporarily stores various data and functions as a main memory, work area, and the like of the CPU 200.
  • the external storage drive 203 is an external storage drive for realizing access to a recording medium, and can load a program or the like stored in the medium (recording medium) 204 into the computer system.
  • the media 204 is, for example, a floppy (registered trademark) disk (FD), CD-ROM, CD-R, CD-RW, PC card, DVD, Blu-ray (registered trademark), IC memory card, MO, memory.
  • the external storage device 205 uses an HD (hard disk) that functions as a large-capacity memory.
  • the HD 205 stores application programs, OS, control programs, related programs, and the like.
  • a nonvolatile storage device such as a flash (registered trademark) memory may be used instead of the hard disk.
  • the instruction input device 206 corresponds to a keyboard, a pointing device (such as a mouse), a touch panel, or the like.
  • the output device 207 outputs a command input from the instruction input device 206, a response output of the server 103 in response thereto, and the like.
  • the output device 207 includes a display, a speaker, a headphone terminal, and the like.
  • a system bus 209 manages the flow of data in the information processing apparatus.
  • the interface (hereinafter referred to as I / F) 208 plays a role of mediating exchange of data with an external device.
  • the I/F 208 can include a wireless communication module, which can include well-known circuitry such as an antenna system, an RF transceiver, one or more amplifiers, a tuner, one or more oscillators, a digital signal processor, a CODEC chipset, a subscriber identity module card, and memory.
  • a wired communication module for wired connection can be included.
  • the wired communication module enables communication with other devices via one or more external ports. It can also include various software components that process the data.
  • the external port is coupled to another device directly via Ethernet, USB, IEEE1394, or indirectly via a network.
  • the database 210 is connected to the system bus 209 and stores and manages predetermined data under the control of the CPU 200.
  • the database 210 is a general term for the user information database 104 or the content database 105.
  • in order to execute the processing corresponding to the present embodiment, the program may be loaded into the RAM 202 from the HD 205, in which it is already installed, each time the program is run. It is also possible to record the program according to the present embodiment in the ROM 201, configure it as part of the memory map, and have the CPU 200 execute it directly. Furthermore, the corresponding program and related data can be loaded directly from the medium 204 into the RAM 202 and executed.
  • FIG. 3 is a block diagram illustrating an example of the hardware configuration of the client 101.
  • the user information database 104 and the content database 105 as the information processing apparatus described above may also be configured with a similar or equivalent hardware configuration.
  • the functions and roles of the CPU 300, ROM 301, RAM 302, external storage drive 303, media 304, HD 305, instruction input device 306, output device 307, I/F 308, and system bus 309 of the client 101, and the relationships between them, are similar or equivalent to those described with reference to FIG. 2.
  • the CPU 300 of the client 101 expands the content data provided by streaming from the server 103 and stored in the RAM 302, and converts it into data in a format that can be output by the output device 307.
  • the RAM 302 temporarily stores content data provided by streaming from the server 103.
  • the user can input commands and the like for controlling the client 101. For example, it is possible to input an instruction to start streaming, an evaluation of the content being viewed, and the like.
  • the output device 307 can output and display the video of the content data developed by the CPU 300 on a display, and output the sound from a speaker or a headphone terminal.
  • FIG. 4 is a flowchart showing an example of the overall flow of content providing processing in the server 103, corresponding to the embodiment of the invention.
  • the processing corresponding to FIG. 4 is realized by the CPU 200 executing a processing program stored in the HD 205 or the database 210 by the server 103.
  • FIG. 5 is a diagram illustrating an example of a data configuration of the user information database 104.
  • FIG. 6 is a diagram illustrating an example of a data configuration of the content database 105.
  • Communication between the client 101 and the server 103 can be realized by using a communication function of a web browser executed on the client 101 or a communication function of a plug-in (extension program) of the web browser.
  • for example, it can be realized according to the HTTP protocol using JavaScript (registered trademark).
  • alternatively, Flash or the like may be used, or communication according to a protocol other than HTTP may be used.
  • the CPU 200 receives a login request from the client 101 via the network 102.
  • the login request transmitted by the client 101 includes at least user registration information including a user name and a password for specifying a user who operates the client 101.
  • the CPU 200 extracts the user registration information from the login request.
  • in step S403, it is determined whether the user who made the login request is registered in the user information database 104. Specifically, based on the acquired user registration information, it is determined whether registration information matching the combination of the user name and the password exists in the user information database 104 and whether the user ID associated with that registration information can be specified.
  • the user information database 104 is composed of a user ID 501 for uniquely identifying each user, user registration information consisting of a set of a user name 502 and a password 503 for determining that the user is a registered user, evaluated content 504, which is information identifying the content each user has evaluated in the past, and viewed content 505, which is information identifying the content each user has viewed. A predetermined user identifier uniquely assigned to each user is registered in the user ID 501. A user name and password arbitrarily set by the user are registered in the user name 502 and the password 503 in association with the user ID 501.
  • in the evaluated content 504, identification information identifying the content that the user has evaluated in the past is registered.
  • identification information for uniquely identifying content is referred to as “content ID”.
  • the content ID is identification information given to data units such as specific audio data, moving image data, and image data, for example.
  • in the viewed content 505, information identifying all the content viewed by the user is registered, regardless of whether or not a comment has been made.
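  • as a minimal illustration of the record structure just described, the following Python sketch models one row of the user information database 104; the type and field names are hypothetical, and the numbers in the comments refer to the reference numerals 501 to 505.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class UserRecord:
    """One entry of the user information database 104 (a sketch, not the actual schema)."""
    user_id: str                                                 # 501: unique user identifier
    user_name: str                                               # 502: registered user name
    password: str                                                # 503: registered password
    evaluated_content: List[str] = field(default_factory=list)   # 504: content IDs the user evaluated
    viewed_content: List[str] = field(default_factory=list)      # 505: content IDs the user viewed

# example: a user who viewed two items and evaluated one of them
alice = UserRecord("U001", "alice", "s3cret",
                   evaluated_content=["C010"],
                   viewed_content=["C010", "C011"])
```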
  • in step S405, the CPU 200 determines whether a digest playback request for content included in the content selection screen has been received from the client 101. If no digest playback request has been received from the client 101 ("NO" in S405), the process proceeds to S407. If a digest playback request has been received ("YES" in S405), the CPU 200 transmits a digest playback screen to the client 101 in S406, and generates and transmits a digest according to the input received from the client 101 via that screen. Thereafter, the process returns to S405 and processing continues. Details of the processing in S406 will be described later.
  • the CPU 200 determines whether a content viewing request has been received from the client 101 or not.
  • the user of the client 101 refers to the content selection screen received in S404 and displayed on the display, selects content to view with reference to the suggestions, and a viewing request is transmitted from the client 101 to the server 103. If the CPU 200 determines in S407 that a viewing request has been received ("YES" in S407), the process proceeds to S408; if no viewing request has been received ("NO" in S407), the process returns to S405 and processing continues.
  • the CPU 200 identifies the content to be streamed and its content data based on the content ID included in the received viewing request, and starts streaming the content.
  • the server 103 transmits the specified content data to the client 101.
  • the client 101 decodes the received content data and reproduces and displays it on the display.
  • the user can input an evaluation or the like for the reproduction content, and the evaluation or the like by the user is transmitted from the client 101 to the server 103.
  • the CPU 200 receives the evaluation transmitted from the client 101 to the server 103.
  • FIG. 7 is a flowchart illustrating an example of processing related to suggestion generation and transmission of a content selection screen corresponding to the embodiment.
  • in step S701, the CPU 200 refers to the user information database 104 and identifies content that the user has evaluated in the past. For example, in the data configuration of the user information database 104 illustrated in FIG. 5, identification information of each user's evaluated content is registered as the evaluated content 504, and this information is acquired. In the subsequent S702, the CPU 200 extracts evaluation information indicating the details of the evaluation made by the user for each identified content.
  • FIG. 6 shows an example of the data structure of the evaluation information table corresponding to the embodiment.
  • the evaluation information table 600 is held in the content database 105.
  • the evaluation information table 600 includes a content ID 601, which is identification information for uniquely identifying content registered in the content database 105, a user ID 602, which is identification information for uniquely identifying a user, a rating 603 indicating each user's graded evaluation of each content, a comment 604 given by each user for each content, and attributes 605 of each content. In S702, the rating, comment, and attributes are extracted as evaluation information in association with the content ID and the user ID.
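  • similarly, one row of the evaluation information table 600 could be sketched as follows; again the type and field names are hypothetical, with the reference numerals 601 to 605 shown in the comments.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class EvaluationRecord:
    """One row of the evaluation information table 600 (illustrative only)."""
    content_id: str                                       # 601: identifies the content
    user_id: str                                          # 602: identifies the evaluating user
    rating: int                                           # 603: graded evaluation, e.g. 1 (lowest) to 5 (highest)
    comment: str = ""                                     # 604: free-text comment by this user
    attributes: List[str] = field(default_factory=list)   # 605: tags shared by all users for this content

row = EvaluationRecord("C010", "U001", 5, "great dance number", ["musical", "dance"])
```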
  • Rating 603 indicates the evaluation or score for the entire content in multiple stages. For example, in the case of five-level evaluation, the highest evaluation is “5” and the lowest evaluation is “1”.
  • the number of steps may be arbitrary. For example, it may be 10 steps or 3 steps.
  • the comment 604 is information arbitrarily input by the user regarding the content; unlike the rating, which selects one of a plurality of levels, it can include the result of arbitrary text input.
  • the comment 604 may be generated as an evaluation for the entire content, or may be generated for a specific scene (or segment, the same applies hereinafter) of the content. In the latter case, the content can be divided into a plurality of scenes in advance, and the comment and evaluation arbitrarily input by the user during the reproduction of the content can be associated with any one of the scenes.
  • the attribute 605 is information commonly given to a plurality of users for the same content. Attributes can also be called tags or metadata.
  • the attributes may include specific keywords that frequently appear in comments and, regardless of whether they appear frequently in comments, information that the server 103 extracts for each scene based on information given in advance or on a machine learning method described later. The attributes can further include the genre of the content and, if the content is a movie, the director, performers, release year, and any other information related to the movie.
  • the attribute may be registered in association with the scene included in the content.
  • information extracted from audio / video included in a scene can be registered as an attribute.
  • the type of a scene can be specified by voice recognition processing. More specifically, a scene containing car engine sounds, brake sounds, collision sounds, and the like can be identified by voice analysis as a "car chase" scene, so "car chase" can be registered as an attribute of that scene.
  • the speech may be converted into text, and the scene type may be specified from the obtained character string. More specifically, for a scene in which lovers talk about love, a character string expressing affection can be extracted from their words, and the scene can be specified as a "love" scene based on the extracted character string.
  • the scene type can also be specified by analyzing the video of the scene. For example, for a scene in which the main character is crying, face recognition processing identifies the person's face and determines whether the person's facial expression changes or tears flow, whereby the scene can be determined to be a "sad" scene. Further, by combining image recognition and voice recognition, the scene type can be specified in a more fine-grained manner.
  • in this way, scene types such as a "battle" scene, a "dance" scene, a "love" scene, a "serious" scene, a "tense" scene, a "happy" scene, or a "sad" scene are identified and can be registered as attributes of the scene.
  • the automatic scene specification may be executed based on machine learning. Further, the scenes to be registered may be only a part of the scenes instead of all the scenes.
  • the attribute registration target can be determined based on the users' scene evaluations. For example, a scene whose average evaluation value by users is at or above a certain value (high evaluation) or at or below a certain value (low evaluation) can be registered.
  • words frequently used in comments regarding a specific scene may be registered as attributes in association with that scene. For example, when the word "dance" appears in the comments of many users regarding a certain scene of the content, "dance" is registered in association with that scene. As a result, when a search is performed over the attributes 605 using the keyword "dance", that scene can be found.
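  • a minimal sketch of this frequency-based registration, assuming comments are plain strings and using an illustrative 50% threshold; both the threshold and all names are assumptions of the example, not values from the publication.

```python
import math
from collections import Counter
from typing import Dict, List

def frequent_comment_words(scene_comments: Dict[str, str],
                           min_user_fraction: float = 0.5) -> List[str]:
    """Return words that appear in the scene comments of many users (candidate attributes)."""
    users_per_word = Counter()
    for comment in scene_comments.values():
        # count each word at most once per user so a single long comment cannot dominate
        for word in set(comment.lower().split()):
            users_per_word[word] += 1

    needed = math.ceil(len(scene_comments) * min_user_fraction)
    return sorted(w for w, n in users_per_word.items() if n >= needed)

# "dance" appears in 2 of 3 users' comments, so it becomes a candidate attribute for the scene
print(frequent_comment_words({"U001": "amazing dance scene",
                              "U002": "the dance choreography",
                              "U003": "great music"}))   # ['dance']
```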
  • FIG. 9 is a diagram showing an example of associating scenes and comments in the content.
  • the content 900 is divided into a plurality of scenes (scene 1 to scene n), and arbitrary character input and evaluation (here, five levels) are given to each scene as comments. Comments can be input freely by each user.
  • FIG. 10 shows an example of the data structure of comment information. The comment information as a whole includes a comment ID 1001, an overall comment 1002, and a scene comment unit 1003.
  • the comment ID 1001 is information for uniquely identifying a comment, and may be determined based on the relationship between the corresponding content and the user who generated the comment.
  • the overall comment 1002 includes a comment for the entire content.
  • the scene comment unit 1003 is configured to include a comment 1004 for each scene included in the content.
  • the comment 1004 for each scene includes a scene ID 1005, which is identification information (such as a scene number) specifying the scene within the content, a comment 1006 input by the user for that scene, and an evaluation 1007 input by the user for that scene.
  • the value of the evaluation 1007 may be given an intermediate value as a default. For example, it is possible to set “3” for a five-level evaluation and “2” for a three-level evaluation.
  • the attributes are information given per content rather than per user. They can therefore be managed as per-scene attributes in the same data structure as in FIG. 10. In this case, an attribute ID may be registered in place of the comment ID 1001, an overall attribute in place of the overall comment 1002, and the attributes of each scene in place of the comment 1004 for each scene.
  • alternatively, the content attributes may be managed in the data structure of FIG. 10 in association with each user's comments. In that case, since the attributes of each scene can be included in the data structure shown in FIG. 10 in association with the scene ID 1005, the comment, evaluation, and attributes of each scene can all be managed in association with the scene ID.
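  • the nested structure of FIG. 10 might be modelled as in the sketch below; the default per-scene evaluation of 3 follows the five-level example above, and all type names are hypothetical.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class SceneComment:
    scene_id: int          # 1005: scene number within the content
    comment: str = ""      # 1006: text the user entered for this scene
    evaluation: int = 3    # 1007: per-scene rating; 3 is the assumed five-level default

@dataclass
class CommentInfo:
    comment_id: str                                                   # 1001: identifies (content, user)
    overall_comment: str = ""                                         # 1002: comment on the whole content
    scene_comments: List[SceneComment] = field(default_factory=list)  # 1003/1004: one entry per scene

info = CommentInfo("C010-U001", "fun musical",
                   [SceneComment(1, "great opening number", 5),
                    SceneComment(2)])        # scene 2 keeps the default evaluation of 3
```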
  • the CPU 200 determines content to be recommended (suggested) to the user according to the evaluation information extracted in S702. Details of the method for determining the content to be suggested will be described later with reference to FIGS. 11 to 13.
  • the CPU 200 generates a content selection screen for selecting content to be viewed by the user.
  • in the subsequent S705, the CPU 200 transmits the generated content selection screen to the client 101.
  • FIG. 8A is a screen for selecting content to be viewed from the suggested content.
  • the screen 800 displays content information 801 and a thumbnail image 802 of the corresponding content.
  • the content information 801 may be generated from information included in the attribute 605 or may be separately stored in the content database 105 in association with the content.
  • As the thumbnail image 802, an image stored in advance in the content database 105 in association with the content can be used.
  • a viewing request can be transmitted from the client 101 to the server 103. Further, a separate playback button may be prepared.
  • the thumbnail 802 can be used to instruct the start of digest viewing.
  • a digest viewing request can be transmitted from the client 101 to the server 103.
  • the digest may be played back in the area where the thumbnail 802 is displayed in the layout of FIG. 8A, or may be played back by starting up a screen as shown in FIG. 14 as will be described later.
  • when the content viewing request transmitted from the client 101 to the server 103 is recognized by the CPU 200 of the server 103 in S407 of FIG. 4 and streaming viewing of the content is started in S408, the content viewing screen shown in FIG. 8B is displayed on the display of the client 101.
  • viewing content received by the client 101 from the server 103 by streaming is displayed in the content display area 811.
  • the slide bar 812a and the rating 812b are input areas for inputting, for example, five levels of evaluation, and the comment 813 is a character input area for inputting a comment.
  • the input of the rating and the comment is associated with the scene displayed in the content display area 811 in the content being reproduced, and is transmitted from the client 101 to the server 103.
  • the evaluation (rating) can be input using two types of methods on the screen 810.
  • color information corresponding to the evaluation may be given to the slide bar. For example, the color may become redder as the position of the slide bar is higher and bluer as it is lower.
  • the assignment of the position and the color can be arbitrarily set by the user.
  • the edge 815 of the display area 811 may be given a color corresponding to the color of the slide bar. As a result, the display area 811 is bordered with the color corresponding to the slide bar position set by the user, making it possible to grasp one's own evaluation of the scene more intuitively.
  • the position of the slide bar 812a moved by the user may be returned to the center position (3 of 5 levels in the evaluation) every time the scene is switched.
  • the slide bar can be white or colorless at the center position, or any color designated by the user. Therefore, the color given to the edge 815 of the display area 811 as a default before the evaluation is white, colorless, or a user's favorite color, and is less likely to hinder the user from viewing the content.
  • as the other method, the user can directly input a numerical value corresponding to the evaluation.
  • for example, to input the evaluation "5", clicking the control of the rating 812b displays the values 1 to 5, and 5 can be selected.
  • alternatively, when a numeric key is pressed, the numerical value corresponding to the pressed key is regarded as an input to the rating 812b.
  • the display of the position of the slide bar 812a may be controlled so as to be interlocked with the input numerical value.
  • a numerical value corresponding to the position of the slide bar 812a may be displayed in the column of the rating 812b.
  • the check box 814 can be selected when the user has input comments partway through the content and wishes to omit entering comments for the remainder.
  • in that case, comments on the content from other users whose preferences are similar to the user's are used to supplement the user's comments, and comment information as shown in FIG. 10 is generated for the entire content.
  • the other users whose comments are used for this supplementation can be users whose evaluations are the same as or similar to the viewing user's evaluations up to the scene the viewing user last commented on, or other users with a high degree of preference commonality, which is described later with reference to FIG. 13.
  • FIGS. 11 and 13 are flowcharts illustrating examples of processing for determining content to be suggested to the user, corresponding to the present embodiment.
  • FIG. 12 is a diagram illustrating an example of a display screen for designating an item that is a reference for content suggestion.
  • in step S1101, the CPU 200 acquires, from the evaluation information table 600 of the content database 105, the evaluation information of content evaluated by the user. Since the user ID 602 is registered in the evaluation information table 600, the user's evaluation information can be acquired by specifying the user ID.
  • the CPU 200 groups the contents according to the information of the rating 603 of the acquired evaluation information. When the rating has five levels, five groups with the rating values (scores) “5”, “4”, “3”, “2”, and “1” are generated.
  • the CPU 200 then creates, within each group generated in S1102, subgroups of content having a common attribute based on the information in the attributes 605. If the content is movies, subgroups are created based on commonality of movie genre, performers, directors, or other arbitrary attributes. When the number of contents included in a subgroup does not exceed a predetermined value, the subgroup based on that attribute need not be generated; in other words, subgroups are created when, for example, the user likes a specific director, a specific performer, or a specific genre.
  • the CPU 200 extracts a common keyword from the information in the comment 604 for the content included in the subgroup created in S1103.
  • the keyword may be a keyword used for all of the content included in the subgroup, or may be a keyword used for a certain percentage of content.
  • the CPU 200 generates preference information based on the combination of ratings, attributes, and keywords extracted by the above processing. For example, preference information such as a rating of “5”, an attribute “musical”, and a keyword “dance” can be generated. Note that the type and number of information included in the attributes and keywords are not limited to one. If there are multiple pieces of common information, they can be included. The more information that is included, the higher the accuracy of the suggestion.
  • when the same word appears both as an attribute and as a keyword, it may be included in only one of them.
  • overlapping words may be included only in attributes.
  • the word may be included in the attribute only when the word is used to create a subgroup based on the attribute, and may be included in the keyword in other cases.
  • the processing from S1101 to S1105 need not be performed in real time; the preference information may be generated in advance for each user.
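  • as an illustration only, the grouping of S1101 to S1105 could be sketched as below; evaluation records are modelled as plain dicts with rating / attributes / comment keys, and the subgroup size and keyword thresholds are assumptions of the example.

```python
from collections import Counter, defaultdict
from typing import Dict, List, NamedTuple

class Preference(NamedTuple):
    rating: int            # rating value shared by the group
    attribute: str         # attribute common to the subgroup
    keywords: List[str]    # keywords common to the subgroup's comments

def generate_preference_info(evaluations: List[Dict],
                             min_subgroup_size: int = 2,
                             keyword_fraction: float = 0.5) -> List[Preference]:
    """Sketch of S1101-S1105 for one user: derive (rating, attribute, keywords) combinations."""
    preferences: List[Preference] = []

    # group the user's evaluations by rating value (S1102)
    by_rating: Dict[int, List[Dict]] = defaultdict(list)
    for ev in evaluations:
        by_rating[ev["rating"]].append(ev)

    for rating, group in by_rating.items():
        # build subgroups of content sharing an attribute (S1103)
        by_attr: Dict[str, List[Dict]] = defaultdict(list)
        for ev in group:
            for attr in ev["attributes"]:
                by_attr[attr].append(ev)

        for attr, subgroup in by_attr.items():
            if len(subgroup) < min_subgroup_size:
                continue   # too few items to indicate a real preference
            # keywords used in the comments of a sufficient share of the subgroup (S1104)
            counts = Counter()
            for ev in subgroup:
                counts.update(set(ev["comment"].lower().split()))
            keywords = sorted(w for w, n in counts.items()
                              if n >= keyword_fraction * len(subgroup))
            # combine rating, attribute and keywords into one piece of preference information (S1105)
            preferences.append(Preference(rating, attr, keywords))
    return preferences

# might yield, e.g., Preference(rating=5, attribute='musical', keywords=['dance', ...])
```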
  • the CPU 200 searches the evaluation information table 600 for content having evaluation information associated with the preference information generated in S1105.
  • for example, based on the preference information mentioned above, content having the "musical" attribute and containing "dance" in its comments is specified, and from among these, content whose rating is "5" is selected.
  • an average rating score may be calculated, and the rating condition may be regarded as satisfied if the difference between the average score and 5 is smaller than a predetermined threshold. This is because ratings vary between users, and this variation needs to be taken into account when averaging.
  • alternatively, the highest rating score or the score selected by the largest number of users may be used, or, if another user having a preference similar to the user's can be specified, the score given by that other user may be referred to.
  • a plurality of preference information may be generated according to a combination of rating, attribute, and keyword for the same user.
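  • continuing the sketch above, the search based on one piece of preference information might look like the following; the average-rating tolerance mirrors the threshold idea described above and is an assumed value, as are all names.

```python
from statistics import mean
from typing import Dict, List

def search_matching_content(evaluation_rows: List[Dict],
                            preference: Dict,
                            rating_tolerance: float = 0.5) -> List[str]:
    """Sketch: find content IDs whose evaluation information matches one preference."""
    per_content: Dict[str, List[Dict]] = {}
    for row in evaluation_rows:
        per_content.setdefault(row["content_id"], []).append(row)

    matches = []
    for content_id, rows in per_content.items():
        has_attribute = any(preference["attribute"] in r["attributes"] for r in rows)
        has_keyword = any(kw in r["comment"].lower()
                          for r in rows for kw in preference["keywords"])
        # ratings vary between users, so compare the average against the preferred rating
        rating_ok = abs(mean(r["rating"] for r in rows) - preference["rating"]) <= rating_tolerance
        if has_attribute and has_keyword and rating_ok:
            matches.append(content_id)
    return matches

# e.g. preference = {"rating": 5, "attribute": "musical", "keywords": ["dance"]}
```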
  • the check boxes 1201 and 1202 on the screen 1200 allow the user to specify whether the criterion for creating suggestions is the preference information or content that the user has viewed in the past.
  • when the check box 1201 for suggestions based on preference information is designated, a rating, an attribute, and a keyword can additionally be designated.
  • only the rating, only the attribute, and only the keyword can be designated, or any combination may be designated.
  • the combination pattern may be limited to combinations in the generated preference information.
  • preference information corresponding to the combination of items specified on the screen 1200 is selected, and content search is performed.
  • when the check box 1202 for suggestions based on content the user has viewed in the past is designated, the user can designate any of the content that the user has evaluated in the past. In this case, other users having a high degree of commonality with the user are specified based on the designated content, and suggestion targets are determined from the content those other users have already viewed.
  • the processing at this time will be described with reference to FIG. 13. In the following description, for distinction, the user who requests a content suggestion is called the first user, and other users are called second users.
  • the CPU 200 receives from the client 101 the designation of the content serving as a suggestion reference input by the first user.
  • a screen 1200 of FIG. 12 is displayed on the display of the client 101, and it is possible to accept the specification of the content that is the reference of the suggestion input by the first user.
  • the client 101 transmits the accepted designation to the server 103.
  • the CPU 200 refers to the evaluation information table 600 and extracts the second user who has already evaluated the content specified in S1301. If there are a plurality of corresponding second users, all are extracted.
  • the CPU 200 selects one of the extracted second users. Thereafter, the CPU 200 performs the following processing for the extracted second users one at a time.
  • the commonality of content evaluated by the first user and the second user is determined.
  • the CPU 200 counts the number of contents (Ncc) evaluated in common by the first user and the second user among the contents evaluated by the selected second user. For example, if the first user has evaluated 10 contents and the selected second user has evaluated 5 contents, Ncc of the second user is 5.
  • the CPU 200 determines whether Ncc counted in S1303 is larger than a predetermined threshold (Nth_cc). If it is determined to be larger than the threshold, the degree of commonality of evaluated content is determined to be high, and the process proceeds to S1306. On the other hand, if it is determined to be equal to or less than the threshold, the process proceeds to S1311.
  • next, the degree to which the ratings match is determined for the content evaluated in common by the first user and the second user. Specifically, in S1306, the CPU 200 counts the number of contents (Nsr) for which the second user gave the same rating as the first user, among the contents determined in S1304 to have been evaluated in common by the first user and the second user. In the above example, if the ratings of the first user and the second user match for three of the five contents they evaluated in common, the Nsr of that second user is 3.
  • the CPU 200 determines whether Nsr counted in S1306 is greater than a predetermined threshold (Nth_sr). If it is determined to be greater than the threshold, the degree of rating agreement between the first user and the second user is determined to be high, and the process proceeds to S1308. On the other hand, if it is determined to be equal to or less than the threshold, the process proceeds to S1311.
  • the CPU 200 determines whether or not Nsk counted in S1308 is greater than a predetermined threshold (Nth_sk).
  • the CPU 200 identifies, from among the content evaluated by a second user for whom Ncc, Nsr, and Nsk all exceeded their thresholds in the above processing, content that has not been viewed by the first user.
  • Whether or not the content is unviewed can be determined based on the information of the viewed content 505 in the user information database 104.
  • unviewed content may be set as a suggestion target, or among unviewed content, content that is related to the content specified in S1301 may be further narrowed down and set as a suggestion target.
  • the relevance can be determined based on, for example, the attribute information of the content and the matching degree of the comment with reference to the evaluation information table 600.
  • the CPU 200 determines whether there is an unprocessed second user among the second users extracted in S1302; if there is, the process returns to S1303 and processing continues. If there is no unprocessed second user, the process ends. If a second user determined in this way to have a high degree of preference commonality can be specified, that second user's evaluations may also be reused for the remaining portion of content that the first user has evaluated only partway through, as described above in relation to the check box 814 in FIG. 8B.
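  • the Ncc / Nsr threshold flow of FIG. 13 could be sketched as follows (the keyword-based Nsk check is omitted for brevity); the data layout, the threshold values, and the function name are assumptions of the example.

```python
from typing import Dict, List, Set

def suggest_from_similar_users(first_user: str,
                               reference_content: str,                 # content designated as the reference
                               ratings: Dict[str, Dict[str, int]],     # user_id -> {content_id: rating}
                               viewed: Dict[str, Set[str]],            # user_id -> content IDs already viewed
                               nth_cc: int = 3,                        # threshold for Ncc
                               nth_sr: int = 2) -> List[str]:          # threshold for Nsr
    """Sketch: suggest unviewed content taken from second users with a similar evaluation history."""
    first = ratings[first_user]
    suggestions: Set[str] = set()

    # second users: everyone else who has evaluated the reference content
    second_users = [u for u in ratings
                    if u != first_user and reference_content in ratings[u]]

    for u in second_users:
        common = set(first) & set(ratings[u])            # content evaluated by both users
        if len(common) <= nth_cc:                        # Ncc threshold check
            continue
        same_rating = [c for c in common if first[c] == ratings[u][c]]
        if len(same_rating) <= nth_sr:                   # Nsr threshold check
            continue
        # content this similar user evaluated that the first user has not yet viewed
        suggestions.update(c for c in ratings[u] if c not in viewed[first_user])

    return sorted(suggestions)
```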
  • FIG. 14 shows an example of a digest playback screen when the content digest is played back.
  • a digest playback screen 1400 is displayed on the display of the client 101.
  • a digest display area 1401 displays a digest of content received by the client 101 from the server 103.
  • the slider 1402 is an input unit for setting the level of detail of the digest, and the length of the digest changes according to the level of detail: the higher the level of detail, the longer the digest, and the lower the level of detail, the shorter the digest. The longest and shortest lengths can be set in advance. Alternatively, "length" may be used as the setting instead of the level of detail; in that case a longer setting corresponds to a longer digest and a shorter setting to a shorter digest.
  • the rating 1403 is an area for setting a rating given to each scene of the content, which is referred to when determining the content of the digest.
  • the rating 1403 can be input in, for example, five levels.
  • a digest can be generated only from a scene whose rating is evaluated as “5” in the content.
  • a digest can be generated only from scenes whose rating is evaluated as “1” in the content.
  • the rating can be specified as a range as well as a single value. For example, it is possible to specify 4 or more, 3 or more, and 2 or less. In that case, a digest can be generated from a scene that has obtained an evaluation corresponding to that range.
  • the keyword 1404 is a keyword input area for searching for an attribute or comment given to each scene of the content, which is referred to when determining the content of the digest.
  • the user can enter any keyword here.
  • a user who likes a dance scene can input the word “dance” as a keyword, so that a scene including “dance” in a comment or attribute can be included in the digest.
  • the level of detail, rating, and keyword input can be set independently, and all can be set, or any one or more can be set. For example, “high” can be input as the degree of detail, “5” as the rating, and “dance” as the keyword.
  • Check box 1405 can be selected when the user desires to generate a digest based on his / her preference.
  • when the content to be viewed has been determined according to the flowchart of FIG. 11, user preference information used for searching for that content exists. Therefore, when the check box 1405 is selected, scenes having a corresponding evaluation can be extracted from the content based on that preference information to generate a digest.
  • if the user's preference information has been generated in advance according to FIG. 11, arbitrary items can be selected and designated from the generated preference information.
  • in step S1501, the CPU 200 receives the user setting information input from the client 101 via the screen 1400 illustrated in FIG. 14. The user can input any one of the items 1403 to 1405 or any combination of them.
  • step S1502 the CPU 200 sets the digest length according to the received user setting.
  • the digest length for the detail level "high" is the longest value set; for example, if digests are created with lengths between 30 seconds and 5 minutes, it is set to 5 minutes.
  • step S1503 the CPU 200 extracts a corresponding scene from the scenes constituting the content according to the specified rating.
  • when evaluations are received from a plurality of users, the rating of a scene may be the average value, the maximum value, or the rating given by the largest number of users. For example, if "5" is the most common rating, the rating of the scene can be set to "5" even if some users gave "2" or "1".
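  • the three aggregation options just mentioned could be written, for example, as in the short sketch below (the helper name and argument names are assumptions):

```python
from collections import Counter
from statistics import mean
from typing import List

def scene_rating(ratings: List[int], method: str = "most_common") -> float:
    """Aggregate per-user ratings for one scene (an illustrative helper, not part of the publication)."""
    if method == "average":
        return mean(ratings)
    if method == "maximum":
        return max(ratings)
    # default: the rating value given by the largest number of users
    return Counter(ratings).most_common(1)[0][0]

print(scene_rating([5, 5, 2, 1]))   # 5, even though lower ratings were also given
```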
  • the CPU 200 extracts a scene according to the specified keyword.
  • the keyword search can be performed on the comment 1006 of each scene as shown in FIG. 10 and the attributes given to each scene.
  • the CPU 200 refers to the evaluation information table 600 and acquires the evaluation information of the content for which the digest is to be generated. If there are multiple pieces of evaluation information, all of them are acquired.
  • a comment is searched for each scene in units of comment IDs. If a keyword frequently appears as a comment for a certain scene in evaluations from a plurality of users, the scene is extracted. For example, consider a case where there are 10 comments (10 comment IDs) from 10 users.
  • suppose the corresponding content is composed of 10 scenes and each of the 10 users has given a comment for each scene; that is, there are 10 comments for each scene.
  • the CPU 200 performs a keyword search on the comments of each scene in the evaluation information and calculates a keyword hit rate. If the keyword is included in the comments of all 10 users, the scene is naturally determined to be an extraction target. When the hit rate falls below a certain ratio (for example, 80%), the scene can be excluded from the extraction targets. In this way, the scenes to be included in the digest can be extracted from the plurality of scenes based on the keyword hit rate. A search is also performed on the attributes of each scene, and when the keyword is included in a scene's attributes, that scene can be extracted. Further, when the keyword is included in the attributes, the hit rate may be re-determined by multiplying the comment hit rate by a certain coefficient N (a number of 1 or more). For example, even when the comment hit rate of a scene is less than 80%, if the keyword is included in the scene's attributes, the hit rate may be multiplied by the coefficient N to determine whether it exceeds 80%.
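  • a hedged sketch of this hit-rate rule follows; the 80% cut-off comes from the example in the text, while the concrete value of the coefficient N (1.25 here) and all names are assumptions of the sketch.

```python
from typing import List

def scene_selected_by_keyword(scene_comments: List[str],    # one comment per user for this scene
                              scene_attributes: List[str],  # attributes registered for this scene
                              keyword: str,
                              cutoff: float = 0.8,          # scenes below this hit rate are excluded
                              boost: float = 1.25) -> bool: # coefficient N (>= 1) for attribute matches
    """Return True if the scene should be extracted for the digest."""
    keyword = keyword.lower()
    attribute_match = any(keyword in a.lower() for a in scene_attributes)
    if not scene_comments:
        return attribute_match

    hits = sum(keyword in c.lower() for c in scene_comments)
    hit_rate = hits / len(scene_comments)
    if attribute_match:
        # keyword also appears in the scene's attributes: re-check with the boosted rate
        hit_rate = min(1.0, hit_rate * boost)
    return hit_rate >= cutoff

# 7 of 10 comments mention "dance" (70%); the attribute match boosts this to 87.5%, so the scene passes
print(scene_selected_by_keyword(["dance"] * 7 + ["other"] * 3, ["dance"], "dance"))   # True
```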
  • the CPU 200 searches for the comment of each scene of the evaluation information based on the rating, attribute, and keyword included in the user preference information, and determines the corresponding scene as an extraction target.
  • the processing here can be performed in the same manner as the processing in S1503 and S1504.
  • the CPU 200 ranks the scenes extracted as described above. For example, the higher the rating of a scene, the higher its rank; likewise, the higher the keyword hit rate, the higher the rank.
  • the order of the extracted scenes is determined by comprehensively considering the rating and the hit rate.
  • the CPU 200 integrates the scenes extracted as described above and creates a digest. For example, the scenes extracted by the above processing are arranged in order of the scene number or randomly. As a result of the integration, if the digest length does not reach the length set in S1502, the digest templates prepared in advance are combined to match the set digest length.
  • conversely, if the digest is longer than the set length, scene lengths are shortened starting from the lower-ranked scenes according to the order determined in S1507, or those scenes are omitted altogether.
  • the digest length is adjusted to match the set length.
  • the digest need not be generated only by integrating the extracted scenes; a digest template may always be used. This makes it possible to generate a digest that suits the user's preference by inserting scenes according to that preference while giving the digest a unified overall impression.
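  • the ranking and length adjustment described above might be sketched as below; the scene objects, the ranking key, and the padding with template segments are assumptions of the example (the alternative of shortening individual scenes is omitted).

```python
from typing import List, NamedTuple

class Scene(NamedTuple):
    number: int       # position within the original content
    length: float     # seconds
    rating: float     # aggregated scene rating
    hit_rate: float   # keyword hit rate computed earlier

def assemble_digest(extracted: List[Scene], target_length: float,
                    template_length: float = 10.0) -> List[Scene]:
    """Rank the extracted scenes, then drop or pad until the digest matches the target length."""
    # higher rating first, then higher keyword hit rate
    ranked = sorted(extracted, key=lambda s: (s.rating, s.hit_rate), reverse=True)

    # too long: drop the lowest-ranked scenes
    while ranked and sum(s.length for s in ranked) > target_length:
        ranked.pop()

    # play the surviving scenes in their original order
    digest = sorted(ranked, key=lambda s: s.number)

    # too short: pad with pre-prepared template segments (modelled here as dummy scenes)
    while sum(s.length for s in digest) + template_length <= target_length:
        digest.append(Scene(number=10**9, length=template_length, rating=0.0, hit_rate=0.0))
    return digest
```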
  • a digest of content corresponding to the user's preference can be provided, so that the user's interest in the content can be increased.
  • creating a digest requires evaluations for each scene, but scenes for which a user has not entered a comment can be compensated for by using other users' comments.
  • with the suggestion function according to the present embodiment, content that suits the user's preference can be provided efficiently, and communication resources can be saved by reducing the number of communications between the client and the server. In addition, since the number of searches performed on the server can be reduced, the processing load on the server can be reduced.
  • the present invention is not limited to the above-described embodiment, and various changes and modifications can be made without departing from the spirit and scope of the present invention. Therefore, in order to make the scope of the present invention public, the following claims are attached.
  • the information processing apparatus according to the present invention can also be realized by a computer program that causes one or more computers to function as the information processing apparatus.
  • the computer program can be provided / distributed by being recorded on a computer-readable recording medium or through a telecommunication line.

Landscapes

  • Engineering & Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computer Graphics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Television Signal Processing For Recording (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

A server that provides content suggestions to a user terminal, the server comprising: storage means for storing, for each of a plurality of users, evaluation information of content viewed by the user in the past; processing means for executing processing that determines, on the basis of the evaluation information, a first item of content to be suggested to a first user among the plurality of users; and communication means for transmitting, to the user terminal of the first user, information relating to the first item of content.
PCT/JP2016/078012 2016-09-23 2016-09-23 Serveur et son procédé de commande, et programme informatique WO2018055727A1 (fr)

Priority Applications (2)

Application Number Priority Date Filing Date Title
JP2018540558A JP6501241B2 (ja) 2016-09-23 2016-09-23 サーバ及びその制御方法、並びにコンピュータプログラム
PCT/JP2016/078012 WO2018055727A1 (fr) 2016-09-23 2016-09-23 Serveur et son procédé de commande, et programme informatique

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/JP2016/078012 WO2018055727A1 (fr) 2016-09-23 2016-09-23 Serveur et son procédé de commande, et programme informatique

Publications (1)

Publication Number Publication Date
WO2018055727A1 true WO2018055727A1 (fr) 2018-03-29

Family

ID=61689508

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2016/078012 WO2018055727A1 (fr) 2016-09-23 2016-09-23 Serveur et son procédé de commande, et programme informatique

Country Status (2)

Country Link
JP (1) JP6501241B2 (fr)
WO (1) WO2018055727A1 (fr)

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2000250944A (ja) * 1998-12-28 2000-09-14 Toshiba Corp 情報提供方法、情報提供装置、情報受信装置、並びに情報記述方法

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1016991A2 (fr) * 1998-12-28 2000-07-05 Kabushiki Kaisha Toshiba Dispositif et procédé de mise a disposition d'informations, et dispositif de reception d'informations
JP2005242417A (ja) * 2004-02-24 2005-09-08 Matsushita Electric Ind Co Ltd コンテンツ視聴装置及びコンテンツ要約作成方法
US20100312906A1 (en) * 2007-05-08 2010-12-09 Koninklijke Philips Electronics N.V. Method and system for enabling generation of a summary of a data stream
JP2009064365A (ja) * 2007-09-10 2009-03-26 Sharp Corp お勧め情報提供方法
JP2011114555A (ja) * 2009-11-26 2011-06-09 Sharp Corp コンテンツ配信装置、コンテンツ視聴装置、コンテンツ配信方法およびコンテンツ視聴方法
EP2960812A1 (fr) * 2014-06-27 2015-12-30 Thomson Licensing Procédé et appareil de création d'un résumé vidéo

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
SHIROTA, Y. ET AL.: "A TV Program Generation System Using Digest Video Scenes and a Scripting Markup Language", PROCEEDINGS OF THE 34TH HAWAII INTERNATIONAL CONFERENCE ON SYSTEM SCIENCES 2001, 6 January 2001 (2001-01-06), XP010549769 *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023002608A1 (fr) * 2021-07-21 2023-01-26 株式会社ソニー・インタラクティブエンタテインメント Dispositif de distribution de vidéo, procédé de distribution de vidéo et programme de distribution de vidéo

Also Published As

Publication number Publication date
JP6501241B2 (ja) 2019-04-17
JPWO2018055727A1 (ja) 2019-06-24

Similar Documents

Publication Publication Date Title
US11076206B2 (en) Apparatus and method for manufacturing viewer-relation type video
JP5903187B1 (ja) 映像コンテンツ自動生成システム
JP2019525272A (ja) 自然言語クエリのための近似的テンプレート照合
US20100114979A1 (en) System and method for correlating similar playlists in a media sharing network
US20120066235A1 (en) Content processing device
US20130222526A1 (en) System and Method of a Remote Conference
US11775580B2 (en) Playlist preview
KR20140126556A (ko) 감성 기반 멀티미디어 재생을 위한 장치, 서버, 단말, 방법, 및 기록 매체
JP5964722B2 (ja) カラオケシステム
CN108475260A (zh) 基于评论的媒体内容项的语言识别的方法、系统和介质
JP2014082582A (ja) 視聴装置、コンテンツ提供装置、視聴プログラム、及びコンテンツ提供プログラム
JP2013025555A (ja) 情報処理装置、情報処理システム、情報処理方法、及び、プログラム
KR102197739B1 (ko) 연기자 정보 제공 방법
EP2160032A2 (fr) Appareil d'affichage de contenu et procédé d'affichage de contenu
WO2012173021A1 (fr) Dispositif de traitement d'informations, procédé de traitement d'informations et programme
WO2018055727A1 (fr) Serveur et son procédé de commande, et programme informatique
JP2012178028A (ja) アルバム作成装置、アルバム作成装置の制御方法、及びプログラム
JP6568665B2 (ja) サーバ及びその制御方法、並びにコンピュータプログラム
KR102570134B1 (ko) 숏폼 클립 생성 방법 및 시스템
WO2014103374A1 (fr) Dispositif de gestion d'informations, serveur et programme de commande
JP2007265362A (ja) コンテンツ再生リスト検索システム、コンテンツ再生リスト検索装置、及びコンテンツ再生リスト検索方法
JP2002304420A (ja) 視聴覚コンテンツ配信システム
US10467231B2 (en) Method and device for accessing a plurality of contents, corresponding terminal and computer program
KR101490507B1 (ko) 동영상 제작 방법 및 장치
JP5834514B2 (ja) 情報処理装置、情報処理システム、情報処理方法、および、プログラム

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16916795

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2018540558

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 16916795

Country of ref document: EP

Kind code of ref document: A1