WO2009145226A1 - Content management method, content automatic editing method, content management program, content automatic editing program, server, information device, and content automatic editing system - Google Patents
- Publication number
- WO2009145226A1 (PCT/JP2009/059706)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- content
- feature point
- information
- server
- scenario
- Prior art date
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/76—Television signal recording
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11B—INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
- G11B27/00—Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
- G11B27/02—Editing, e.g. varying the order of information signals recorded on, or reproduced from, record carriers
- G11B27/031—Electronic editing of digitised analogue information signals, e.g. audio or video signals
- G11B27/034—Electronic editing of digitised analogue information signals, e.g. audio or video signals on discs
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11B—INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
- G11B27/00—Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
- G11B27/10—Indexing; Addressing; Timing or synchronising; Measuring tape travel
- G11B27/102—Programmed access in sequence to addressed parts of tracks of operating record carriers
- G11B27/105—Programmed access in sequence to addressed parts of tracks of operating record carriers of operating discs
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11B—INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
- G11B27/00—Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
- G11B27/10—Indexing; Addressing; Timing or synchronising; Measuring tape travel
- G11B27/19—Indexing; Addressing; Timing or synchronising; Measuring tape travel by using information detectable on the record carrier
- G11B27/28—Indexing; Addressing; Timing or synchronising; Measuring tape travel by using information detectable on the record carrier by using information signals recorded by the same method as the main recording
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/25—Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication, learning user preferences for recommending movies
- H04N21/258—Client or end-user data management, e.g. managing client capabilities, user preferences or demographics, processing of multiple end-users preferences to derive collaborative data
- H04N21/25866—Management of end-user data
- H04N21/25883—Management of end-user data being end-user demographical data, e.g. age, family status or address
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/436—Interfacing a local distribution network, e.g. communicating with another STB or one or more peripheral devices inside the home
- H04N21/43615—Interfacing a Home Network, e.g. for connecting the client to a plurality of peripherals
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/76—Television signal recording
- H04N5/765—Interface circuits between an apparatus for recording and another apparatus
Definitions
- the present invention relates to a content management method, a content automatic editing method, a content management program, a content automatic editing program, a server, an information device, and a content automatic editing system suitable for automatically collecting and editing content scattered over a home network.
- in this specification, a moving image file, a still image file, digital video content, and the like are collectively referred to as “video content”.
- content includes not only video content but also data composed of a markup document, audio data, document data, a worksheet, or a combination thereof.
- the large amount of video content stored in such a large-capacity storage device includes many similar items, so viewing all of it is tedious and time-consuming. A plurality of video contents are therefore edited and integrated with an authoring tool or the like and combined into one video content (hereinafter referred to as “integrated video content”). Apparatuses that edit a plurality of video contents to create integrated video content are disclosed in, for example, Japanese Patent Laid-Open No. 2005-303840 (hereinafter “Patent Document 1”) and Japanese Patent Laid-Open No. 2000-125253 (hereinafter “Patent Document 2”).
- the moving image editing apparatus described in Patent Document 1 has a moving image storage unit in which a plurality of moving images are stored.
- the editing device searches for a moving image from the moving image storage unit according to the search condition input by the user, and displays the search result as a list.
- the user selects a desired moving image from the displayed list, and completes one video by arranging the selected moving images in time series.
- the moving image editing apparatus described in Patent Document 2 extracts a portion (cutout range) including a scene specified by the user from all moving image materials, and creates a composite list in which the extracted cutout ranges are listed. Then, the cutout range corresponding to the designated scene is continuously reproduced according to the created synthesis list. As described above, in Patent Document 2, a series of operations from selection of a moving image to creation of a list and editing and reproduction are automatically performed on the apparatus side.
- the present invention has been made in view of the above circumstances, and an object thereof is to provide a content management method, a content automatic editing method, a content management program, a content automatic editing program, a server, an information device, and a content automatic editing system suitable for automatically collecting and editing video content held in each information device.
- a content management method that solves the above problem is executed by a server connected via a home network to a plurality of information devices that store content evaluated for at least one feature point.
- the method includes a feature point evaluation collecting step of collecting the feature point evaluation of each content from the plurality of information devices, a feature point evaluation storing step of storing the collected feature point evaluations, a content selection step of, when content is requested from an information device, searching the stored feature point evaluations and selecting content having a feature point evaluation suitable for the request, and a content location information notifying step of notifying the information device of the location information of the selected content.
- the content management method may further include a scenario storing step of storing a plurality of scenarios each associated with at least one feature point evaluation, and a scenario selection step of, when requested, selecting from among the plurality of scenarios a scenario having a feature point evaluation suitable for the request. In this case, the content selection step selects content based on the selected scenario.
- the content selection step may be configured to select the content of each scene based on the feature point evaluation of each scene of the selected scenario.
- the content automatic editing method includes a content collection step of collecting the content from each information device based on the location information, and a content editing step of editing the collected content.
- by accessing the server (that is, the centrally managed feature point evaluations of each content) and acquiring the location information of the content, the information device can automatically collect and edit the content without any user operation.
- the content automatic editing method may further include a content reproduction step of reproducing the edited content.
- the content automatic editing method may further include a content evaluation step of evaluating the content for at least one feature point when storing the content.
- an automatic content editing method that solves the above problem uses a plurality of information devices that store content evaluated for at least one feature point and a server connected to the plurality of information devices via a home network, and automatically edits content by linking the processing of the devices and the server.
- the steps executed by the server in the content automatic editing method include the following steps.
- these include a feature point evaluation collecting step of collecting the feature point evaluation of each content from the plurality of information devices, a feature point evaluation storing step of storing the collected feature point evaluations, a content selection step of, when content is requested from an information device, searching the stored feature point evaluations and selecting content having a feature point evaluation suitable for the request, and a content location information notification step of notifying the information device of the location information of the selected content.
- the steps executed by the information device in the content automatic editing method include the following steps.
- these include a content requesting step of requesting content from the server, a content collection step of collecting the content from each information device based on the location information when the location information of content having a feature point evaluation suitable for the request is notified from the server, and a content editing step of editing the collected content.
- a content management program that solves the above-described problems is a program for causing a computer to execute each step of the content management method described above.
- a content automatic editing program that solves the above-described problem is a program for causing a computer to execute each step of the content automatic editing method described above.
- a server that solves the above problem is connected via a home network to a plurality of information devices that store content evaluated for at least one feature point, and includes feature point evaluation collecting means for collecting the feature point evaluation of each content from the information devices, feature point evaluation storing means for storing the collected feature point evaluations, content selection means for, when content is requested from an information device, searching the feature point evaluation storing means and selecting content having a feature point evaluation suitable for the request, and content location information notification means for notifying the information device of the location information of the selected content.
- the server may further include a scenario storage unit for storing a plurality of scenarios each associated with at least one feature point evaluation, and scenario selection means for, when requested by an information device, selecting from among the plurality of scenarios a scenario having a feature point evaluation suitable for the request. In this case, the content selection means selects content based on the selected scenario.
- the content selection unit may be configured to select the content of each scene based on the feature point evaluation of each scene of the selected scenario.
- an information device that solves the above problem is connected via a home network to other information devices that store content evaluated for at least one feature point and to a server that manages the content stored in each information device.
- the information device includes content request means for requesting content from the server, content collection means for collecting the content from each information device based on the location information when the location information of content having a feature point evaluation suitable for the request is notified from the server, and content editing means for editing the collected content.
- the information device configured as described above may further include a content playback unit that plays back the edited content.
- the information device may further include a content storage unit that stores content, and a content evaluation unit that evaluates the content with respect to at least one feature point when the content is stored in the content storage unit.
- the content processed in the server or information device includes, for example, video content.
- video content is an example, and other examples include markup documents, audio data, document data, worksheets, or data composed of a combination thereof.
- an automatic content editing system that solves the above problem includes the plurality of information devices described above and a server connected to the plurality of information devices via the home network, the server being configured to select the content of each scene based on feature point evaluation.
- the content editing means of the information device provided in the system is configured to perform editing processing so that the content selected for each scene is played back in the order determined by the scenario.
- according to the content management method, content automatic editing method, content management program, content automatic editing program, server, information device, and content automatic editing system of the present invention, video content held in each information device is automatically collected and edited, so the user's operation and work load are reduced.
- FIG. 1 is a network configuration diagram for explaining the present embodiment. As shown in FIG. 1, the network according to the present embodiment is constructed by a home LAN (Local Area Network) 1 and an external network 2.
- in the home LAN 1, a gateway server 10 and a plurality of information devices (HDD recorder 21, TV (Television) 22, notebook PC 23, home server 100, etc.) are arranged.
- the gateway server 10 has switching and routing functions, interconnects the information devices in the home LAN 1, and can communicate with terminals arranged on the external network 2 or on other networks not shown.
- information devices other than the HDD recorder 21, TV 22, and notebook PC 23, including information home appliances such as a microwave oven and a refrigerator, are also connected to the home LAN 1.
- information home appliances are also referred to as information devices for convenience.
- All information devices connected to the home LAN 1 are installed with middleware and client software compliant with common technical specifications related to the home network, and have a configuration suitable for home LAN connection.
- in the present embodiment, each information device complies with DLNA (Digital Living Network Alliance), a common technical specification.
- the HDD recorder 21 functions as a DMS (Digital Media Server), the TV 22 functions as a DMP (Digital Media Player), and the notebook PC 23 functions as both a DMS and a DMP.
- the home server 100 is a desktop PC, and functions as a DMS and a DMP like the notebook PC 23.
- the home LAN may also be configured from information equipment conforming to other technical specifications such as HAVi (Home Audio/Video Interoperability) or Jini.
- the home server 100 can collect and store contents stored in each information device in the home LAN 1 including the HDD recorder 21.
- content stored in an information device (for example, mobile phone 24) arranged on the external network 2 can be acquired and stored via the external network 2 and the gateway server 10. Accumulation of content in the home server 100 is performed according to settings of the home server 100 and each information device, user operation, or the like.
- FIG. 2 is a block diagram showing the configuration of the home server 100.
- each element constituting the home server 100 is connected to the CPU 120 via the system bus 110. After the home server 100 is powered on, the CPU 120 accesses the necessary hardware via the system bus 110.
- a CPU 120 accesses a ROM (Read-Only Memory) 130.
- the CPU 120 loads a boot loader stored in the ROM 130 into a RAM (Random Access Memory) 140 and starts it.
- the CPU 120 that activated the boot loader then loads an OS (Operating System) stored in the HDD 150 into the RAM 140 and activates it.
- each element performs various processes by cooperating as necessary under resource and process management by the OS.
- various resident programs stored in the HDD 150 are resident in the RAM 140.
- these resident programs, which relate to the features of the present invention, include a content collection program 151, an integrated video content creation program 152, a feature point learning program 153, a scenario learning program 154, a family profile learning program 155, and the like. They are described below.
- FIG. 3 is a flowchart showing content collection processing executed by the content collection program 151.
- when resident in the RAM 140, the content collection program 151 transmits, via the network interface 160, a request message requesting video content (for example, in MPEG-2 (Moving Picture Experts Group phase 2) format) to each DMS in the same segment, that is, in the home LAN 1, individually or by multicast (step 1; hereinafter, “step” is abbreviated as “S” in this specification and the drawings).
- the information device that has received the request message refers to its own content list and the like to check whether the video content has been updated or added since the previous request message was received. If there is an updated or added video content, the video content is uploaded to the home server 100.
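The update check performed by each information device can be sketched as follows. This is a minimal illustration; the content list structure, field names, and timestamp comparison are assumptions, since the text says only that the device checks whether content was updated or added since the previous request message:

```python
from dataclasses import dataclass, field

@dataclass
class ContentList:
    """Hypothetical per-device content list: path -> last-modified time."""
    entries: dict = field(default_factory=dict)  # {path: mtime}
    last_request_time: float = 0.0

    def updated_since_last_request(self, now: float):
        """Return paths modified or added after the previous request message,
        then remember when this request was served."""
        changed = [path for path, mtime in self.entries.items()
                   if mtime > self.last_request_time]
        self.last_request_time = now
        return changed
```

Only the paths returned here would be uploaded to the home server 100; unchanged content is skipped.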
- the content collection program 151 stores the video content received from each terminal in the content database 156 in the HDD 150, and simultaneously analyzes the video content to evaluate various feature points (S2, S3).
- the feature points are elements characterizing the video content, for example, “grandma” (subject person), “two people” (number of subjects), “laughter” (subject's facial expression), “eating” (subject's action), “yukata” (subject's clothing), “outdoor” (shooting location), “evening” (shooting time), “weekend” (shooting date), “looking down” (positional and angular relationship between subject and photographer), “birthday” (family event), and “zoom in” (shooting pattern).
- the feature points include not only the characteristics of video and audio recorded in the video content, but also information such as the photographer of the video content and the shooting date and time.
- feature points of images and sounds in video content are extracted and digitized by a known recognition algorithm such as motion recognition, facial expression recognition, object recognition, and speech recognition. Information regarding various feature points of each video content digitized in this way is called a video feature parameter.
- here, a family profile (described later) stored in the family profile database 159 is referred to, and a subject (for example, the “younger brother”) or a photographer (for example, by the “father's” voice) is identified.
- the shooting date and time and the photographer can also be acquired from the time stamp and properties of the video content file.
- by using the family profile, the age of the subject at the time of shooting (that is, the shooting time of the video content) can be estimated from the subject's height and face.
- the content collection program 151 generates a list of video feature parameters (hereinafter referred to as “meta information”), which is a result of analyzing the video content for each feature point in this way (S4). Specifically, using a predetermined function corresponding to each feature point, a video feature parameter indicating evaluation of video content related to the feature point is calculated.
- FIG. 4 shows an example of meta information generated by the process of S4. As shown in FIG. 4, the meta information is composed of video feature parameter groups corresponding to feature points such as “father”, “mother”, “sister”, “brother”, and so on. In the present embodiment, various video feature parameters are expressed by numerical values of 0 to 1, for example.
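The conversion of S4 can be sketched as follows. This is illustrative only: the actual functions per feature point are not disclosed, so a simple per-feature conversion coefficient with clamping to the 0-to-1 range stands in for them:

```python
def to_meta_information(raw_scores: dict, coefficients: dict) -> dict:
    """Convert raw feature-point scores into video feature parameters in [0, 1].

    raw_scores:   hypothetical outputs of the recognition algorithms
    coefficients: per-feature conversion coefficients (updated later by the
                  feature point learning program)
    """
    meta = {}
    for feature, score in raw_scores.items():
        value = score * coefficients.get(feature, 1.0)
        meta[feature] = max(0.0, min(1.0, value))  # clamp to the 0-1 range
    return meta
```

The resulting dictionary plays the role of one row of the meta information in FIG. 4.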
- the meta information in FIG. 4 indicates that the video content is an audioless video having, for example, an older sister and a younger brother as main subjects and includes many zoomed-in scenes.
- the generated meta information is stored in the meta information database 157 in the HDD 150 (S5).
- the video content held by each information device in the same segment is automatically collected by the content collection program 151, and feature point analysis is performed by the content collection program 151 to generate meta information.
- video content held by information devices in other segments (for example, the mobile phone 24) is not automatically collected.
- Video contents held by information devices in other segments are uploaded and stored in the home server 100 only when the information devices are manually operated.
- the home server 100 and each information device may be set and changed so that the video content held by the information device in the same segment is uploaded and stored in the home server 100 only when a manual operation is performed.
- FIG. 5 is a flowchart showing integrated video content creation processing executed by the integrated video content creation program 152.
- the integrated video content creation program 152 creates integrated video content based on the scenario.
- FIGS. 6A and 6B show examples of scenarios (scenarios 1581 and 1582) stored in the scenario database 158 in the HDD 150, respectively.
- Each scenario is composed of one or more scene definitions S.
- the scene definition S includes a plurality of types of scene feature parameters that define scene features (video feature parameters required for video content to be assigned to the scene), allocation time parameters for each scene, and the like.
- the scene feature parameter is expressed by a value of 0 to 1, like the video feature parameter; see, for example, the scene definition S defining scene 1 of the scenario 1581 in FIG. 6A.
- a scenario feature parameter similar to the scene feature parameter is also associated with the scenario itself.
- the scenario feature parameter is calculated based on, for example, scene feature parameters of each scene definition constituting the scenario. Alternatively, it may be a parameter expressing the flow of the entire scenario given independently of the scene feature parameter of each scene definition.
- each scenario and scene definition is stored in advance in the scenario database 158 as a template, for example. Scenarios and scene definitions can be created by the user with a dedicated editor, and those distributed by video equipment manufacturers and the like can be downloaded from a server on the Internet, for example.
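A scenario and its scene definitions might be modeled as follows. This is a sketch: the field names are assumptions, and the derivation of scenario feature parameters as a per-feature mean of the scene feature parameters is only one of the possibilities the text leaves open:

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class SceneDefinition:
    # scene feature parameters: required video feature parameters, each in [0, 1]
    features: Dict[str, float]
    allocated_seconds: float  # allocation time parameter for the scene

@dataclass
class Scenario:
    name: str
    # scenario feature parameters characterizing the scenario as a whole
    features: Dict[str, float]
    scenes: List[SceneDefinition]

    @classmethod
    def from_scenes(cls, name: str, scenes: List[SceneDefinition]) -> "Scenario":
        """Derive scenario feature parameters as the per-feature mean of the
        scene feature parameters (one possible derivation)."""
        keys = {k for scene in scenes for k in scene.features}
        feats = {k: sum(s.features.get(k, 0.0) for s in scenes) / len(scenes)
                 for k in keys}
        return cls(name, feats, scenes)
```

A scenario feature parameter given independently of the scenes, as the text also allows, would simply be passed to the constructor directly.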
- the user operates a client such as the TV 22 or the notebook PC 23 connected to the home LAN 1 and inputs the theme of the video content to be viewed, the characters, the total playback time, and the like.
- the information input at this time is transmitted to the home server 100.
- the CPU 120 starts execution of the integrated video content creation program 152.
- the input information is referred to as “content creation instruction information”.
- the integrated video content creation program 152 first accesses the scenario database 158, refers to each scenario, and selects a scenario suitable for the content creation instruction information (S11).
- when the content creation instruction information specifies, for example, “sister” and “birthday”, a scenario whose scenario feature parameters for “sister” and “birthday” are both greater than or equal to a predetermined value (for example, 0.6) is selected (here, scenario 1581).
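The threshold-based selection of S11 can be illustrated as follows. The 0.6 cut-off mirrors the example above; the dictionary layout is an assumption:

```python
def select_scenario(scenarios, instruction_keywords, threshold=0.6):
    """Pick the first scenario whose scenario feature parameters meet the
    threshold for every keyword in the content creation instruction
    information; return None if no scenario qualifies."""
    for scenario in scenarios:
        if all(scenario["features"].get(k, 0.0) >= threshold
               for k in instruction_keywords):
            return scenario
    return None
```

Ranking the qualifying scenarios instead of taking the first would be an equally plausible reading of the text.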
- the integrated video content creation program 152 subsequently accesses the meta information database 157 and searches for meta information suitable for each scene of the selected scenario (S12). For example, for the scene 1 of the scenario 1581, the meta information of the video content captured in 2002 when the video feature parameters of “sister” and “birthday” are equal to or larger than a predetermined value is searched.
- the integrated video content creation program 152 accesses the content database 156 and reads the video content corresponding to the searched meta information for each scene of the selected scenario (S13). For each scene, for example, video content corresponding to the meta information having the highest search order is read.
- the search order of meta information is determined according to the degree of coincidence between the scene definition S (specifically, the scene feature parameter) and the meta information.
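The degree-of-coincidence ranking can be sketched as follows. The text does not fix the matching formula, so one minus the mean absolute difference between the scene feature parameters and the video feature parameters is used here purely as an illustration:

```python
def degree_of_coincidence(scene_features: dict, meta_info: dict) -> float:
    """Score how well one piece of meta information matches a scene
    definition: 1.0 means a perfect match on every scene feature parameter."""
    diffs = [abs(v - meta_info.get(k, 0.0)) for k, v in scene_features.items()]
    return 1.0 - sum(diffs) / len(diffs)

def rank_meta_information(scene_features: dict, meta_infos: list) -> list:
    """Order meta information by descending degree of coincidence, giving the
    search order used when reading video content for each scene (S12, S13)."""
    return sorted(meta_infos,
                  key=lambda m: degree_of_coincidence(scene_features, m),
                  reverse=True)
```

The video content corresponding to the first entry of the ranked list is the one read in S13.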
- the integrated video content creation program 152 clips and arranges the read video content for each scene, and generates a scene video (S14).
- corresponding scene videos are generated for each of the scenes 1 to 20 of the scenario 1581.
- a video for 25 seconds in video content shot on the sister's birthday in 2002 is clipped as a scene video.
- the starting point of clipping on the time axis of the video content is set at random, for example. Since video with a long shooting time tends to become redundant in the latter half, the first half of the video is clipped with priority.
- the integrated video content creation program 152 creates a series of video content, that is, an integrated video content by arranging the generated scene videos in order of scenes 1 to 20 and connecting adjacent scene videos (S15). A visual effect may be enhanced by using a switching effect or the like for connection between scene images.
- the created integrated video content is transmitted to the client (that is, the transmission source of the content creation instruction information).
- the client decodes and reproduces the received integrated video content using a video codec. Note that the user can arbitrarily save the created integrated video content in the HDD 150.
- when the integrated video content creation program 152 is executed, the integrated video content is created automatically, so the video creation work itself is not burdensome; whether the result matches the user's intention, however, depends on the feature point evaluation (meta information) and the scenario selection.
- the user can edit, for example, each scenario in the scenario database 158 so that the integrated video content is created as intended.
- considerable trial and error is required to improve the content of the integrated video content by such scenario editing work. That is, the solution of the above problem by scenario editing work is not effective because the editing work is complicated and difficult.
- the home server 100 is therefore equipped with learning programs such as the feature point learning program 153, the scenario learning program 154, and the family profile learning program 155, in order to improve the content of the integrated video content while eliminating the complexity of the video creation work.
- FIG. 7 is a flowchart showing the feature point learning process executed by the feature point learning program 153. As shown in FIG. 7, the feature point learning program 153 stays in the RAM 140 and then monitors the generation of meta information by the content collection program 151 and the update of predetermined shared information of the client (S21 to S23).
- the feature point learning program 153 detects that the meta information is generated by the content collection program 151 (S21, S22: YES)
- the feature point learning program 153 then updates, using for example an algorithm applying the TF-IDF (Term Frequency-Inverse Document Frequency) method, the conversion coefficients of the function used in the process of S4 in FIG. 3, that is, the coefficients for converting the feature points of the video content into the video feature parameter group (S24).
- specifically, the tendency of the video content stored in the content database 156 is analyzed based on all the meta information stored in the meta information database 157. For example, consider a case where the analysis indicates that there are many smile images.
- in this case, the feature point learning program 153 updates the conversion coefficients so that the weight of the “laughter” video feature parameter is reduced; the “laughter” feature is intentionally diluted so that other features stand out.
- the content collection program 151 uses the updated conversion coefficient to generate meta information that more accurately represents the characteristics of the video content.
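An IDF-style coefficient update in the spirit of S24 can be sketched as follows. The text says only that an algorithm “applying” the TF-IDF method is used; the smoothed inverse-frequency formula and the 0.6 activity threshold here are assumptions:

```python
import math

def update_conversion_coefficients(all_meta: list, active_threshold=0.6) -> dict:
    """Re-weight features by how rarely they appear strongly in the collection.

    Features that score above the threshold in many stored videos (such as
    'laughter' in a smile-heavy collection) receive smaller coefficients, so
    common traits are diluted and rarer traits stand out.
    """
    n = len(all_meta)
    features = {k for meta in all_meta for k in meta}
    coeffs = {}
    for f in features:
        df = sum(1 for meta in all_meta if meta.get(f, 0.0) >= active_threshold)
        coeffs[f] = math.log((1 + n) / (1 + df))  # smoothed inverse frequency
    return coeffs
```

The returned coefficients would be fed back into the conversion of S4 the next time meta information is generated.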
- the integrated video content creation program 152 selects an appropriate video content according to the content creation instruction information and creates an integrated video content.
- the feature point learning program 153 returns to the process of S21 after the conversion coefficient is updated.
- depending on the analysis result, the single “laughter” video feature parameter may be subdivided into a plurality of video feature parameters such as “laughter”, “big laughter”, and “slow laughter”. In this case, video content can be further distinguished and characterized according to the degree of laughter.
- the playback history information includes information indicating, for example, which scene in the integrated video content has been operated such as playback, skip, repeat, fast forward, rewind, and stop.
- the feature point learning program 153 periodically accesses the shared folder of each client and monitors whether or not the reproduction history information in the shared folder has been updated (S21 to S23). If the playback history information in the shared folder has been updated (S21, S22: NO, S23: YES), a weighting value (described later) held in the meta information database 157 is updated using the playback history information (S25). For example, consider a case where the integrated video content created using the scenario 1581 is reproduced by the TV 22. According to the reproduction history information at this time, scenes 1 to 16 are repeated and scenes 17 to 20 are not reproduced.
- the feature point learning program 153 raises the weight values of all video feature parameters (or scene feature parameters) whose value exceeds a certain level (for example, 0.6) in the meta information of the repeatedly reproduced video content of scenes 1 to 16, and lowers the weight values of all video feature parameters (or scene feature parameters) exceeding that level in the meta information of the video content of scenes 17 to 20.
- the HDD 150 holds a list of weight values (not shown) corresponding to each feature point. When the integrated video content creation program 152 searches for meta information corresponding to each scene, it refers to this list of weight values and retrieves the meta information that matches the value obtained by adding the weight value to the scene feature parameter included in the scene definition S of the scenario.
- the feature point learning program 153 updates the weight value of each feature point with reference to the list of weight values when the reproduction history information is updated. Alternatively, the correlation between the number of repeats and each video feature parameter may be calculated, and a weight value corresponding to the correlation coefficient may be assigned. The assigned weight value is increased or decreased, for example, linearly, exponentially, or logarithmically according to the number of repeats.
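The linear, exponential, and logarithmic weighting curves can be illustrated as follows; the function name, base, and step size are assumptions for the sketch, not values from the patent:

```python
import math

def repeat_weight(repeats, mode="linear", base=1.0, step=0.1):
    """Weight value derived from the number of repeats of a scene."""
    if mode == "linear":
        return base + step * repeats
    if mode == "exponential":
        return base * (1.0 + step) ** repeats
    if mode == "logarithmic":
        return base + step * math.log(1 + repeats)
    raise ValueError("unknown mode: " + mode)

# compare the three curves for a scene repeated five times
for mode in ("linear", "exponential", "logarithmic"):
    print(mode, round(repeat_weight(5, mode), 3))
```

All three curves start at the base weight for zero repeats; the exponential curve rewards heavy repetition most strongly, while the logarithmic curve saturates.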
- as a result, the integrated video content creation program 152 comes to select the video content that the user particularly wants to view when creating the integrated video content, even when there are a plurality of similar video contents in the content database 156.
- the feature point learning program 153 returns to the process of S21 after the weighting process.
- the feature point learning program 153 may periodically acquire the reproduction history information in the shared folder of each client as an alternative process of the monitoring process of the reproduction history information in the shared folder.
- the meta information in the meta information database 157 is updated based on all the reproduction history information. For example, a higher weighting value is assigned to a video feature parameter of meta information of video content having a new playback date and time.
- the integrated video content creation program 152 selects the video content suitable for the user's recent preferences and creates the integrated video content.
- the above is an example of the update process of the video feature parameter by the feature point learning program 153, and various other update processes are assumed.
- a video feature parameter update process using a family profile is assumed.
- the family profile here is information about a family held by some information devices (such as the mobile phone 24) on the home LAN 1 and the external network 2.
- video content recorded by each family member is stored in the HDD recorder 21 in association with recording categories such as “father”, “mother”, “sister”, and “brother”.
- information such as viewing history of each family member and program reservation is also recorded.
- browsing history of web pages, photos, music, and the like are stored in the document folder of each family member of the notebook PC 23.
- the family profile learning program 155 collects family profiles scattered in each information device and constructs a family profile database 159 in the HDD 150. Further, the family profile in the family profile database 159 is updated based on the reproduction history information or the like, or the family profile is added. As an example, the family profile is updated or added by estimating the family preference based on the content of the reproduced scene, the reproduction frequency, and the like.
- operator information is also input in a GUI (Graphical User Interface) for inputting content creation instruction information. Then, the operator information is associated with the reproduction history information generated when the reproduction of the integrated video content corresponding to the content creation instruction information is finished. By using the operator information associated with the reproduction history information, the reproduction history information of each family member is classified from all the reproduction history information.
- the preference of each family member is estimated (for example, by the factor analysis described in the next paragraph), and the family profile of each family member can be updated or added.
- the family profile learning program 155 also performs a family behavior pattern analysis based on the family profile by a data mining method or the like, and accumulates the analysis results in the family profile database 159 as a family profile.
- family characteristics can be analyzed using multivariate analysis such as factor analysis, and a new family profile can be generated.
- an n-dimensional virtual space having each of n types of video feature parameters as coordinate axes is defined, and video content is distributed in the n-dimensional virtual space based on meta information.
- the distribution of the video content in the n-dimensional virtual space is mapped to a lower-order m-dimensional virtual space (here, a three-dimensional virtual space defined with each principal component as a coordinate axis).
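The principal-component mapping described above can be sketched with a plain power iteration; the three-parameter toy data and all names are illustrative, and for brevity only the first principal axis is extracted (a real implementation would keep the top m components):

```python
def principal_component(data, iters=200):
    """Top principal component of `data` (one row per content item)
    via plain power iteration on the covariance matrix."""
    n, d = len(data), len(data[0])
    means = [sum(row[j] for row in data) / n for j in range(d)]
    centered = [[row[j] - means[j] for j in range(d)] for row in data]
    cov = [[sum(c[a] * c[b] for c in centered) / n
            for b in range(d)] for a in range(d)]
    v = [1.0] * d
    for _ in range(iters):
        w = [sum(cov[a][b] * v[b] for b in range(d)) for a in range(d)]
        norm = sum(x * x for x in w) ** 0.5 or 1.0
        v = [x / norm for x in w]
    return means, v

def project(data, means, v):
    """Coordinate of each item along the principal axis."""
    return [sum((row[j] - means[j]) * v[j] for j in range(len(v)))
            for row in data]

# toy meta information: three items, three video feature parameters;
# all variance lies along the first two parameters
data = [[0.1, 0.1, 0.5], [0.5, 0.5, 0.5], [0.9, 0.9, 0.5]]
means, v = principal_component(data)
coords = project(data, means, v)
```

Because the third parameter never varies, the principal axis ignores it, which is exactly the dimensionality reduction the paragraph describes.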
- the distribution of video content in the three-dimensional virtual space expresses the characteristics that the family potentially has.
- based on the family profile expressed by such a distribution, it is possible to update conversion coefficients for feature points, weight values corresponding to each feature point, and the like. It is also possible to select a scenario suitable for the family profile expressed by the distribution, or to download one from a server on the Internet.
- a new family profile can be generated using a technique such as cluster analysis. For example, n types of video feature parameters are classified into two clusters: a parameter cluster that is frequently updated such as weighting according to the playback state of the video content, and a parameter cluster that is not frequently updated. Based on the classification, family characteristics are extracted to generate a family profile. For example, family features can be extracted by focusing on the former cluster.
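A minimal one-dimensional, two-cluster split of the parameters by update frequency might look like the sketch below. The largest-gap rule stands in for a full cluster analysis, and the update counts are invented for illustration:

```python
def split_by_update_frequency(update_counts):
    """Split parameters into two clusters at the largest gap in their
    sorted update counts (a crude one-dimensional two-cluster analysis)."""
    items = sorted(update_counts.items(), key=lambda kv: kv[1])
    gaps = [(items[i + 1][1] - items[i][1], i) for i in range(len(items) - 1)]
    _, cut = max(gaps)  # index just before the biggest jump
    rarely = {name for name, _ in items[:cut + 1]}
    often = {name for name, _ in items[cut + 1:]}
    return often, rarely

# invented update counts per video feature parameter
counts = {"laughter": 42, "children": 37, "fireworks": 2, "snow": 1}
often, rarely = split_by_update_frequency(counts)
print(often)   # the frequently updated parameter cluster
```

As the paragraph notes, family features could then be extracted by focusing on the frequently updated cluster.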
- the family profile stored in the family profile database 159 can be used for various processes.
- these family profiles include, for example, the height and voice of each family member, the color and pattern of favorite clothes, favorite sports, age, and the like.
- the reference data for recognition used in the recognition algorithm based on the family profile can be updated to improve the accuracy of recognition algorithms such as motion recognition, object recognition, and voice recognition for each family member.
- the integrated video content creation program 152 selects a more appropriate video content in response to a user instruction, and creates the integrated video content.
- the integrated video content creation program 152 can select video content by directly using the family profile stored in the family profile database 159. For example, consider a case where “Father” is included in the content creation instruction information. In this case, the integrated video content creation program 152 accesses the family profile database 159 and searches for a family profile related to father's preference and the like. Then, based on the retrieved family profile, video content related to father's preference or the like is selected to create integrated video content.
- the family profile can also be used for weighting the video feature parameters. That is, the feature point learning program 153 can update the conversion coefficient in the same manner as the process of S22 of FIG. 7 using the family profile, and can update the meta information in the meta information database 157. As an example, in a family with many children, the conversion coefficient or weight value is updated so that the weight of the “children” video feature parameter is lightened, diluting the “children” feature to make other features stand out.
- the family profile can also be used to edit each scenario in the scenario database 158.
- the scenario learning program 154 edits each scenario using the family profile.
- for example, when the family profile stored in the family profile database 159 contains many children's photos, the existing scenario is edited so that the children's scenes become longer, or a new scenario with the children as the main subject is created.
- family profiles can also be shared over the network. Therefore, the meta information can be updated effectively using the family profiles scattered in each information device, without collecting them in the home server 100.
- the home server 100 transmits the family profile of the entire family or of each family member, collected, added, and updated in this way, to the information providing server 200 (see FIG. 1) on the external network 2 via the gateway server 10.
- based on the received family profile, the information providing server 200 transmits advertisement information, video content, and the like suited to the preferences of the family or of each family member to the home server 100.
- the advertisement information is displayed on-screen over the integrated video content, for example.
- the scenario learning program 154 edits scenarios and updates scenario feature parameters, scene feature parameters, and the like based on playback history information, similarly to the feature point learning program 153. For example, consider a case where the integrated video content created using the scenario 1582 is reproduced. According to the reproduction history information at this time, scenes 4, 8, 12, 16, and 20 are repeated, and the other scenes are skipped. In this case, the videos of the entire family appear to be what the user wanted to watch. Therefore, the scenario learning program 154 edits the scenario 1582 so as, for example, to lengthen the clipping time of scenes 4, 8, 12, 16, and 20 and shorten the clipping time of the other scenes. Also, for example, the scene feature parameter weight values of scenes 4, 8, 12, 16, and 20 are increased, and those of the other scenes are decreased.
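The scenario edit described here, lengthening repeated scenes and shortening skipped ones, can be sketched as follows. The dict-based scenario representation and the growth/shrink factors are assumptions for the sketch:

```python
def adjust_clipping(scenario, history, grow=1.5, shrink=0.5):
    """Lengthen the clipping time of repeated scenes, shorten skipped ones.
    scenario: {scene_id: clipping time in seconds}
    history:  {scene_id: "repeat" | "skip" | "play"} from reproduction history."""
    factors = {"repeat": grow, "skip": shrink, "play": 1.0}
    return {scene: seconds * factors[history.get(scene, "play")]
            for scene, seconds in scenario.items()}

scenario = {4: 10.0, 8: 10.0, 5: 10.0}
history = {4: "repeat", 8: "repeat", 5: "skip"}
edited = adjust_clipping(scenario, history)
print(edited)  # scenes 4 and 8 grow to 15.0 s, scene 5 shrinks to 5.0 s
```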
- the scenario learning program 154 may edit the scenario so that the clipping time becomes longer as the scene has a larger number of repeats.
- the clipping time is increased or decreased according to the number of repeats, for example, linearly, exponentially, or logarithmically.
- the scenario learning program 154 can change the weighting value of each scenario feature parameter based on the number of times the scenario is selected in S11 of FIG. For example, when a scenario with a large number of selections is used as a high-quality scenario, the weight value of each scenario feature parameter is changed so that the scenario is further selected.
- the feature point learning program 153 and the scenario learning program 154 feed back various parameters to appropriate values based on the reproduction history information and the like.
- Such feedback processing is performed not only by the feature point learning program 153 but also by the integrated video content creation program 152.
- the integrated video content creation program 152 updates the threshold used for the scenario selection process in S11 of FIG. 5 based on all the meta information stored in the meta information database 157. That is, the video feature parameters of each piece of meta information, as feature points of each video content, are clustered into, for example, two clusters by a clustering method such as the K-means method, the center of each cluster is calculated, and an intermediate value of these centers is set as the threshold. In this way, an optimum threshold is set according to the tendency of the video content stored in the content database 156.
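The two-cluster threshold computation (K-means with k = 2 on scalar feature values, threshold at the midpoint of the two centers) can be sketched like this; the sample values are invented:

```python
def two_means_threshold(values, iters=20):
    """One-dimensional K-means with k=2; returns the midpoint of the
    two cluster centers, which serves as the selection threshold."""
    c_lo, c_hi = min(values), max(values)
    for _ in range(iters):
        lo = [v for v in values if abs(v - c_lo) <= abs(v - c_hi)]
        hi = [v for v in values if abs(v - c_lo) > abs(v - c_hi)]
        c_lo = sum(lo) / len(lo) if lo else c_lo
        c_hi = sum(hi) / len(hi) if hi else c_hi
    return (c_lo + c_hi) / 2

# invented scalar video feature parameter values from the meta information
values = [0.1, 0.15, 0.2, 0.8, 0.85, 0.9]
print(two_means_threshold(values))  # midpoint of the two cluster centers
```

If the stored content drifts toward higher or lower feature values, the two centers and hence the threshold drift with it, which is the adaptive behavior described in the paragraph.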
- the integrated video content creation program 152 and the scenario learning program 154 may be linked to execute the following feedback processing. That is, when the clipping target is, for example, a laughing scene, the integrated video content creation program 152 clips a scene spanning from n seconds before the start of the laughter to the laughter itself. The integrated video content creation program 152 sets this “n seconds” randomly for each clipping.
- the scenario learning program 154 analyzes the reproduction history information of the laughing scenes, calculates the n′ seconds determined to be optimal based on the analysis result, and passes it to the integrated video content creation program 152. Thereafter, the integrated video content creation program 152 clips a scene spanning from n′ seconds before the start of the laughter to the laughter itself.
- n seconds are set as a time of 2 seconds or more and less than 10 seconds.
- n seconds may be set randomly within the range of 2 seconds or more and less than 10 seconds, or may be set to a time reflecting the user's intention to some extent by user operation.
- for example, the probability that a first time (for example, a time of 2 seconds or more and less than 3 seconds) is set as n seconds can be set to 30%, the probability that a second time (for example, a time of 3 seconds or more and less than 5 seconds) is set can be set to 40%, and the probability that a third time (for example, a time of 5 seconds or more and less than 10 seconds) is set can be set to 30%.
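The three-band weighted random setting of n seconds (30% / 40% / 30%) can be sketched with the standard library; the function name is illustrative:

```python
import random

def draw_n_seconds(rng=random):
    """Draw the clipping lead time n: 30% from [2, 3) s,
    40% from [3, 5) s, 30% from [5, 10) s."""
    lo, hi = rng.choices([(2, 3), (3, 5), (5, 10)],
                         weights=[0.3, 0.4, 0.3], k=1)[0]
    return rng.uniform(lo, hi)

random.seed(0)  # reproducible sketch
samples = [draw_n_seconds() for _ in range(1000)]
print(round(min(samples), 2), round(max(samples), 2))
```

Over many clippings roughly 40% of the drawn values fall in the middle band, matching the stated probabilities.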
- n seconds are set at random.
- the time zone immediately before the occurrence of an event (here, laughter)
- specific content (for example, a family photo)
- the clipping time and period in this case may also be learned by various learning programs, and a clipping time and period suitable for the user may be further set based on the learning result.
- the home server 100 automatically collects the video content stored in information devices such as the HDD recorder 21 and then creates the integrated video content.
- in the second embodiment, another configuration will be described in which the home server does not have such an automatic video content collection function, or does not perform the automatic collection processing.
- the home server and each information device perform a linkage process different from that in the first embodiment, so that the integrated video content can be automatically edited and created without imposing any operating burden on the user.
- the network configuration of the second embodiment is the same as that shown in FIG.
- the same or similar components as those of the first embodiment are denoted by the same or similar reference numerals, and description thereof is omitted.
- each information device such as the HDD recorder 21 has the meta information database 157, and adds meta information to the video content instead of the home server and stores it in the meta information database 157. That is, each information device records the video content, etc., and at the same time performs the same processing as S3 to S5 in FIG. 3, that is, the video content feature point analysis (S3), the generation of meta information based on the analyzed feature points (S4), The meta information is stored in the meta information database 157 (S5). Since the video content is not collected by the home server even after the processing of S3 to S5, it is scattered on the network.
- FIG. 8 is a block diagram showing the configuration of the home server 100z installed in the home LAN 1 of the second embodiment.
- the HDD 150 of the home server 100z stores a meta information collection program 151z, an integrated video content creation program 152z, a feature point learning program 153, a scenario learning program 154, a family profile learning program 155, a meta information database 157z, a scenario database 158, and a family profile database 159.
- FIG. 9 is a flowchart showing meta information collection processing executed by the meta information collection program 151z.
- the meta information collection program 151z periodically accesses the meta information database 157 of each information device after being resident in the RAM 140 (S31).
- the meta information collection program 151z accumulates the meta information of each information device collected in this way in the meta information database 157z (S34).
- when the meta information collection program 151z accumulates meta information, content identification information for identifying the video content (for example, the video content name or an ID assigned to the video content) and location information of the video content (for example, the MAC address or another unique ID of the information device, the URL (Uniform Resource Locator) of the video content, or the unique ID of the removable media when the video content is recorded on removable media) are added to the meta information.
- FIG. 10 is a sequence diagram showing processing for creating integrated video content.
- when content creation instruction information is input to one information device (the notebook PC 23 in this case), the processing for creating integrated video content shown in FIG. 10 is started (S41).
- the notebook PC 23 transmits the input content creation instruction information to the home server 100z (S42).
- the CPU 120 starts execution of the integrated video content creation program 152z.
- the integrated video content creation program 152z performs processing similar to S11 to S12 in FIG. 5; that is, it selects a scenario suitable for the content creation instruction information, accesses the meta information database 157z in which meta information is centrally managed, and searches for meta information suitable for each scene of the selected scenario (S43).
- a response message, that is, the retrieved meta information together with the selected scenario, is returned to the notebook PC 23 (S44).
- the notebook PC 23 determines the access destinations with reference to the location information of the video content included in the received meta information (S45). Next, a request message including the content identification information or URL is transmitted to each information device holding the video content corresponding to the determined access destination, that is, to the meta information (S46). Depending on the search result of the home server 100z, the URL of video content held by the notebook PC 23 itself may be included among the access destinations.
- each information device that has received the request message from the notebook PC 23 searches the video content it holds for the video content specified by the content identification information or URL in the request message (S47), and returns it to the notebook PC 23 as a response (S48). In this way, the video content of each scene necessary for the integrated video content is collected in the notebook PC 23. Using the collected video content and the selected scenario received from the home server 100z, the notebook PC 23 performs the same processing as S14 to S15 in FIG. 5 to create the integrated video content (S49). The created integrated video content is decoded and reproduced by the video codec of the notebook PC 23 (S50).
- the integrated video content can be automatically edited and created without collecting the video content held by each information device in the home server 100z, that is, one storage device. For this reason, there is no operation or work burden associated with the collection of video content on the user.
- meta information updating, scenario editing, family profile updating, and the like may be performed by various learning programs as in the first embodiment.
- the resident program according to the feature of the present invention may be scattered in each information device in the home LAN 1, or all information devices may have the resident program.
- Various databases may be scattered in each information device in the home LAN 1.
- the home server 100 itself may record and store video content and reproduce it. This means that the home server 100 functions as a DMS and a DMP, so that the present invention can be realized by the home server 100 alone.
- the content to be processed includes not only video content but also any type of content included in the content definition described above.
- new mixed content combining a plurality of formats can be created.
- the timing for collecting the playback history information is not limited to the regular timing.
- each information device can access the home server 100 simultaneously with generating the reproduction history information and the like, and transfer the reproduction history information and the like to the home server 100 in real time.
- the device that collects reproduction history information and the like is not limited to the home server 100.
- any information device in the home LAN 1 may collect reproduction history information and the like held by each information device regularly or in real time and transfer them to the home server 100.
- each information device may generate meta information at the same time as it records video content and the like.
- the home server 100 collects meta information together with the video content.
Abstract
A method is disclosed in which a server on a home network manages the content of respective information devices, the information devices storing content evaluated with respect to at least one feature point. The content management method comprises a feature point evaluation collecting step of collecting the feature point evaluations of the respective contents from the information devices; a feature point evaluation storing step of storing the collected feature point evaluations; a content selecting step of, when content is requested by an information device, searching the stored feature point evaluations and selecting content whose feature point evaluations suit the request; and a content location information providing step of communicating location information of the selected content to the information device.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2008-142223 | 2008-05-30 | ||
JP2008142223A JP2009290651A (ja) | 2008-05-30 | 2008-05-30 | Content management method, automatic content editing method, content management program, automatic content editing program, server, information device, and automatic content editing system |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2009145226A1 true WO2009145226A1 (fr) | 2009-12-03 |
Family
ID=41377097
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2009/059706 WO2009145226A1 (fr) | 2009-05-27 | Content management method, automatic content editing method, content management program, automatic content editing program, server, information device, and automatic content editing system |
Country Status (2)
Country | Link |
---|---|
JP (1) | JP2009290651A (fr) |
WO (1) | WO2009145226A1 (fr) |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8275375B2 (en) * | 2010-03-25 | 2012-09-25 | Jong Hyup Lee | Data integration for wireless network systems |
US8863183B2 (en) | 2010-04-09 | 2014-10-14 | Cyber Ai Entertainment Inc. | Server system for real-time moving image collection, recognition, classification, processing, and delivery |
JP5664120B2 (ja) * | 2010-10-25 | 2015-02-04 | ソニー株式会社 | 編集装置、編集方法、プログラム、および記録媒体 |
WO2022014294A1 (fr) * | 2020-07-15 | 2022-01-20 | Sony Group Corporation | Information processing device, information processing method, and program |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2007105265A1 (fr) * | 2006-03-10 | 2007-09-20 | Fujitsu Limited | Reproduction device, reproduction device control method, program, and computer-readable recording medium |
WO2007139105A1 (fr) * | 2006-05-31 | 2007-12-06 | Pioneer Corporation | Broadcast receiver, digest creation method, and digest creation program |
-
2008
- 2008-05-30 JP JP2008142223A patent/JP2009290651A/ja active Pending
-
2009
- 2009-05-27 WO PCT/JP2009/059706 patent/WO2009145226A1/fr active Application Filing
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2007105265A1 (fr) * | 2006-03-10 | 2007-09-20 | Fujitsu Limited | Reproduction device, reproduction device control method, program, and computer-readable recording medium |
WO2007139105A1 (fr) * | 2006-05-31 | 2007-12-06 | Pioneer Corporation | Broadcast receiver, digest creation method, and digest creation program |
Also Published As
Publication number | Publication date |
---|---|
JP2009290651A (ja) | 2009-12-10 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP4944919B2 (ja) | Automatic selection of media files | |
US8260828B2 (en) | Organizing content using a dynamic profile | |
JP4859943B2 (ja) | Management of media files using metadata injection | |
US8180826B2 (en) | Media sharing and authoring on the web | |
EP2325766B1 (fr) | Method and apparatus for managing content service in a network based on content usage history | |
US8782170B2 (en) | Information processing apparatus, information processing method, and computer program | |
US20140223309A1 (en) | Method and system for browsing, searching and sharing of personal video by a non-parametric approach | |
JP2009512008A (ja) | Apparatus for handling data items that can be rendered to a user | |
WO2004068353A1 (fr) | Information processing device, information processing method, and computer program | |
JP2004235739A (ja) | Information processing apparatus, information processing method, and computer program | |
JP2009171558A (ja) | Image processing apparatus, image management server apparatus, control methods therefor, and programs | |
JPWO2007000949A1 (ja) | Content reproduction method and apparatus with reproduction start position control | |
US20080229207A1 (en) | Content Presentation System | |
US7519612B2 (en) | Information processing system, information processing method, and computer program used therewith | |
WO2004068354A1 (fr) | Information processing device, information processing method, and computer program | |
WO2009145257A1 (fr) | Method, program, system, and server for automatic content reproduction | |
WO2009145226A1 (fr) | Content management method, automatic content editing method, content management program, automatic content editing program, server, information device, and automatic content editing system | |
US8521844B2 (en) | Information processing apparatus and method and program | |
KR101377737B1 (ko) | Generating storage profiles for portable storage devices connected to a network |
JP5043711B2 (ja) | Video evaluation apparatus and method | |
WO2014065165A1 (fr) | Information processing device, method, program, and system | |
US20080065695A1 (en) | System and method for nondeterministic media playback selected from a plurality of distributed media libraries | |
WO2014103374A1 (fr) | Information management device, server, and control program | |
CN101471115B | Imaging apparatus and imaging method | |
KR102492022B1 | Content management method, apparatus, and system for a multi-channel network
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 09754734 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 09754734 Country of ref document: EP Kind code of ref document: A1 |