WO2009145226A1 - Content management method, automatic content editing method, content management program, automatic content editing program, server, information device, and automatic content editing system


Info

Publication number
WO2009145226A1
WO2009145226A1 (PCT/JP2009/059706)
Authority
WO
WIPO (PCT)
Prior art keywords
content
feature point
information
server
scenario
Prior art date
Application number
PCT/JP2009/059706
Other languages
French (fr)
Japanese (ja)
Inventor
Toshihiko Yamagami (俊彦 山上)
Original Assignee
Access Co., Ltd. (株式会社Access)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Application filed by Access Co., Ltd. (株式会社Access)
Publication of WO2009145226A1 publication Critical patent/WO2009145226A1/en

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 5/00 Details of television systems
    • H04N 5/76 Television signal recording
    • G PHYSICS
    • G11 INFORMATION STORAGE
    • G11B INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B 27/00 Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
    • G11B 27/02 Editing, e.g. varying the order of information signals recorded on, or reproduced from, record carriers
    • G11B 27/031 Electronic editing of digitised analogue information signals, e.g. audio or video signals
    • G11B 27/034 Electronic editing of digitised analogue information signals, e.g. audio or video signals on discs
    • G PHYSICS
    • G11 INFORMATION STORAGE
    • G11B INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B 27/00 Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
    • G11B 27/10 Indexing; Addressing; Timing or synchronising; Measuring tape travel
    • G11B 27/102 Programmed access in sequence to addressed parts of tracks of operating record carriers
    • G11B 27/105 Programmed access in sequence to addressed parts of tracks of operating record carriers of operating discs
    • G PHYSICS
    • G11 INFORMATION STORAGE
    • G11B INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B 27/00 Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
    • G11B 27/10 Indexing; Addressing; Timing or synchronising; Measuring tape travel
    • G11B 27/19 Indexing; Addressing; Timing or synchronising; Measuring tape travel by using information detectable on the record carrier
    • G11B 27/28 Indexing; Addressing; Timing or synchronising; Measuring tape travel by using information detectable on the record carrier by using information signals recorded by the same method as the main recording
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N 21/25 Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication, learning user preferences for recommending movies
    • H04N 21/258 Client or end-user data management, e.g. managing client capabilities, user preferences or demographics, processing of multiple end-users preferences to derive collaborative data
    • H04N 21/25866 Management of end-user data
    • H04N 21/25883 Management of end-user data being end-user demographical data, e.g. age, family status or address
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N 21/436 Interfacing a local distribution network, e.g. communicating with another STB or one or more peripheral devices inside the home
    • H04N 21/43615 Interfacing a Home Network, e.g. for connecting the client to a plurality of peripherals
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 5/00 Details of television systems
    • H04N 5/76 Television signal recording
    • H04N 5/765 Interface circuits between an apparatus for recording and another apparatus

Definitions

  • The present invention relates to a content management method, an automatic content editing method, a content management program, an automatic content editing program, a server, an information device, and an automatic content editing system suitable for automatically collecting and editing content scattered over a home network.
  • In this specification, moving image files, still image files, digital video content, and the like are collectively referred to as “video content”.
  • “Content” includes not only video content but also markup documents, audio data, document data, worksheets, or data composed of a combination thereof.
  • A large amount of video content stored in such a large-capacity storage device includes many similar items, so viewing all of it is tedious and time-consuming. For this reason, a plurality of video contents are edited and integrated using an authoring tool or the like and combined into one video content (hereinafter referred to as “integrated video content”). Examples of apparatuses that edit a plurality of video contents to create integrated video content are disclosed in Japanese Patent Laid-Open No. 2005-303840 (hereinafter “Patent Document 1”) and Japanese Patent Laid-Open No. 2000-125253 (hereinafter “Patent Document 2”).
  • the moving image editing apparatus described in Patent Document 1 has a moving image storage unit in which a plurality of moving images are stored.
  • the editing device searches for a moving image from the moving image storage unit according to the search condition input by the user, and displays the search result as a list.
  • the user selects a desired moving image from the displayed list, and completes one video by arranging the selected moving images in time series.
  • the moving image editing apparatus described in Patent Document 2 extracts a portion (cutout range) including a scene specified by the user from all moving image materials, and creates a composite list in which the extracted cutout ranges are listed. Then, the cutout range corresponding to the designated scene is continuously reproduced according to the created synthesis list. As described above, in Patent Document 2, a series of operations from selection of a moving image to creation of a list and editing and reproduction are automatically performed on the apparatus side.
  • The present invention has been made in view of the above circumstances, and its object is to provide a content management method, an automatic content editing method, a content management program, an automatic content editing program, a server, an information device, and an automatic content editing system suitable for automatically collecting and editing the video content held in each information device.
  • A content management method that solves the above-described problem is executed by a server connected via a home network to a plurality of information devices that store content evaluated for at least one feature point. The method comprises: a feature point evaluation collecting step of collecting the feature point evaluation of each content from the plurality of information devices; a feature point evaluation storing step of storing the collected feature point evaluations; a content selection step of, when content is requested from an information device, searching the stored feature point evaluations and selecting content having a feature point evaluation suited to the request; and a content location information notifying step of notifying the information device of the location information of the selected content.
  • The content management method may further include a scenario storing step of storing a plurality of scenarios each associated with at least one feature point evaluation, and a scenario selection step of, when requested, selecting from among the plurality of scenarios a scenario having a feature point evaluation suited to the request. In this case, in the content selection step, content is selected based on the selected scenario.
  • The content selection step may be configured to select the content of each scene based on the feature point evaluation of each scene of the selected scenario.
  • Further included are a content collection step of collecting the content from each information device based on the location information, and a content editing step of editing the collected content.
  • By accessing the server (that is, the centrally managed feature point evaluations of each content) and acquiring the location information of the content, the information device can automatically collect and edit the content without user operation.
  • the content automatic editing method may further include a content reproduction step of reproducing the edited content.
  • the content automatic editing method may further include a content evaluation step of evaluating the content for at least one feature point when storing the content.
  • An automatic content editing method that solves the above problem relates to a method of automatically editing content through coordinated processing between a plurality of information devices that store content evaluated for at least one feature point and a server connected to the plurality of information devices via a home network.
  • the steps executed by the server in the content automatic editing method include the following steps.
  • These steps include: a feature point evaluation collecting step of collecting the feature point evaluations of each content from the plurality of information devices; a feature point evaluation storing step of storing the collected feature point evaluations; a content selection step of, when content is requested from an information device, searching the stored feature point evaluations and selecting content having a feature point evaluation suited to the request; and a content location information notification step of notifying the information device of the location information of the selected content.
  • the steps executed by the information device in the content automatic editing method include the following steps.
  • These steps include: a content requesting step of requesting content from the server; a content collection step of collecting the content from each information device based on the location information when location information of content having a feature point evaluation suited to the request is notified from the server; and a content editing step of editing the collected content.
  • a content management program that solves the above-described problems is a program for causing a computer to execute each step of the content management method described above.
  • An automatic content editing program that solves the above-described problem is a program for causing a computer to execute each step of the automatic content editing method described above.
  • A server that solves the above problem is connected via a home network to a plurality of information devices that store content evaluated for at least one feature point, and comprises: feature point evaluation collecting means for collecting the feature point evaluation of each content from the plurality of information devices; feature point evaluation storing means for storing the collected feature point evaluations; content selection means for, when content is requested from an information device, searching the feature point evaluation storing means and selecting content having a feature point evaluation suited to the request; and content location information notification means for notifying the information device of the location information of the selected content.
  • The server may further include scenario storage means for storing a plurality of scenarios each associated with at least one feature point evaluation, and scenario selection means for, when requested by an information device, selecting from among the plurality of scenarios a scenario having a feature point evaluation suited to the request. In this case, the content selection means selects content based on the selected scenario.
  • The content selection means may be configured to select the content of each scene based on the feature point evaluation of each scene of the selected scenario.
  • An information device that solves the above-described problem is connected via a home network to other information devices that store content evaluated for at least one feature point, and to a server that manages the content stored in each information device. The information device comprises: content request means for requesting content from the server; content collection means for collecting the content from each information device based on the location information when location information of content having a feature point evaluation suited to the request is notified from the server; and content editing means for editing the collected content.
  • the information device configured as described above may further include a content playback unit that plays back the edited content.
  • the information device may further include a content storage unit that stores content, and a content evaluation unit that evaluates the content with respect to at least one feature point when the content is stored in the content storage unit.
  • the content processed in the server or information device includes, for example, video content.
  • Video content is one example; other examples include markup documents, audio data, document data, worksheets, or data composed of a combination thereof.
  • An automatic content editing system that solves the above problem comprises the plurality of information devices described above and a server connected to the plurality of information devices via a home network and configured to select the content of each scene based on feature point evaluation.
  • the content editing means of the information device provided in the system is configured to perform editing processing so that the content selected for each scene is played back in the order determined by the scenario.
  • According to the content management method, automatic content editing method, content management program, automatic content editing program, server, information device, and automatic content editing system of the present invention, the video content held in each information device is automatically collected and edited, reducing the user's operational and editing burden.
  • FIG. 1 is a network configuration diagram for explaining the present embodiment. As shown in FIG. 1, the network according to the present embodiment is constructed by a home LAN (Local Area Network) 1 and an external network 2.
  • In the home LAN 1, a gateway server 10 and a plurality of information devices (HDD recorder 21, TV (television) 22, notebook PC 23, home server 100, etc.) are arranged.
  • The gateway server 10 has switching and routing functions, interconnects the information devices in the home LAN 1, and can communicate with terminals arranged on the external network 2 or on other networks not shown.
  • Information devices other than the HDD recorder 21, TV 22, and notebook PC 23 are also connected to the home LAN 1, including information home appliances such as a microwave oven and a refrigerator. In this specification, such information home appliances are also referred to as information devices for convenience.
  • All information devices connected to the home LAN 1 are installed with middleware and client software compliant with common technical specifications related to the home network, and have a configuration suitable for home LAN connection.
  • In the present embodiment, each information device complies with DLNA (Digital Living Network Alliance), a common set of technical specifications. The HDD recorder 21 functions as a DMS (Digital Media Server), the TV 22 as a DMP (Digital Media Player), and the notebook PC 23 as both a DMS and a DMP. The home server 100 is a desktop PC and, like the notebook PC 23, functions as both a DMS and a DMP.
  • The home LAN may instead be configured with information devices that conform to other technical specifications such as HAVi (Home Audio/Video Interoperability) or Jini.
  • the home server 100 can collect and store contents stored in each information device in the home LAN 1 including the HDD recorder 21.
  • Content stored in an information device arranged on the external network 2 (for example, the mobile phone 24) can also be acquired and stored via the external network 2 and the gateway server 10. Accumulation of content in the home server 100 is performed according to the settings of the home server 100 and each information device, user operation, or the like.
  • FIG. 2 is a block diagram showing the configuration of the home server 100.
Each element constituting the home server 100 is interconnected with the CPU 120 via the system bus 110. After the home server 100 is powered on, the CPU 120 accesses the necessary hardware via the system bus 110.
  • First, the CPU 120 accesses the ROM (Read-Only Memory) 130.
  • the CPU 120 loads a boot loader stored in the ROM 130 into a RAM (Random Access Memory) 140 and starts it.
  • the CPU 120 that activated the boot loader then loads an OS (Operating System) stored in the HDD 150 into the RAM 140 and activates it.
  • each element performs various processes by cooperating as necessary under resource and process management by the OS.
  • various resident programs stored in the HDD 150 are resident in the RAM 140.
  • Such resident programs include, in accordance with the features of the present invention, a content collection program 151, an integrated video content creation program 152, a feature point learning program 153, a scenario learning program 154, a family profile learning program 155, and the like. These resident programs are described below.
  • FIG. 3 is a flowchart showing content collection processing executed by the content collection program 151.
  • After becoming resident in the RAM 140, the content collection program 151 transmits, via the network interface 160, a request message requesting video content (for example, in MPEG-2 (Moving Picture Experts Group phase 2) format) to each DMS in the same segment, that is, in the home LAN 1, individually (or by multicast) (step 1; hereinafter, “step” is abbreviated as “S” in this specification and the drawings).
  • the information device that has received the request message refers to its own content list and the like to check whether the video content has been updated or added since the previous request message was received. If there is an updated or added video content, the video content is uploaded to the home server 100.
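  • The update check described above can be sketched in Python roughly as follows; the data structures and field names are illustrative assumptions, since the patent specifies no concrete format for a device's content list:

```python
from dataclasses import dataclass, field

@dataclass
class ContentEntry:
    name: str
    updated_at: int  # e.g. a Unix timestamp taken from the file's time stamp

@dataclass
class DeviceContentList:
    """Hypothetical per-device content list consulted on a request message."""
    entries: list = field(default_factory=list)
    last_reported_at: int = 0  # time of the previous request message

    def updated_since_last_request(self):
        """Video content updated or added since the previous request message
        (the items the device would upload to the home server 100)."""
        return [e for e in self.entries if e.updated_at > self.last_reported_at]

    def mark_reported(self, now):
        self.last_reported_at = now
```

A device holding such a list would upload the result of `updated_since_last_request()` and then call `mark_reported()` once the upload succeeds, so unchanged content is never re-sent.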
  • the content collection program 151 stores the video content received from each terminal in the content database 156 in the HDD 150, and simultaneously analyzes the video content to evaluate various feature points (S2, S3).
  • The feature points are elements characterizing video content, such as “grandma” (subject person), “two people” (number of subjects), “laughter” (subject's facial expression), “eating” (subject's motion), “yukata” (subject's clothing), “outdoor” (shooting location), “evening” (shooting time), “weekend” (shooting date), “looking down” (positional and angular relationship between subject and photographer), “birthday” (family event), and “zoom in” (shooting pattern).
  • the feature points include not only the characteristics of video and audio recorded in the video content, but also information such as the photographer of the video content and the shooting date and time.
  • Feature points of the images and sounds in video content are extracted and quantified by known recognition algorithms such as motion recognition, facial expression recognition, object recognition, and speech recognition. The information on the various feature points of each video content quantified in this way is called a video feature parameter.
  • In addition, a family profile (described later) stored in the family profile database 159 is referred to in order to identify a subject (for example, “younger brother”) or the photographer (for example, by the “father's” voice).
  • the shooting date and time and the photographer can also be acquired from the time stamp and properties of the video content file.
  • Using the family profile, it is also possible to estimate the age of a subject at the time of shooting (that is, the shooting time of the video content) from the subject's height and face.
  • the content collection program 151 generates a list of video feature parameters (hereinafter referred to as “meta information”), which is a result of analyzing the video content for each feature point in this way (S4). Specifically, using a predetermined function corresponding to each feature point, a video feature parameter indicating evaluation of video content related to the feature point is calculated.
  • FIG. 4 shows an example of meta information generated by the process of S4. As shown in FIG. 4, the meta information is composed of video feature parameter groups corresponding to feature points such as “father”, “mother”, “sister”, “brother”, and so on. In the present embodiment, various video feature parameters are expressed by numerical values of 0 to 1, for example.
  • the meta information in FIG. 4 indicates that the video content is an audioless video having, for example, an older sister and a younger brother as main subjects and includes many zoomed-in scenes.
  • the generated meta information is stored in the meta information database 157 in the HDD 150 (S5).
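  • Steps S2 to S5 (evaluate feature points, generate meta information, store it) might be sketched as follows. The conversion rule here, a per-feature coefficient with clamping to [0, 1], is an assumption; the patent only says that a predetermined function corresponding to each feature point is used:

```python
# Hypothetical sketch of S2-S5. The coefficient values and the clamped
# linear conversion are illustrative assumptions, not taken from the patent.

def make_meta_information(raw_scores, coefficients):
    """Convert raw recognition scores into video feature parameters in [0, 1].

    raw_scores   -- dict mapping a feature point (e.g. "laughter") to the raw
                    output of a recognition algorithm
    coefficients -- dict of per-feature conversion coefficients (cf. S4)
    """
    meta = {}
    for feature, score in raw_scores.items():
        c = coefficients.get(feature, 1.0)
        meta[feature] = max(0.0, min(1.0, score * c))  # clamp to [0, 1]
    return meta

meta_information_database = {}  # content id -> meta information (cf. S5)

def store_meta_information(content_id, meta):
    meta_information_database[content_id] = meta
```

With this shape, the feature point learning program only has to adjust `coefficients` to change how future content is characterized.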
  • the video content held by each information device in the same segment is automatically collected by the content collection program 151, and feature point analysis is performed by the content collection program 151 to generate meta information.
  • video content held by information devices in other segments (for example, the mobile phone 24) is not automatically collected.
  • Video contents held by information devices in other segments are uploaded and stored in the home server 100 only when the information devices are manually operated.
  • the home server 100 and each information device may be set and changed so that the video content held by the information device in the same segment is uploaded and stored in the home server 100 only when a manual operation is performed.
  • FIG. 5 is a flowchart showing integrated video content creation processing executed by the integrated video content creation program 152.
  • the integrated video content creation program 152 creates integrated video content based on the scenario.
  • FIGS. 6A and 6B show examples of scenarios (scenarios 1581 and 1582) stored in the scenario database 158 in the HDD 150, respectively.
  • Each scenario is composed of one or more scene definitions S.
  • the scene definition S includes a plurality of types of scene feature parameters that define scene features (video feature parameters required for video content to be assigned to the scene), allocation time parameters for each scene, and the like.
  • The scene feature parameters are expressed by values of 0 to 1, like the video feature parameters (see, for example, the scene definition S defining scene 1 of the scenario 1581 in FIG. 6A).
  • a scenario feature parameter similar to the scene feature parameter is also associated with the scenario itself.
  • the scenario feature parameter is calculated based on, for example, scene feature parameters of each scene definition constituting the scenario. Alternatively, it may be a parameter expressing the flow of the entire scenario given independently of the scene feature parameter of each scene definition.
  • Each scenario and scene definition is stored in advance in the scenario database 158 as a template, for example. Scenarios and scene definitions can be created independently by the user using a dedicated editor, for example, and those distributed by video equipment manufacturers and the like can be downloaded from a server on the Internet.
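  • A minimal sketch of the scenario and scene-definition structures described above, with one possible way to derive the scenario feature parameters from the scene feature parameters (the patent leaves the derivation open, noting they may also be given independently); all field names are illustrative:

```python
from dataclasses import dataclass, field

@dataclass
class SceneDefinition:
    # scene feature parameters: video feature parameters required of the
    # content to be assigned to this scene, each in [0, 1]
    scene_features: dict
    allocation_time: float  # time allotted to this scene, in seconds

@dataclass
class Scenario:
    name: str
    # scenario feature parameters associated with the scenario itself
    scenario_features: dict = field(default_factory=dict)
    scenes: list = field(default_factory=list)

    def derive_scenario_features(self):
        """One possible derivation: average each feature over all scenes."""
        totals = {}
        for scene in self.scenes:
            for feature, value in scene.scene_features.items():
                totals[feature] = totals.get(feature, 0.0) + value
        n = len(self.scenes) or 1
        return {feature: v / n for feature, v in totals.items()}
```

Averaging is only one choice; a hand-authored `scenario_features` dict expressing the flow of the whole scenario would work equally well with the selection step below.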
  • the user operates a client such as the TV 22 or the notebook PC 23 connected to the home LAN 1 and inputs the theme of the video content to be viewed, the characters, the total playback time, and the like.
  • the information input at this time is transmitted to the home server 100.
  • the CPU 120 starts execution of the integrated video content creation program 152.
  • the input information is referred to as “content creation instruction information”.
  • the integrated video content creation program 152 first accesses the scenario database 158, refers to each scenario, and selects a scenario suitable for the content creation instruction information (S11).
  • If the content creation instruction information specifies, for example, “sister” and “birthday”, a scenario whose scenario feature parameters for both “sister” and “birthday” are greater than or equal to a predetermined value (e.g., 0.6) is selected (here, scenario 1581).
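  • Scenario selection in S11 can be sketched as a threshold test over the requested feature points; the 0.6 threshold follows the example above, while the tie-breaking rule (first qualifying scenario wins) is an assumption:

```python
# Hypothetical sketch of S11: select a scenario whose scenario feature
# parameters meet or exceed a threshold for every requested feature point.

def select_scenario(scenarios, requested_features, threshold=0.6):
    """Return the first scenario whose parameters for all requested feature
    points are >= threshold, or None if no scenario qualifies."""
    for scenario in scenarios:
        features = scenario["features"]
        if all(features.get(f, 0.0) >= threshold for f in requested_features):
            return scenario
    return None
```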
  • The integrated video content creation program 152 subsequently accesses the meta information database 157 and searches for meta information suited to each scene of the selected scenario (S12). For scene 1 of the scenario 1581, for example, it searches for meta information of video content shot in 2002 whose video feature parameters for “sister” and “birthday” are equal to or greater than a predetermined value.
  • the integrated video content creation program 152 accesses the content database 156 and reads the video content corresponding to the searched meta information for each scene of the selected scenario (S13). For each scene, for example, video content corresponding to the meta information having the highest search order is read.
  • the search order of meta information is determined according to the degree of coincidence between the scene definition S (specifically, the scene feature parameter) and the meta information.
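  • The degree of coincidence between a scene definition and meta information is not given a formula in the text; one plausible sketch ranks contents by per-feature agreement:

```python
# Hypothetical sketch of the S12-S13 ranking. The score (sum of per-feature
# agreement, higher when the meta information tracks the scene feature
# parameters) is one plausible choice, not the patent's formula.

def coincidence(scene_features, meta):
    """Degree of coincidence between a scene definition and meta information."""
    return sum(1.0 - abs(required - meta.get(feature, 0.0))
               for feature, required in scene_features.items())

def rank_contents(scene_features, meta_db):
    """Return content ids ordered best match first (the search order)."""
    return sorted(meta_db,
                  key=lambda cid: coincidence(scene_features, meta_db[cid]),
                  reverse=True)
```

For each scene, the content corresponding to the first id in this ordering would be read from the content database, matching the "highest search order" rule above.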
  • the integrated video content creation program 152 clips and arranges the read video content for each scene, and generates a scene video (S14).
  • corresponding scene videos are generated for each of the scenes 1 to 20 of the scenario 1581.
  • For example, a 25-second portion of video content shot on the sister's birthday in 2002 is clipped as a scene video.
  • The starting point of clipping on the time axis of the video content is set at random, for example. Also, since video with a long shooting time tends to be redundant in its latter half, video in the first half is clipped with priority.
  • the integrated video content creation program 152 creates a series of video content, that is, an integrated video content by arranging the generated scene videos in order of scenes 1 to 20 and connecting adjacent scene videos (S15). A visual effect may be enhanced by using a switching effect or the like for connection between scene images.
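  • Steps S14 and S15 (clip each selected content to its scene's allotted time, then concatenate the scene videos in order) can be sketched as follows; restricting the random start point to the first half of the source models the "first half has priority" remark above, and times in seconds are an assumption:

```python
import random

# Hypothetical sketch of S14-S15. Clips are represented as (content id,
# start, end) in seconds; real editing would operate on the video streams.

def clip_range(duration, allotted, rng=random):
    """Pick a clip start at random, no later than the content's midpoint."""
    latest_start = max(0.0, min(duration - allotted, duration / 2))
    start = rng.uniform(0.0, latest_start)
    return start, start + min(allotted, duration)

def integrate(scene_clips):
    """Concatenate (content_id, start, end) clips into one ordered timeline."""
    timeline, t = [], 0.0
    for cid, start, end in scene_clips:
        length = end - start
        timeline.append((cid, t, t + length))
        t += length
    return timeline
```

Switching effects between adjacent clips would be applied at the timeline boundaries that `integrate` produces.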
  • the created integrated video content is transmitted to the client (that is, the transmission source of the content creation instruction information).
  • the client decodes and reproduces the received integrated video content using a video codec. Note that the user can arbitrarily save the created integrated video content in the HDD 150.
  • When the integrated video content creation program 152 is executed, the integrated video content is created automatically, so the video creation work itself involves no complexity; however, whether the resulting video content matches the user's intention depends on the feature point evaluation (meta information) and the scenario selection.
  • the user can edit, for example, each scenario in the scenario database 158 so that the integrated video content is created as intended.
  • considerable trial and error is required to improve the content of the integrated video content by such scenario editing work. That is, the solution of the above problem by scenario editing work is not effective because the editing work is complicated and difficult.
  • Therefore, the home server 100 implements learning programs, namely the feature point learning program 153, the scenario learning program 154, and the family profile learning program 155, in order to improve the content of the integrated video content while eliminating the complexity of the video creation work.
  • FIG. 7 is a flowchart showing the feature point learning process executed by the feature point learning program 153. As shown in FIG. 7, the feature point learning program 153 stays in the RAM 140 and then monitors the generation of meta information by the content collection program 151 and the update of predetermined shared information of the client (S21 to S23).
  • When the feature point learning program 153 detects that meta information has been generated by the content collection program 151 (S21, S22: YES), it updates the conversion coefficients of the function used in the process of S4 in FIG. 3, that is, the coefficients for converting the feature points of video content into the video feature parameter group, using, for example, an algorithm applying the TF-IDF (Term Frequency-Inverse Document Frequency) method (S24).
  • Specifically, the tendency of the video content stored in the content database 156 is analyzed based on all the meta information stored in the meta information database 157. For example, consider a case where the analysis indicates that there are many smiling images.
  • In this case, the feature point learning program 153 updates the conversion coefficients so that the weight of the “laughter” video feature parameter is reduced, intentionally diluting the “laughter” feature to make other features stand out.
  • the content collection program 151 uses the updated conversion coefficient to generate meta information that more accurately represents the characteristics of the video content.
  • the integrated video content creation program 152 selects an appropriate video content according to the content creation instruction information and creates an integrated video content.
  • the feature point learning program 153 returns to the process of S21 after the conversion coefficient is updated.
  • A single “laughter” video feature parameter may also be subdivided into a plurality of video feature parameters such as “laughter”, “big laughter”, and “slow laughter”. In this case, the video content can be further distinguished and characterized according to the degree of laughter.
  • the playback history information includes information indicating, for example, which scene in the integrated video content has been operated such as playback, skip, repeat, fast forward, rewind, and stop.
  • The feature point learning program 153 periodically accesses the shared folder of each client and monitors whether the reproduction history information in the shared folder has been updated (S21 to S23). If the reproduction history information in a shared folder has been updated (S21, S22: NO, S23: YES), a weighting value (described later) held in the meta information database 157 is updated using the reproduction history information (S25). For example, consider a case where the integrated video content created using the scenario 1581 is reproduced by the TV 22, and according to the reproduction history information at this time, scenes 1 to 16 were repeated while scenes 17 to 20 were not reproduced.
  • In this case, the feature point learning program 153 raises the weight values of all video feature parameters (or scene feature parameters) having a value higher than a certain level (for example, 0.6) in the meta information of the repeatedly reproduced video content of scenes 1 to 16, and lowers the weight values of all video feature parameters (or scene feature parameters) higher than that level in the meta information of the video content of scenes 17 to 20.
  • The HDD 150 holds a list of weight values (not shown) corresponding to each feature point. When the integrated video content creation program 152 searches for meta information corresponding to each scene, it refers to this list and retrieves the meta information that matches the value obtained by adding the weighting value to the scene feature parameter included in the scene definition S of the scenario.
  • The feature point learning program 153 updates the weight value of each feature point with reference to this list of weight values whenever the reproduction history information is updated. The correlation between the number of repeats and each video feature parameter may also be calculated, and a weight value corresponding to the correlation coefficient assigned. The assigned weight value is increased or decreased, for example, linearly, exponentially, or logarithmically according to the number of repeats.
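A minimal sketch of the playback-history weighting described above, under assumed data structures (dictionaries of feature parameters and repeat counts); the logarithmic increment is one of the scalings the text mentions, and every name here is hypothetical:

```python
import math

def update_weights(weights, scene_meta, repeats, skipped,
                   level=0.6, step=0.1):
    """Hypothetical sketch of S25: raise the weight value of every
    feature parameter above `level` in repeated scenes, and lower it
    for skipped scenes. The increment grows logarithmically with the
    repeat count.

    weights    -- {feature: weight value} (the list held in HDD 150)
    scene_meta -- {scene_id: {feature: value}} meta information
    repeats    -- {scene_id: repeat count}
    skipped    -- iterable of scene_ids that were not reproduced
    """
    for scene, count in repeats.items():
        for feat, val in scene_meta[scene].items():
            if val >= level:
                weights[feat] = weights.get(feat, 0.0) + step * math.log1p(count)
    for scene in skipped:
        for feat, val in scene_meta[scene].items():
            if val >= level:
                weights[feat] = weights.get(feat, 0.0) - step
    return weights
```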
  • As a result, the integrated video content creation program 152 comes to select the video content that the user particularly wants to view when creating the integrated video content, even when the content database 156 contains a plurality of similar video contents.
  • the feature point learning program 153 returns to the process of S21 after the weighting process.
  • As an alternative to monitoring the shared folders, the feature point learning program 153 may periodically acquire the reproduction history information in the shared folder of each client.
  • In this case, the meta information in the meta information database 157 is updated based on all the collected reproduction history information. For example, a higher weighting value is assigned to the video feature parameters of meta information belonging to video content with a more recent playback date and time.
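The recency weighting mentioned here could be sketched, for example, as an exponential decay on the age of the last playback; the half-life, field names, and function are assumptions for illustration only:

```python
from datetime import datetime

def recency_boost(meta, now=None, half_life_days=30.0):
    """Hypothetical sketch: scale the video feature parameters of a
    piece of meta information by a factor that decays exponentially
    with the age of its last playback, so recently played content is
    weighted higher."""
    now = now or datetime.now()
    age_days = (now - meta["last_played"]).days
    boost = 0.5 ** (age_days / half_life_days)  # 1.0 when just played
    return {f: v * (1.0 + boost) for f, v in meta["features"].items()}
```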
  • the integrated video content creation program 152 selects the video content suitable for the user's recent preferences and creates the integrated video content.
  • the above is an example of the update process of the video feature parameter by the feature point learning program 153, and various other update processes are assumed.
  • a video feature parameter update process using a family profile is assumed.
  • the family profile here is information about a family held by some information devices (such as the mobile phone 24) on the home LAN 1 and the external network 2.
  • video content recorded by each family member is stored in the HDD recorder 21 in association with recording categories such as “father”, “mother”, “sister”, and “brother”.
  • information such as viewing history of each family member and program reservation is also recorded.
  • browsing history of web pages, photos, music, and the like are stored in the document folder of each family member of the notebook PC 23.
  • the family profile learning program 155 collects family profiles scattered in each information device and constructs a family profile database 159 in the HDD 150. Further, the family profile in the family profile database 159 is updated based on the reproduction history information or the like, or the family profile is added. As an example, the family profile is updated or added by estimating the family preference based on the content of the reproduced scene, the reproduction frequency, and the like.
  • Specifically, operator information is also input in the GUI (Graphical User Interface) used for inputting content creation instruction information. The operator information is then associated with the reproduction history information generated when reproduction of the integrated video content corresponding to that content creation instruction information finishes. By using the operator information associated with the reproduction history information, the reproduction history information of each family member can be separated out of all the reproduction history information.
  • From the classified reproduction history information, the preference of each family member is estimated (for example, by the factor analysis described in the next paragraph), and the family profile of each family member can be updated or added to.
  • the family profile learning program 155 also performs a family behavior pattern analysis based on the family profile by a data mining method or the like, and accumulates the analysis results in the family profile database 159 as a family profile.
  • family characteristics can be analyzed using multivariate analysis such as factor analysis, and a new family profile can be generated.
  • an n-dimensional virtual space having each of n types of video feature parameters as coordinate axes is defined, and video content is distributed in the n-dimensional virtual space based on meta information.
  • Next, the distribution of the video content in the n-dimensional virtual space is mapped to a lower-order m-dimensional virtual space (here, a three-dimensional virtual space defined with each principal component as a coordinate axis).
  • the distribution of video content in the three-dimensional virtual space expresses the characteristics that the family potentially has.
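The principal-component mapping described in this passage can be illustrated with a power-iteration sketch that finds the first principal axis of the content distribution; a real implementation would deflate the covariance matrix and repeat to obtain all m components. All names here are hypothetical:

```python
def principal_axis(points, iters=200):
    """Power-iteration sketch: compute the first principal component
    of the distributed video content. Pure Python, illustration only.

    points -- list of n-dimensional feature vectors (lists of floats)
    """
    n, d = len(points), len(points[0])
    mean = [sum(p[j] for p in points) / n for j in range(d)]
    centered = [[p[j] - mean[j] for j in range(d)] for p in points]
    # covariance matrix of the centered distribution
    cov = [[sum(row[a] * row[b] for row in centered) / n
            for b in range(d)] for a in range(d)]
    v = [1.0] * d
    for _ in range(iters):
        w = [sum(cov[a][b] * v[b] for b in range(d)) for a in range(d)]
        norm = sum(c * c for c in w) ** 0.5
        v = [c / norm for c in w]
    return v  # unit vector along the dominant eigen-direction
```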
  • Using the family profile expressed as such a distribution, it is possible to update the conversion coefficients for the feature points, the weight values corresponding to each feature point, and the like. It is also possible to select a scenario suited to the family profile expressed by the distribution, or to download one from a server on the Internet.
  • Alternatively, a new family profile can be generated using a technique such as cluster analysis. For example, the n types of video feature parameters are classified into two clusters: a cluster of parameters that are frequently updated (for example, by weighting according to the playback state of the video content) and a cluster of parameters that are not. Based on this classification, family characteristics are extracted to generate a family profile; for example, family features can be extracted by focusing on the former cluster.
  • the family profile stored in the family profile database 159 can be used for various processes.
  • these family profiles include, for example, the height and voice of each family member, the color and pattern of favorite clothes, favorite sports, age, and the like.
  • For example, the reference data used by the recognition algorithms can be updated based on the family profile to improve the accuracy of recognition algorithms such as motion recognition, object recognition, and voice recognition for each family member.
  • the integrated video content creation program 152 selects a more appropriate video content in response to a user instruction, and creates the integrated video content.
  • the integrated video content creation program 152 can select video content by directly using the family profile stored in the family profile database 159. For example, consider a case where “Father” is included in the content creation instruction information. In this case, the integrated video content creation program 152 accesses the family profile database 159 and searches for a family profile related to father's preference and the like. Then, based on the retrieved family profile, video content related to father's preference or the like is selected to create integrated video content.
  • The family profile can also be used for weighting the video feature parameters. That is, the feature point learning program 153 can use the family profile to update the conversion coefficients in the same manner as the process of S22 in FIG. 7, and thereby update the meta information in the meta information database 157. As an example, in a family with many children, the conversion coefficients or weight values are changed so that the weight of the “children” video feature parameter is lightened, diluting the “children” feature so that other features stand out.
  • the family profile can also be used to edit each scenario in the scenario database 158.
  • the scenario learning program 154 edits each scenario using the family profile.
  • For example, when the family profile stored in the family profile database 159 shows many children's photos, an existing scenario is edited so that the child's scenes become longer, or a new scenario featuring the child as the main subject is created.
  • Family profiles can also be shared among the information devices. In that case, the meta information can be updated effectively using the family profiles without collecting the family profiles scattered in each information device into the home server 100.
  • The home server 100 transmits the family profile of the entire family or of each family member, collected, added, and updated as described above, to the information providing server 200 (see FIG. 1) on the external network 2 via the gateway server 10.
  • Based on the received family profile, the information providing server 200 transmits advertisement information, video content, and the like suited to the preferences of the family or of each family member to the home server 100.
  • The advertisement information is displayed on-screen over the integrated video content, for example.
  • Like the feature point learning program 153, the scenario learning program 154 edits scenarios and updates scenario feature parameters, scene feature parameters, and the like based on the reproduction history information and the like. For example, consider a case where the integrated video content created using the scenario 1582 of FIG. is reproduced. According to the reproduction history information at this time, scenes 4, 8, 12, 16, and 20 were repeated while the other scenes were skipped. In this case, the video of the entire family appears to be what the user wanted to watch. The scenario learning program 154 therefore edits the scenario 1582 so as, for example, to lengthen the clipping times of scenes 4, 8, 12, 16, and 20 and shorten those of the other scenes. Also, for example, the scene feature parameter weight values of scenes 4, 8, 12, 16, and 20 are increased while those of the other scenes are decreased.
  • the scenario learning program 154 may edit the scenario so that the clipping time becomes longer as the scene has a larger number of repeats.
  • the clipping time is increased or decreased according to the number of repeats, for example, linearly, exponentially, or logarithmically.
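The three scalings named above (linear, exponential, logarithmic) might be sketched as follows; the base time, step size, and function name are assumptions, not part of the patent:

```python
import math

def clipping_time(base_secs, repeat_count, mode="log", step=1.0):
    """Hypothetical sketch: lengthen a scene's clipping time
    linearly, exponentially, or logarithmically with its repeat
    count."""
    if mode == "linear":
        return base_secs + step * repeat_count
    if mode == "exp":
        return base_secs * (1.0 + step) ** repeat_count
    return base_secs + step * math.log1p(repeat_count)  # "log"
```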
  • The scenario learning program 154 can also change the weighting value of each scenario feature parameter based on the number of times the scenario is selected in S11 of FIG. 5. For example, treating a scenario with a large number of selections as a high-quality scenario, the weight value of each of its scenario feature parameters is changed so that the scenario is selected even more often.
  • the feature point learning program 153 and the scenario learning program 154 feed back various parameters to appropriate values based on the reproduction history information and the like.
  • Such feedback processing is performed not only by the feature point learning program 153 but also by the integrated video content creation program 152.
  • For example, the integrated video content creation program 152 updates the threshold used for the scenario selection process in S11 of FIG. 5 based on all the meta information stored in the meta information database 157. That is, the video feature parameters of each piece of meta information, as the feature points of each video content, are clustered into, for example, two clusters by a clustering method such as the K-means method, the center of each cluster is calculated, and the intermediate value of these centers is set as the threshold. In this way, an optimum threshold is set according to the tendency of the video content stored in the content database 156.
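A sketch of the described threshold update: 1-D K-means with k=2 on one video feature parameter across all meta information, with the threshold taken as the midpoint of the two cluster centers. Function and variable names are assumptions:

```python
def scenario_threshold(values, iters=20):
    """Cluster one feature parameter's values into two groups
    (1-D K-means, k=2) and return the midpoint of the two cluster
    centers as the scenario-selection threshold of S11."""
    c_lo, c_hi = min(values), max(values)
    for _ in range(iters):
        lo = [v for v in values if abs(v - c_lo) <= abs(v - c_hi)]
        hi = [v for v in values if abs(v - c_lo) > abs(v - c_hi)]
        if lo:
            c_lo = sum(lo) / len(lo)
        if hi:
            c_hi = sum(hi) / len(hi)
    return (c_lo + c_hi) / 2.0
```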
  • the integrated video content creation program 152 and the scenario learning program 154 may be linked to execute the following feedback processing. That is, when the clipping target is, for example, a laughing scene, the integrated video content creation program 152 clips a scene from n seconds before the start of laughing to laughing. The integrated video content creation program 152 randomly sets “n seconds” at this time for each clipping.
  • The scenario learning program 154 analyzes the reproduction history information of the laughing scenes, calculates the n' seconds determined to be optimal based on the analysis result, and passes it to the integrated video content creation program 152. Thereafter, the integrated video content creation program 152 clips each scene from n' seconds before the start of laughing to the laughing.
  • n seconds are set as a time of 2 seconds or more and less than 10 seconds.
  • n seconds may be set randomly between 2 seconds and less than 10 seconds, or may be set to a time reflecting the user's intention to some extent by user operation.
  • For example, the probability that a first time (for example, a time of 2 seconds or more and less than 3 seconds) is set as n seconds can be set to 30%, the probability that a second time (for example, a time of 3 seconds or more and less than 5 seconds) is set can be set to 40%, and the probability that a third time (for example, a time of 5 seconds or more and less than 10 seconds) is set can be set to 30%.
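The probability scheme above could be sketched like this; drawing uniformly inside the chosen band is an added assumption, as are all names:

```python
import random

def pick_n_seconds(rng=random):
    """Hypothetical sketch: 30% chance of the first band [2, 3)s,
    40% of the second [3, 5)s, 30% of the third [5, 10)s."""
    r = rng.random()
    if r < 0.30:
        lo, hi = 2.0, 3.0
    elif r < 0.70:
        lo, hi = 3.0, 5.0
    else:
        lo, hi = 5.0, 10.0
    return lo + rng.random() * (hi - lo)  # uniform inside the band
```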
  • n seconds are set at random.
  • the time zone immediately before the occurrence of an event (here, laughter)
  • specific content (for example, a family photo)
  • the clipping time and period in this case may also be learned by various learning programs, and a clipping time and period suitable for the user may be further set based on the learning result.
  • In the first embodiment described above, the home server 100 automatically collects the video content stored in information devices such as the HDD recorder 21 and then creates the integrated video content.
  • As a second embodiment, another configuration will now be described in which the home server does not have such an automatic video content collection function, or does not perform the automatic collection processing.
  • In the second embodiment, the home server and each information device perform a linkage process different from that in the first embodiment, so that the integrated video content can be automatically edited and created without imposing any operation or work burden on the user.
  • the network configuration of the second embodiment is the same as that shown in FIG.
  • the same or similar components as those of the first embodiment are denoted by the same or similar reference numerals, and description thereof is omitted.
  • In the second embodiment, each information device such as the HDD recorder 21 has its own meta information database 157 and adds meta information to the video content in place of the home server, storing it in that database. That is, at the same time as recording video content, each information device performs the same processing as S3 to S5 in FIG. 3: analysis of the video content feature points (S3), generation of meta information based on the analyzed feature points (S4), and storage of the meta information in the meta information database 157 (S5). Since the video content is not collected by the home server even after the processing of S3 to S5, it remains scattered on the network.
  • FIG. 8 is a block diagram showing the configuration of the home server 100z installed in the home LAN 1 of the second embodiment.
  • the HDD 150 of the home server 100z includes a meta information collection program 151z, an integrated video content creation program 152z, a feature point learning program 153, a scenario learning program 154, a family profile learning program 155, a meta information database 157z, a scenario database 158, and a family profile database. 159 is stored.
  • FIG. 9 is a flowchart showing meta information collection processing executed by the meta information collection program 151z.
  • the meta information collection program 151z periodically accesses the meta information database 157 of each information device after being resident in the RAM 140 (S31).
  • the meta information collection program 151z accumulates the meta information of each information device collected in this way in the meta information database 157z (S34).
  • When the meta information collection program 151z accumulates meta information, content identification information for identifying the video content (for example, the video content name or an ID assigned to the video content) and location information of the video content (for example, the information device's MAC address or other unique ID, the URL (Uniform Resource Locator) of the video content, or the unique ID of the removable media when the video content is recorded on removable media) are added to the meta information.
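For illustration, attaching identification and location information to a collected meta information record (S34) might look like the following; the record layout and field names are assumptions, not the patent's format:

```python
def annotate_meta(meta, content_id, device_id, url):
    """Hypothetical sketch of S34: attach content identification
    information and location information to a collected meta
    information record before storing it in database 157z."""
    rec = dict(meta)
    rec["content_id"] = content_id            # e.g. content name or assigned ID
    rec["location"] = {"device": device_id,   # e.g. MAC address or other unique ID
                       "url": url}            # URL of the video content
    return rec
```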
  • FIG. 10 is a sequence diagram showing processing for creating integrated video content.
  • When content creation instruction information is input to one information device (the notebook PC 23 in this case), the processing for creating integrated video content shown in FIG. 10 is started (S41).
  • the notebook PC 23 transmits the input content creation instruction information to the home server 100z (S42).
  • the CPU 120 starts execution of the integrated video content creation program 152z.
  • the integrated video content creation program 152z performs processing similar to S11 to S12 in FIG. 5, that is, selection of a scenario suitable for the content creation instruction information, access to the meta information database 157z in which meta information is centrally managed, and each of the selected scenarios. Meta information suitable for the scene is searched (S43).
  • A response message, that is, the searched meta information together with the selected scenario, is returned to the notebook PC 23 (S44).
  • The notebook PC 23 determines the access destinations with reference to the location information of the video content included in the received meta information (S45). Next, a request message including the content identification information or URL is transmitted to each information device corresponding to a determined access destination, that is, to each device holding the video content indicated by the meta information (S46). Depending on the search result of the home server 100z, the URL of video content held by the notebook PC 23 itself may be included among the access destinations.
  • Each information device that receives a request message from the notebook PC 23 searches the video content it holds for the content specified by the content identification information or URL in the request message (S47) and returns it to the notebook PC 23 as a response (S48). In this way, the video content of each scene necessary for the integrated video content is collected in the notebook PC 23. Using the collected video content and the selected scenario received from the home server 100z, the notebook PC 23 performs the same processing as S14 to S15 in FIG. 5 to create the integrated video content (S49). The created integrated video content is decoded and reproduced by the video codec of the notebook PC 23 (S50).
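The fan-out of request messages in S45 to S46 can be sketched as a grouping of content identification information by access destination; the data layout here is an assumption for illustration:

```python
def plan_requests(meta_list):
    """Hypothetical sketch of S45-S46: group content identification
    information by access destination, read from the location
    information in each meta information record, so that one request
    message can be sent per information device."""
    plan = {}
    for m in meta_list:
        plan.setdefault(m["location"], []).append(m["content_id"])
    return plan
```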
  • In this way, the integrated video content can be automatically edited and created without collecting the video content held by each information device into the home server 100z, that is, into one storage device. The user therefore bears no operation or work burden associated with the collection of video content.
  • meta information updating, scenario editing, family profile updating, and the like may be performed by various learning programs as in the first embodiment.
  • the resident program according to the feature of the present invention may be scattered in each information device in the home LAN 1, or all information devices may have the resident program.
  • Various databases may be scattered in each information device in the home LAN 1.
  • the home server 100 itself may record and store video content and reproduce it. This means that the home server 100 functions as a DMS and a DMP, so that the present invention can be realized by the home server 100 alone.
  • the content to be processed includes not only video content but also any type of content included in the content definition described above.
  • By processing content of a plurality of formats, new mixed content combining those formats can be created.
  • the timing for collecting the playback history information is not limited to the regular timing.
  • each information device can access the home server 100 simultaneously with generating the reproduction history information and the like, and transfer the reproduction history information and the like to the home server 100 in real time.
  • the device that collects reproduction history information and the like is not limited to the home server 100.
  • any information device in the home LAN 1 may collect reproduction history information and the like held by each information device regularly or in real time and transfer them to the home server 100.
  • Each information device may also generate meta information at the same time as it records the video content.
  • the home server 100 collects meta information together with the video content.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Databases & Information Systems (AREA)
  • Computer Graphics (AREA)
  • Television Signal Processing For Recording (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
  • Processing Or Creating Images (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

A method for executing the management of contents of respective information devices by a server home-networked with the information devices storing contents evaluated with respect to at least one feature point.  The content management method comprises a feature point evaluation collecting step for collecting the feature point evaluations of the respective contents from the information devices, a feature point evaluation storing step for storing the collected feature point evaluations, a content selecting step wherein when a content is requested from an information device, the stored feature point evaluations are searched and a content having feature point evaluations suited to the request is selected, and a content location information providing step for informing the information device of location information relating to the selected content.

Description

Content management method, content automatic editing method, content management program, content automatic editing program, server, information device, and content automatic editing system
 The present invention relates to a content management method, an automatic content editing method, a content management program, an automatic content editing program, a server, an information device, and an automatic content editing system suitable for automatically collecting and editing content scattered on a home network.
 In recent years, as information storage media such as semiconductor memories and HDDs (Hard Disk Drives) have grown in capacity and fallen in price, information home appliances such as DCCs (Digital CamCorders), DSCs (Digital Still Cameras), and HDD recorders, as well as PCs (Personal Computers), have come to be equipped with large-capacity storage devices. As a result, a user can shoot one picture after another with, for example, a DCC or DSC without worrying about the remaining capacity of the storage device, accumulating large numbers of moving image files, still image files, and the like in the storage device (a memory card or the like). It is also possible to record large amounts of broadcast or network-transmitted video digital content on the HDD of an HDD recorder, PC, or the like. Hereinafter, in this specification, moving image files, still image files, video digital content, and the like are referred to as "video content". Note that the term "content" in this specification includes not only video content but also data composed of markup documents, audio data, document data, worksheets, or combinations thereof.
 Incidentally, the large amount of video content accumulated in such large-capacity storage devices includes many items with similar contents. Viewing all of the video content is therefore tedious and time-consuming. For this reason, a plurality of video contents are edited and integrated using an authoring tool or the like into a single video content (hereinafter referred to as "integrated video content"). Examples of apparatuses that edit a plurality of video contents to create integrated video content are disclosed in Japanese Patent Laid-Open No. 2005-303840 (hereinafter "Patent Document 1") and Japanese Patent Laid-Open No. 2000-125253 (hereinafter "Patent Document 2").
 The moving image editing apparatus described in Patent Document 1 has a moving image storage unit in which a plurality of moving images are stored. The apparatus searches the moving image storage unit for moving images according to search conditions input by the user and displays the search results as a list. The user selects desired moving images from the displayed list and completes a single video by arranging the selected moving images in time series.
 The moving image editing apparatus described in Patent Document 2 extracts portions (cutout ranges) including a scene specified by the user from all the moving image material and creates a composite list of the extracted cutout ranges. The cutout ranges corresponding to the specified scene are then reproduced continuously according to the created list. In Patent Document 2, a series of operations from selection of moving images through list creation to editing and reproduction is thus performed automatically on the apparatus side.
 To perform video editing processing in the moving image editing apparatuses of Patent Documents 1 and 2 configured in this way, it is premised that all of the source video content is stored in a single storage device. In practice, however, video content is scattered across information devices such as HDD recorders and home servers (for example, PCs). Therefore, to use conventional video editing technology, the video content of all the information devices must be gathered into a single storage device, forcing complicated operations and work on the user. Specifically, to gather the video content into the recording device, the user must check whether video content is stored in each information device, and connect each device on which video content is found to the storage device one by one to perform copy or move operations. To reduce this operational and work burden on users, it has therefore been desired to provide an apparatus, system, method, program, and the like that automatically edit the content scattered across the information devices.
 The present invention has been made in view of the above circumstances, and its object is to provide a content management method, an automatic content editing method, a content management program, an automatic content editing program, a server, an information device, and an automatic content editing system suitable for automatically collecting and editing the video content held in each information device.
 A content management method according to one aspect of the present invention that solves the above problem is executed by a server connected, via a home network, to a plurality of information devices storing content evaluated with respect to at least one feature point, and manages the content of each information device. The method includes a feature point evaluation collecting step of collecting the feature point evaluations of each content from the plurality of information devices; a feature point evaluation storing step of storing the collected feature point evaluations; a content selection step of, when content is requested from an information device, searching the stored feature point evaluations and selecting content having feature point evaluations suited to the request; and a content location information notifying step of notifying the information device of location information of the selected content.
 By using the feature point evaluations of the individual pieces of content, centrally managed in this way, to notify an information device of the location of content, the server can support automatic content collection, editing, and other processing by the information device.
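For illustration only, the four server-side steps above can be sketched as follows. This is a minimal sketch, not the claimed implementation; the class, method, and field names (`ContentRegistry`, `collect`, `select`, the threshold of 0.6) are hypothetical stand-ins.

```python
# Minimal sketch of the claimed server-side steps; all names are illustrative.

class ContentRegistry:
    """Centrally stores feature point evaluations collected from devices."""

    def __init__(self):
        # content location -> {feature point: evaluation in [0, 1]}
        self.evaluations = {}

    def collect(self, device_reports):
        """Feature point evaluation collecting and storing steps."""
        for location, evaluation in device_reports.items():
            self.evaluations[location] = evaluation

    def select(self, request, threshold=0.6):
        """Content selection step: find content whose evaluation suits the request."""
        return [loc for loc, ev in self.evaluations.items()
                if all(ev.get(fp, 0.0) >= threshold for fp in request)]

# Content location information notification step: the selected locations
# are what would be sent back to the requesting information device.
registry = ContentRegistry()
registry.collect({
    "hdd-recorder-21/clip01.mpg": {"sister": 0.9, "birthday": 1.0},
    "notebook-pc-23/clip07.mpg": {"sister": 0.2, "birthday": 0.8},
})
locations = registry.select(["sister", "birthday"])
```

In this toy run only the first clip clears the threshold for both requested feature points, so only its location would be notified.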
 The content management method may further include a scenario storing step of storing a plurality of scenarios, each associated with at least one feature point evaluation, and a scenario selection step of, when a request is made, selecting from the plurality of scenarios a scenario whose feature point evaluation suits the request. In this case, content is selected in the content selection step based on the selected scenario.
 Further, in the content management method, when a scenario is composed of a plurality of scenes each associated with a different feature point evaluation, content for each scene may be selected in the content selection step based on the feature point evaluation of each scene of the selected scenario.
 An automatic content editing method according to one aspect of the present invention that solves the above problem is executed by an information device connected, via a home network, to other information devices each storing content that has been evaluated with respect to at least one feature point, and to a server that manages the content stored in each information device in association with the feature points. The method includes: a content requesting step of requesting content from the server; a content collecting step of, when the server gives notice of the location information of content whose feature point evaluation suits the request, collecting that content from the respective information devices based on the location information; and a content editing step of editing the collected content.
 By accessing the server (that is, the centrally managed feature point evaluations of each piece of content) to obtain content location information in this way, the information device can automatically collect and edit content without any user operation.
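The client-side steps can likewise be sketched. This is an assumed illustration only: the server stub, the `device/path` location format, and the "editing" step (simple concatenation) are placeholders, not the patent's own interfaces.

```python
# Hypothetical sketch of the client-side steps; device APIs are stand-ins.

class StubServer:
    """Stand-in for the server holding centrally managed feature point evaluations."""
    def __init__(self, evaluations):
        self.evaluations = evaluations

    def select(self, request, threshold=0.6):
        return [loc for loc, ev in self.evaluations.items()
                if all(ev.get(fp, 0.0) >= threshold for fp in request)]

def request_content(server, request):
    """Content requesting step: the server replies with location information."""
    return server.select(request)

def collect_content(locations, devices):
    """Content collecting step: fetch each piece of content from the device holding it."""
    collected = []
    for loc in locations:
        device, _, path = loc.partition("/")
        collected.append(devices[device][path])
    return collected

def edit_content(clips):
    """Content editing step: here simply joins the collected clips in order."""
    return "".join(clips)

devices = {"hdd-recorder-21": {"a.mpg": "[scene A]"},
           "tv-22": {"b.mpg": "[scene B]"}}
server = StubServer({"hdd-recorder-21/a.mpg": {"sister": 0.9},
                     "tv-22/b.mpg": {"sister": 0.8}})
locations = request_content(server, ["sister"])
edited = edit_content(collect_content(locations, devices))
```

The point of the sketch is the division of labor: the client never scans devices itself; it only dereferences the location information the server returns.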
 The automatic content editing method may further include a content reproduction step of reproducing the edited content.
 The automatic content editing method may further include a content evaluation step of evaluating content with respect to at least one feature point when the content is stored.
 An automatic content editing method according to another aspect of the present invention that solves the above problem relates to a method of automatically editing content through coordinated processing between a plurality of information devices, each storing content evaluated with respect to at least one feature point, and a server connected to the plurality of information devices via a home network. The steps executed by the server in this automatic content editing method include: a feature point evaluation collecting step of collecting the feature point evaluations of each piece of content from the plurality of information devices; a feature point evaluation storing step of storing the collected feature point evaluations; a content selection step of, when content is requested by an information device, searching the stored feature point evaluations and selecting content whose feature point evaluation suits the request; and a content location information notification step of notifying the information device of the location information of the selected content. The steps executed by the information device include: a content requesting step of requesting content from the server; a content collecting step of, when the server gives notice of the location information of content whose feature point evaluation suits the request, collecting that content from the respective information devices based on the location information; and a content editing step of editing the collected content.
 A content management program according to one aspect of the present invention that solves the above problem is a program for causing a computer to execute the steps of any of the content management methods described above.
 An automatic content editing program according to one aspect of the present invention that solves the above problem is a program for causing a computer to execute the steps of any of the automatic content editing methods described above.
 A server according to one aspect of the present invention that solves the above problem is connected, via a home network, to a plurality of information devices each storing content evaluated with respect to at least one feature point, and includes: feature point evaluation collecting means for collecting the feature point evaluations of each piece of content from the plurality of information devices; feature point evaluation storage means for storing the collected feature point evaluations; content selection means for, when content is requested by an information device, searching the feature point evaluation storage means and selecting content whose feature point evaluation suits the request; and content location information notification means for notifying the information device of the location information of the selected content.
 The server may further include scenario storage means for storing a plurality of scenarios, each associated with at least one feature point evaluation, and scenario selection means for, when a request is made by an information device, selecting from the plurality of scenarios a scenario whose feature point evaluation suits the request. In this case, the content selection means selects content based on the selected scenario.
 Further, when a scenario is composed of a plurality of scenes each associated with a different feature point evaluation, the content selection means may be configured to select content for each scene based on the feature point evaluation of each scene of the selected scenario.
 An information device according to one aspect of the present invention that solves the above problem is connected, via a home network, to other information devices each storing content evaluated with respect to at least one feature point, and to a server that manages the content stored in each information device in association with the feature points. The information device includes: content requesting means for requesting content from the server; content collecting means for, when the server gives notice of the location information of content whose feature point evaluation suits the request, collecting that content from the respective information devices based on the location information; and content editing means for editing the collected content.
 The information device configured in this way may further include content reproduction means for reproducing the edited content.
 The information device may further include content storage means for storing content, and content evaluation means for evaluating content with respect to at least one feature point when the content is stored in the content storage means.
 The content processed by the above server or information device includes, for example, video content. Video content is merely an example; other possibilities include markup documents, audio data, document data, worksheets, and data composed of combinations thereof.
 An automatic content editing system according to one aspect of the present invention that solves the above problem relates to a system including a plurality of information devices as described above and a server connected to the plurality of information devices via a home network and configured to select content for each scene based on the feature point evaluation of each scene. The content editing means of an information device provided in the system performs editing so that the content selected for each scene is reproduced in the order defined by the scenario.
 According to the content management method, automatic content editing method, content management program, automatic content editing program, server, information device, and automatic content editing system of the present invention, video content held in the individual information devices is automatically collected and edited, reducing the operational and work burden on the user.
FIG. 1 is a network configuration diagram for explaining an embodiment of the present invention.
FIG. 2 is a block diagram showing the configuration of a home server according to the embodiment of the present invention.
FIG. 3 is a flowchart showing content collection processing executed by the content collection program of the embodiment of the present invention.
FIG. 4 is a diagram showing an example of meta information generated by the processing of S4 in FIG. 3.
FIG. 5 is a flowchart showing integrated video content creation processing executed by the integrated video content creation program of the embodiment of the present invention.
FIG. 6 is a diagram showing example scenarios stored in the scenario database of the embodiment of the present invention.
FIG. 7 is a flowchart showing feature point learning processing executed by the feature point learning program of the embodiment of the present invention.
FIG. 8 is a block diagram showing the configuration of a home server according to another embodiment.
FIG. 9 is a flowchart showing meta information collection processing executed by the meta information collection program of another embodiment.
FIG. 10 is a sequence diagram showing processing for creating integrated video content in another embodiment.
 Embodiments of the present invention will now be described with reference to the drawings.
 FIG. 1 is a network configuration diagram for explaining the present embodiment. As shown in FIG. 1, the network of the present embodiment is built from a home LAN (Local Area Network) 1 and an external network 2.
 On the home LAN 1, a gateway server 10 and a plurality of information devices (an HDD recorder 21, a TV (television) 22, a notebook PC 23, a home server 100, and so on) are arranged. The gateway server 10 has switching and routing functions; it interconnects the information devices within the home LAN 1 and can communicate with terminals located on the external network 2 or on other networks not shown.
 Information devices other than the HDD recorder 21, TV 22, and notebook PC 23 are also connected to the home LAN 1, as are information appliances such as a microwave oven and a refrigerator. For convenience, information appliances are hereinafter also referred to as information devices. All information devices connected to the home LAN 1 are equipped with middleware and client software conforming to a common technical specification for home networks and are thus configured for home LAN connection. In the present embodiment, each information device conforms to DLNA (Digital Living Network Alliance), a common technical specification; the HDD recorder 21 functions as a DMS (Digital Media Server), the TV 22 as a DMP (Digital Media Player), and the notebook PC 23 as both a DMS and a DMP. The home server 100 is a desktop PC and, like the notebook PC 23, functions as both a DMS and a DMP. The home LAN may instead be built from information devices conforming to other technical specifications such as HAVi (Home Audio/Video interoperability) or Jini.
 The home server 100 can collect and accumulate content stored in the information devices on the home LAN 1, including the HDD recorder 21. It can also acquire and accumulate, via the external network 2 and the gateway server 10, content stored in an information device located on the external network 2 (for example, a mobile phone 24). Content is accumulated in the home server 100 according to the settings of the home server 100 and the individual information devices, user operations, and the like.
 FIG. 2 is a block diagram showing the configuration of the home server 100. The elements constituting the home server 100 are interconnected with the CPU 120 via a system bus 110. After the home server 100 is powered on, the CPU 120 accesses the necessary hardware via the system bus 110.
 For example, immediately after the home server 100 is powered on, the CPU (Central Processing Unit) 120 accesses the ROM (Read-Only Memory) 130, loads the boot loader stored in the ROM 130 into the RAM (Random Access Memory) 140, and starts it. Having started the boot loader, the CPU 120 then loads the OS (Operating System) stored in the HDD 150 into the RAM 140 and starts it. Thereafter, the elements operate in coordination as needed, under the resource and process management of the OS, to execute various kinds of processing. After the OS starts, various resident programs stored in the HDD 150 are made resident in the RAM 140. These resident programs include a content collection program 151, an integrated video content creation program 152, a feature point learning program 153, a scenario learning program 154, and a family profile learning program 155, which relate to the features of the present invention. These resident programs are described below.
 FIG. 3 is a flowchart showing the content collection processing executed by the content collection program 151. As shown in FIG. 3, when the content collection program 151 becomes resident in the RAM 140, for example, it transmits a request message requesting video content (for example, in MPEG-2 (Moving Picture Experts Group phase 2) format) via the network interface 160 to each DMS in the same segment, that is, in the home LAN 1, individually by unicast (or by multicast) (step 1; hereinafter, "step" is abbreviated as "S" in this specification and the drawings). An information device that receives the request message consults its own content list or the like to check whether video content has been updated or added since the previous request message was received. If there is updated or added video content, the device uploads that video content to the home server 100.
 Next, the content collection program 151 stores the video content received from each terminal in a content database 156 in the HDD 150 and, at the same time, analyzes the video content to evaluate it with respect to various feature points (S2, S3). Here, a feature point is an element that characterizes video content, such as "grandmother" (subject person), "two people" (number of subjects), "laughing" (subject's facial expression), "eating" (subject's action), "yukata" (subject's clothing), "outdoors" (shooting location), "evening" (shooting time), "weekend" (shooting date), "looking down" (positional and angular relationship between subject and photographer), "birthday" (family annual event), or "zoom-in" (shooting pattern). Feature points include not only the characteristics of the video and audio recorded in the content but also information such as the photographer and the shooting date and time. In the feature point analysis, feature points of the images and audio in the video content are extracted and quantified by known recognition algorithms such as motion recognition, facial expression recognition, object recognition, and speech recognition. The quantified information on the various feature points of each piece of video content is called video feature parameters. When performing object recognition, speech recognition, and the like in the analysis, a family profile (described later) stored in a family profile database 159 is consulted to identify the subject (for example, "younger brother") or the photographer (for example, by the voice of "father"). The shooting date and time and the photographer can also be obtained from the time stamp and properties of the video content file. Alternatively, by using the family profile, the age of the subject at the time of shooting (and hence the shooting period of the video content) can be estimated from the subject's height, facial appearance, and the like.
 The content collection program 151 then generates a list of video feature parameters (hereinafter, "meta information") as the result of analyzing the video content with respect to each feature point (S4). Specifically, for each feature point a predetermined function is used to calculate a video feature parameter indicating the evaluation of the video content with respect to that feature point. FIG. 4 shows an example of the meta information generated by the processing of S4. As shown in FIG. 4, the meta information consists of a group of video feature parameters corresponding to feature points such as "father", "mother", "sister", and "brother". In the present embodiment, each video feature parameter is expressed as a numerical value from 0 to 1, for example. The meta information in FIG. 4 indicates that the video content is, for example, soundless video whose main subjects are the older sister and younger brother and which contains many zoomed-in scenes. The generated meta information is stored in a meta information database 157 in the HDD 150 (S5).
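As an illustration of S4, meta information can be modeled as a mapping from feature points to values in [0, 1], each produced by a predetermined per-feature-point function. The feature functions and the `analysis` input below are hypothetical placeholders for the recognition algorithms described above, not the patent's actual functions.

```python
# Hypothetical sketch of meta information generation (S4); the 'analysis'
# dict stands in for the raw output of the recognition algorithms.

def clamp01(x):
    """Keep every video feature parameter within the 0-1 range."""
    return max(0.0, min(1.0, x))

# Predetermined per-feature-point functions mapping raw analysis results
# to video feature parameters in [0, 1] (illustrative definitions only).
FEATURE_FUNCTIONS = {
    "sister":  lambda a: clamp01(a.get("sister_frames", 0) / a["total_frames"]),
    "zoom_in": lambda a: clamp01(a.get("zoom_in_shots", 0) / a["total_shots"]),
    "audio":   lambda a: 1.0 if a.get("has_audio") else 0.0,
}

def generate_meta_information(analysis):
    """Produce the list of video feature parameters ('meta information')."""
    return {fp: f(analysis) for fp, f in FEATURE_FUNCTIONS.items()}

meta = generate_meta_information(
    {"sister_frames": 900, "total_frames": 1000,
     "zoom_in_shots": 6, "total_shots": 10, "has_audio": False})
```

A result such as a high "sister" value, a moderate "zoom_in" value, and an "audio" value of 0 corresponds to the FIG. 4 example of a soundless, frequently zoomed-in video of the older sister.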
 In this way, the video content held by the information devices in the same segment is collected automatically by the content collection program 151, which performs feature point analysis and generates meta information. Video content held by information devices in other segments (for example, the mobile phone 24) is not collected automatically; such content is uploaded to and accumulated in the home server 100 only when the information device is operated manually. The settings of the home server 100 and the individual information devices may also be changed so that video content held by information devices in the same segment is likewise uploaded and accumulated only upon manual operation.
 FIG. 5 is a flowchart showing the integrated video content creation processing executed by the integrated video content creation program 152. The integrated video content creation program 152 creates integrated video content based on a scenario. FIGS. 6(a) and 6(b) show examples of scenarios (scenarios 1581 and 1582) stored in a scenario database 158 in the HDD 150. Each scenario is composed of one or more scene definitions S. A scene definition S consists of a plurality of types of scene feature parameters defining the characteristics of the scene (the video feature parameters required of the video content to be assigned to the scene), an allocated time parameter for the scene, and so on. Like the video feature parameters, each scene feature parameter is expressed as a value from 0 to 1. For example, the scene definition S defining scene 1 of scenario 1581 in FIG. 6(a) has scene feature parameters such as (sister, birthday, laughing, sadness, ...) = (0.8, 1, 0.8, 0.1, ...). The scenario itself is also associated with scenario feature parameters similar to the scene feature parameters. The scenario feature parameters are calculated, for example, from the scene feature parameters of the scene definitions constituting the scenario; alternatively, they may be parameters expressing the flow of the scenario as a whole, assigned independently of the scene feature parameters of the individual scene definitions. The scenarios and scene definitions are stored in advance in the scenario database 158, for example as templates. Scenarios and scene definitions can be created by the user, for example with a dedicated editor, or those distributed by video equipment manufacturers and the like can be downloaded from a server on the Internet, for example, and used.
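The scenario structure just described can be sketched as a small data model. This is an assumed illustration: the class names, and in particular the choice of a per-feature average as the derived scenario feature parameter, are one possible reading of "calculated based on the scene feature parameters of each scene definition", not the patent's specified formula.

```python
# Hypothetical data model for scenarios and scene definitions S.
from dataclasses import dataclass

@dataclass
class SceneDefinition:
    features: dict        # scene feature parameters, values in [0, 1]
    duration_s: float     # allocated time parameter for the scene

@dataclass
class Scenario:
    scenes: list

    def feature_parameters(self):
        """Scenario feature parameters derived from the scene definitions;
        here, simply the per-feature average over all scenes (an assumption)."""
        totals = {}
        for scene in self.scenes:
            for fp, v in scene.features.items():
                totals[fp] = totals.get(fp, 0.0) + v
        return {fp: t / len(self.scenes) for fp, t in totals.items()}

# A two-scene fragment in the spirit of scenario 1581 of FIG. 6(a).
scenario_1581 = Scenario([
    SceneDefinition({"sister": 0.8, "birthday": 1.0}, duration_s=25.0),
    SceneDefinition({"sister": 0.6, "birthday": 0.8}, duration_s=20.0),
])
params = scenario_1581.feature_parameters()
```

Under this model, the scenario-level "sister" and "birthday" values are what the selection step S11 would compare against the content creation instruction information.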
 Consider now a case in which the user operates a client such as the TV 22 or the notebook PC 23 connected to the home LAN 1 and inputs the theme of the video content to be viewed, the characters to appear, the total playback time, and so on. The input information is transmitted to the home server 100. When the home server 100 receives the input information, the CPU 120 starts executing the integrated video content creation program 152. For convenience of explanation, this input information is hereinafter referred to as "content creation instruction information".
 As shown in FIG. 5, the integrated video content creation program 152 first accesses the scenario database 158, consults the scenarios, and selects a scenario suited to the content creation instruction information (S11). When the content creation instruction information is "sister" and "birthday", for example, a scenario whose scenario feature parameters for both "sister" and "birthday" are at or above a predetermined value (for example, 0.6) is selected (here, scenario 1581).
 The integrated video content creation program 152 then accesses the meta information database 157 and searches for meta information suited to each scene of the selected scenario (S12). For scene 1 of scenario 1581, for example, it searches for the meta information of video content shot in 2002 whose video feature parameters for "sister" and "birthday" are at or above predetermined values.
 The integrated video content creation program 152 accesses the content database 156 and reads, for each scene of the selected scenario, the video content corresponding to the retrieved meta information (S13). For each scene, for example, the video content corresponding to the top-ranked meta information is read. The ranking of the meta information is determined according to the degree of match between the scene definition S (specifically, its scene feature parameters) and the meta information.
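The ranking in S12 and S13 can be sketched with one plausible match measure. The text only says a "degree of match" between scene feature parameters and meta information is used; the negative Euclidean distance below is an assumption, and the clip identifiers are invented for the example.

```python
# Hypothetical ranking of meta information against a scene definition.
# The match measure (negative Euclidean distance over the scene's feature
# points) is an assumed stand-in for the patent's 'degree of match'.

def match_degree(scene_features, meta):
    """Higher is better: zero means the meta information matches exactly."""
    return -sum((v - meta.get(fp, 0.0)) ** 2
                for fp, v in scene_features.items()) ** 0.5

def rank_content(scene_features, meta_database):
    """Return content identifiers ordered best match first (S12/S13)."""
    return sorted(meta_database,
                  key=lambda cid: match_degree(scene_features, meta_database[cid]),
                  reverse=True)

meta_database = {
    "clip01": {"sister": 0.9, "birthday": 1.0},
    "clip02": {"sister": 0.3, "birthday": 0.9},
}
ranked = rank_content({"sister": 0.8, "birthday": 1.0}, meta_database)
```

The top-ranked identifier is the one whose video content would be read from the content database 156 for that scene.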
 The integrated video content creation program 152 clips and arranges the read video content for each scene to generate scene videos (S14). Here, a corresponding scene video is generated for each of scenes 1 to 20 of scenario 1581. In the case of scene 1, for example, 25 seconds of video from the video content shot on the sister's birthday in 2002 is clipped as the scene video. The starting point of the clip on the time axis of the video content is set at random, for example. Also, since video with a long shooting time tends to become redundant in its latter half, the first half of the video is clipped preferentially.
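The clipping rule of S14 could be sketched as below. Restricting the random start point so the clip falls within the first half of the recording is one assumed reading of "the first half of the video is clipped preferentially"; other weightings would also satisfy the description.

```python
# Hypothetical sketch of the S14 clipping rule: random start point,
# biased toward the first half of long recordings (assumed reading).
import random

def clip_scene(content_length_s, clip_length_s, rng=None):
    """Return (start, end) of the clip window in seconds."""
    rng = rng or random.Random()
    # Latest allowed start so the clip stays within the first half.
    latest_start = max(0.0, content_length_s / 2 - clip_length_s)
    start = rng.uniform(0.0, latest_start)
    return start, start + clip_length_s

# A 25-second scene video clipped from a 10-minute recording.
start, end = clip_scene(content_length_s=600.0, clip_length_s=25.0)
```

For a 600-second recording and a 25-second scene, the clip always starts somewhere in the first 275 seconds, so the often-redundant latter half is never used.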
 The integrated video content creation program 152 arranges the generated scene videos in the order of scenes 1 to 20 and joins adjacent scene videos to create a continuous piece of video content, that is, the integrated video content (S15). Transition effects or the like may be used at the joins between scene videos to heighten the visual effect. The created integrated video content is transmitted to the client (that is, the sender of the content creation instruction information), which decodes and reproduces it with a video codec. The user can also save the created integrated video content in the HDD 150 as desired.
 When the integrated video content creation program 152 is executed in this way, the integrated video content is created automatically, so the user is spared the tedium of video editing; on the other hand, the resulting video depends on fixed algorithm design such as the feature point evaluation (meta information) and the scenario selection. That is, depending on the design of the algorithm, integrated video content that deviates from the user's intent may be created. The user can, for example, edit each scenario in the scenario database 158 so that the integrated video content is created as intended. However, improving the integrated video content through such scenario editing requires considerable trial and error; solving the above problem by scenario editing is therefore not effective, since the editing work is both tedious and difficult. In view of these circumstances, learning programs such as the feature point learning program 153, the scenario learning program 154, and the family profile learning program 155 are implemented in the home server 100 in order to improve the quality of the integrated video content while keeping video creation effortless.
 FIG. 7 is a flowchart showing the feature point learning process executed by the feature point learning program 153. As shown in FIG. 7, after becoming resident in the RAM 140, the feature point learning program 153 monitors the generation of meta information by the content collection program 151 and updates to predetermined shared information on the clients (S21 to S23).
 When the feature point learning program 153 detects that meta information has been generated by the content collection program 151 (S21, S22: YES), it updates, for example by an algorithm based on the TF-IDF (Term Frequency-Inverse Document Frequency) method, the conversion coefficients of the function used in S4 of FIG. 3, that is, the coefficients for converting the feature points of the video content into the video feature parameter group (S24). According to this algorithm, the tendencies of the video content stored in the content database 156 are analyzed on the basis of all the meta information stored in the meta information database 157. Suppose, for example, that the analysis finds that videos of smiling faces are numerous. In that case, video content containing many smiles is not distinctive among the video content stored in the content database 156; that is, it is difficult to differentiate video content by the "laughter" video feature parameter. The feature point learning program 153 therefore updates the conversion coefficients so that the weight of the "laughter" video feature parameter is reduced, deliberately diluting the "laughter" feature so that other features stand out. By using the updated conversion coefficients, the content collection program 151 comes to generate meta information that expresses the characteristics of the video content more accurately. As a result, the integrated video content creation program 152 comes to select video content better suited to the content creation instruction information when creating integrated video content. After updating the conversion coefficients, the feature point learning program 153 returns to S21.
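The patent names the TF-IDF method but does not give the actual formula applied to the conversion coefficients. One plausible sketch, assuming (as an illustration, not the claimed implementation) that each feature's coefficient is an IDF-like factor computed over all meta information, so that a feature present in almost every video, like "laughter" in the example above, is diluted:

```python
import math

def update_conversion_coefficients(meta_db, floor=0.01):
    """IDF-style conversion coefficients (S24 sketch): a feature that
    appears in most videos is given a light weight, a rare feature a
    heavy one. `meta_db` maps content IDs to {feature: value} dicts;
    `floor` is an assumed minimum so no coefficient reaches zero."""
    n = len(meta_db)
    features = {f for m in meta_db.values() for f in m}
    coeffs = {}
    for f in features:
        df = sum(1 for m in meta_db.values() if m.get(f, 0.0) > 0.0)
        # log(N / df) is 0 when every video has the feature, larger when rare
        coeffs[f] = max(math.log(n / df), floor) if df else floor
    return coeffs
```

Applying the returned coefficients when converting feature points into video feature parameters would reproduce the described behavior: the ubiquitous "laughter" feature is weighted down, and less common features stand out.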
 If the analysis in S24 finds that smiling videos are numerous, the single video feature parameter "laughter" may instead be subdivided into multiple video feature parameters such as "laughter", "loud laughter", and "giggling". In this case, the video content can be distinguished and characterized more finely according to the degree of laughter.
 Within a segment to which information devices conforming to the DLNA guidelines and the like are connected, the devices exchange information with one another and share various kinds of information. In the home LAN 1, therefore, detailed information on the content playback history at clients such as the TV 22 and the notebook PC 23 can be shared between the clients and the home server 100. A client updates playback history information placed in a shared folder, for example, each time playback of integrated video content ends. The playback history information includes, for example, information indicating in which scenes of the integrated video content operations such as play, skip, repeat, fast-forward, rewind, and stop were performed.
 The feature point learning program 153 periodically accesses the shared folder of each client and monitors whether the playback history information in the shared folder has been updated (S21 to S23). If the playback history information in a shared folder has been updated (S21, S22: NO, S23: YES), it uses that playback history information to update the weighting values, described below, held in the meta information database 157 (S25). Consider, for example, a case in which integrated video content created using scenario 1581 has been played on the TV 22, and the playback history information shows that scenes 1 to 16 were repeated while scenes 17 to 20 were not played. The feature point learning program 153 then raises the weighting values of all video feature parameters (or scene feature parameters) whose values exceed a certain level (for example, 0.6) in the meta information of the video content of the repeated scenes 1 to 16, and lowers the weighting values of all video feature parameters (or scene feature parameters) exceeding that level in the meta information of the video content of scenes 17 to 20. The HDD 150 holds a list (not shown) of the weighting values corresponding to the feature points; when the integrated video content creation program 152 searches for the meta information corresponding to each scene, it refers to this list and searches for meta information matching the value obtained by adding the weighting value to the scene feature parameters included in the scene definition S of the scenario. The feature point learning program 153 likewise refers to this list and updates the weighting values of the feature points when the playback history information is updated. Alternatively, the correlation between the repeat count and each video feature parameter may be computed and a weighting value assigned according to the correlation coefficient; the assigned weighting value is increased or decreased according to the repeat count, for example linearly, exponentially, or logarithmically. As a result, even when the content database 156 contains multiple pieces of video content with similar contents, the integrated video content creation program 152 comes to select the video content the user particularly wants to watch when creating integrated video content. After the weighting process, the feature point learning program 153 returns to S21.
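The weighting update of S25 can be sketched as follows, under stated assumptions: the weighting-value list is a dict, the 0.6 level from the text is used, and a fixed step size is assumed (the text also allows linear, exponential, or logarithmic scaling with the repeat count). All names here are illustrative:

```python
def update_weights(weights, meta_db, repeated, skipped, level=0.6, step=0.1):
    """S25 sketch: raise the weighting values of strong features in the
    meta information of repeated scenes' content, lower them for skipped
    or unplayed scenes' content. `step` is an assumed fixed increment."""
    for cid in repeated:
        for feat, val in meta_db[cid].items():
            if val > level:                       # only prominent features
                weights[feat] = weights.get(feat, 0.0) + step
    for cid in skipped:
        for feat, val in meta_db[cid].items():
            if val > level:
                weights[feat] = weights.get(feat, 0.0) - step
    return weights
```

At search time, the sketch corresponds to adding `weights[feat]` to each scene feature parameter before matching against the meta information, as described above.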
 As an alternative to monitoring the playback history information in the shared folders, the feature point learning program 153 may periodically retrieve the playback history information from each client's shared folder. In this case, in S25, the meta information in the meta information database 157 is updated on the basis of all the playback history information. For example, the more recent the playback date and time of a piece of video content, the higher the weighting value assigned to the video feature parameters in its meta information. As a result, the integrated video content creation program 152 comes to select video content suited to the user's recent preferences when creating integrated video content.
 The above is one example of the update processing of the video feature parameters by the feature point learning program 153; various other update processes are conceivable. Beyond the above, for example, the video feature parameters may be updated using a family profile. A family profile here is information about the family held by some of the information devices (such as the mobile phone 24) on the home LAN 1 and the external network 2. The HDD recorder 21, for example, stores video content recorded by each family member in association with recording categories such as "father", "mother", "sister", and "brother", and also records information such as each member's viewing history and program reservations. The document folder of each family member on the notebook PC 23 stores web browsing history, photographs, music, and so on. These pieces of information, scattered across the information devices, reflect the family's composition, each member's preferences, physical attributes, and the like.
 The family profile learning program 155 collects the family profiles scattered across the information devices and builds a family profile database 159 in the HDD 150. It also updates the family profiles in the family profile database 159, or adds new ones, on the basis of the playback history information and the like. As one example, it estimates family preferences from the content and playback frequency of the played scenes, and updates or adds to the family profiles accordingly. In addition, the GUI (Graphical User Interface) through which the content creation instruction information is entered can be made to accept operator information as well. The operator information is then associated with the playback history information generated when playback of the integrated video content corresponding to that content creation instruction information ends. By using the operator information associated with the playback history information, the playback history information of each family member can be separated out from the whole. Based on the playback history information classified in this way, together with other information such as each member's viewing history and program reservations, each member's preferences and the like can be estimated (for example, by generating a distribution expressing those preferences using the factor analysis described in the paragraph after next), and each member's family profile can be updated or extended.
 The family profile learning program 155 further analyzes the family's behavior patterns based on the family profiles, using data mining techniques and the like, and stores the analysis results in the family profile database 159 as family profiles. By collecting, managing, and analyzing in this way family profiles that were previously scattered across the information devices, useful new information may be discovered.
 For example, family characteristics can be analyzed using multivariate analysis such as factor analysis to generate a new family profile. Specifically, an n-dimensional virtual space is defined with each of the n kinds of video feature parameters as a coordinate axis, and the video content is distributed in this n-dimensional space according to its meta information. Next, m kinds of principal components (m < n, for example m = 3) are extracted by multivariate analysis of the distribution of the video content, and principal component vectors holding the values of those components are obtained. Based on the obtained principal component vectors, the distribution of the video content in the n-dimensional virtual space is mapped into a lower-order m-dimensional virtual space (here, a three-dimensional virtual space whose coordinate axes are the principal components). The distribution of the video content in this three-dimensional space expresses characteristics that the family latently possesses. A family profile expressed as such a distribution can be used to update the conversion coefficients for the feature points, the weighting values corresponding to the feature points, and so on. A scenario suited to the family profile expressed by the distribution can also be selected, or downloaded from a server on the Internet.
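The mapping from the n-dimensional feature space to the m-dimensional principal space described above can be sketched with a standard principal component analysis via the eigendecomposition of the covariance matrix. This is a generic illustration of the technique, not the patent's specific implementation; NumPy and the function name are assumptions:

```python
import numpy as np

def map_to_principal_space(X, m=3):
    """Project video content distributed in n-dimensional feature space
    (rows = videos, columns = the n video feature parameters) onto its
    top-m principal components (m = 3 in the text's example)."""
    Xc = X - X.mean(axis=0)                    # center the distribution
    cov = np.cov(Xc, rowvar=False)             # n x n covariance matrix
    vals, vecs = np.linalg.eigh(cov)           # eigenvalues, ascending
    top = vecs[:, np.argsort(vals)[::-1][:m]]  # top-m principal vectors
    return Xc @ top                            # coordinates in m-D space
```

The resulting m-dimensional point cloud is the kind of distribution the text proposes as a family profile; cluster positions and spreads in this space can then drive coefficient and weighting updates.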
 A new family profile can also be generated using techniques such as cluster analysis. For example, the n kinds of video feature parameters can be divided into two clusters: parameters whose weights and the like are frequently updated according to the playback status of the video content, and parameters that are updated infrequently. Family characteristics are then extracted based on this classification to generate a family profile; for example, the characteristics can be extracted by focusing on the former cluster.
 The family profiles accumulated in the family profile database 159 can be put to use in various kinds of processing. Suppose, for example, that these profiles include each family member's height and voice, favorite clothing colors and patterns, favorite sports, age, and the like. In that case, the recognition reference data used by the recognition algorithms can be updated on the basis of the family profiles, improving the accuracy of algorithms such as motion recognition, object recognition, and speech recognition for each family member. As a result, the integrated video content creation program 152 comes to select video content more appropriate to the user's instructions when creating integrated video content.
 The integrated video content creation program 152 can also select video content by making direct use of the family profiles accumulated in the family profile database 159. Consider, for example, the case where the content creation instruction information includes "father". In this case, the integrated video content creation program 152 accesses the family profile database 159 and searches for family profiles relating to the father's preferences and the like, then selects video content relating to those preferences based on the retrieved profiles to create the integrated video content.
 The family profiles can also be used for the weighting of the video feature parameters. That is, the feature point learning program 153 can use a family profile to update the conversion coefficients, as in the processing of S24 of FIG. 7, and to update the meta information in the meta information database 157. As an example, for a family with many children, the conversion coefficients are updated or the weighting values adjusted so that the weight of the "children" video feature parameter is reduced, diluting the "children" feature so that other features stand out.
 The family profiles can further be used for editing the scenarios in the scenario database 158. Specifically, the scenario learning program 154 edits each scenario using the family profiles. As one example, if the family profiles accumulated in the family profile database 159 contain many photographs of children, an existing scenario is edited so that the children's scenes become longer, or a new scenario centered on the children is created.
 As noted above, since various kinds of information are shared among information devices conforming to the DLNA guidelines, family profiles can likewise be shared without exception. The meta information can therefore be updated, and the family profiles otherwise put to good use, without collecting the profiles scattered across the information devices into the home server 100.
 The home server 100 transmits the family profiles of the whole family or of each member, collected, added to, and updated in this way, to the information providing server 200 (see FIG. 1) on the external network 2 via the gateway server 10. Based on the received family profiles, the information providing server 200 sends advertising information, video content, and the like suited to the preferences of the family or of each member to the home server 100. The advertising information is displayed, for example, on-screen over the integrated video content.
 Like the feature point learning program 153, the scenario learning program 154 edits scenarios and updates the scenario feature parameters, scene feature parameters, and the like based on the playback history information. Consider, for example, a case in which integrated video content created using scenario 1582 of FIG. 6(b) has been played on the TV 22, and the playback history information shows that scenes 4, 8, 12, 16, and 20 were repeated while the other scenes were skipped. In this case, the videos showing the whole family appear to be what the user wanted to watch. The scenario learning program 154 therefore edits scenario 1582 so as, for example, to lengthen the clipping times of scenes 4, 8, 12, 16, and 20 and shorten those of the other scenes. It may also, for example, raise the weighting values of the scene feature parameters of scenes 4, 8, 12, 16, and 20 and lower those of the other scenes. Thereafter, when the user enters content creation instruction information so as to watch outdoor video from a spring weekend, integrated video content showing the whole family is more likely to be created. The scenario learning program 154 may also edit the scenario so that scenes with higher repeat counts are given longer clipping times; the clipping time is increased or decreased according to the repeat count, for example linearly, exponentially, or logarithmically.
 The scenario learning program 154 can also vary the weighting value of each scenario feature parameter based on, for example, the number of times a scenario has been selected in the processing of S11 of FIG. 5 and its playback dates and times. For example, if a frequently selected scenario is regarded as a good one, the weighting values of its scenario feature parameters are varied so that the scenario is selected even more often.
 In this way, the feature point learning program 153 and the scenario learning program 154 feed the various parameters back to appropriate values based on the playback history information and the like. Such feedback processing is performed not only by the feature point learning program 153 and its peers but also by the integrated video content creation program 152. Specifically, the integrated video content creation program 152 updates the threshold used in the scenario selection process of S11 of FIG. 5 based on all the meta information stored in the meta information database 157. That is, the video feature parameters of the meta information, which constitute the feature points of each piece of video content, are clustered into, for example, two clusters by a clustering technique such as the K-means method; the center of each cluster is computed, and the midpoint between the two centers is set as the threshold. In this case, an optimal threshold is set according to the tendencies of the video content stored in the content database 156.
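The threshold update just described can be sketched in one dimension, assuming (as an illustration) that each feature parameter is reduced to a scalar value before clustering; the function name and the extreme-value initialization are choices made here, not specified in the text:

```python
def two_means_threshold(values, iters=50):
    """K-means with k = 2 on scalar feature values: cluster, take each
    cluster's center, and return the midpoint of the two centers as the
    scenario-selection threshold (S11 feedback sketch)."""
    c1, c2 = min(values), max(values)          # initialize at the extremes
    for _ in range(iters):
        g1 = [v for v in values if abs(v - c1) <= abs(v - c2)]
        g2 = [v for v in values if abs(v - c1) > abs(v - c2)]
        n1 = sum(g1) / len(g1) if g1 else c1   # recompute cluster centers
        n2 = sum(g2) / len(g2) if g2 else c2
        if (n1, n2) == (c1, c2):
            break                              # converged
        c1, c2 = n1, n2
    return (c1 + c2) / 2.0
```

With values concentrated in a low group and a high group, the returned threshold falls between them, which matches the stated goal of adapting the threshold to the tendencies of the stored content.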
 The integrated video content creation program 152 and the scenario learning program 154 may also cooperate to perform the following feedback processing. Namely, when the clipping target is, for example, a laughing scene, the integrated video content creation program 152 clips the scene from n seconds before the laughter begins until the laughter occurs, setting this "n seconds" at random for each clipping. The scenario learning program 154, for its part, analyzes the playback history information of the laughing scenes, computes the value n' seconds judged optimal from that analysis, and passes it to the integrated video content creation program 152. Thereafter, for laughing scenes, the integrated video content creation program 152 clips from n' seconds before the laughter begins until the laughter occurs. Through such cooperation between the programs, integrated video content matching the user's sensibilities and preferences comes to be created.
 Consider, as an example, the case where the above n seconds is set within a range of 2 seconds or more and less than 10 seconds. Here, n may be set at random within that range, or it may be set by user operation to a time that reflects the user's intent to some degree. As a concrete example of the latter, the probabilities can be set so that a first interval (for example, 2 seconds or more and less than 3 seconds) is chosen as n with probability 30%, a second interval (for example, 3 seconds or more and less than 5 seconds) with probability 40%, and a third interval (for example, 5 seconds or more and less than 10 seconds) with probability 30%; within each interval, n is set at random. Allowing such user settings makes it possible, for example, to align the time span immediately before an event (here, laughter) with the playback timing of specific content (for example, a family photograph). The clipping time and span in this case may likewise be learned by the various learning programs, and a clipping time and span even better suited to the user may be set based on the learning results.
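The two-stage draw described above, interval first, then a uniform value within it, can be sketched directly; the 30/40/30% figures and interval bounds are the example values from the text, while the function name is an assumption:

```python
import random

def pick_lead_in_seconds(rng=random):
    """Draw the pre-laughter clipping lead-in n (2 <= n < 10 seconds):
    first choose an interval with the example probabilities 30/40/30%,
    then choose n uniformly at random inside the chosen interval."""
    intervals = [((2.0, 3.0), 0.30),
                 ((3.0, 5.0), 0.40),
                 ((5.0, 10.0), 0.30)]
    r = rng.random()
    cum = 0.0
    for (lo, hi), p in intervals:
        cum += p
        if r < cum:
            return rng.uniform(lo, hi)
    lo, hi = intervals[-1][0]   # guard against floating-point rounding
    return rng.uniform(lo, hi)
```

Replacing the probability table with learned values would give one way to realize the further refinement mentioned above, where the learning programs tune the clipping time and span to the user.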
 According to the embodiment described so far (hereinafter, the "first embodiment"), the home server 100 automatically collects the video content stored in information devices such as the HDD recorder 21 and then creates integrated video content. Next, another embodiment (hereinafter, the "second embodiment") will be described in which the home server has no such automatic video content collection function, or does not perform automatic collection. In the second embodiment, to gather the video content of the information devices onto the home server, the user would have to check what video content each device holds and then copy or move it, so the second embodiment might appear to increase the user's operational and work burden. In the second embodiment, however, the home server and the information devices perform a cooperation process different from that of the first embodiment, so that integrated video content can be automatically edited and created without imposing any operational or work burden on the user.
 The network configuration of the second embodiment is the same as in FIG. 1. In the second embodiment, components identical or similar to those of the first embodiment are given identical or similar reference numerals and their description is omitted.
 In the second embodiment, each information device such as the HDD recorder 21 has a meta information database 157 and, in place of the home server, attaches meta information to the video content and stores it in that database. That is, when recording video content, each information device simultaneously performs the same processing as S3 to S5 of FIG. 3: feature point analysis of the video content (S3), generation of meta information based on the analyzed feature points (S4), and storage of the meta information in the meta information database 157 (S5). Since the video content is not collected onto the home server even after S3 to S5, it remains scattered across the network.
 FIG. 8 is a block diagram showing the configuration of the home server 100z installed in the home LAN 1 of the second embodiment. The HDD 150 of the home server 100z stores a meta information collection program 151z, an integrated video content creation program 152z, the feature point learning program 153, the scenario learning program 154, the family profile learning program 155, a meta information database 157z, the scenario database 158, and the family profile database 159.
 FIG. 9 is a flowchart of the meta information collection processing executed by the meta information collection program 151z. As shown in FIG. 9, after becoming resident in the RAM 140, the meta information collection program 151z periodically accesses the meta information database 157 of each information device (S31). Next, it exchanges information with each information device to detect whether meta information has been added to or updated in any meta information database 157 (S32). If no information device's meta information database 157 contains added or updated meta information (S32: NO), the meta information collection processing ends. If any of them does (S32: YES), the program collects that meta information (S33) and accumulates the collected meta information of each information device in the meta information database 157z (S34).
When accumulating meta information, the meta information collection program 151z adds to the meta information content identification information for identifying the video content (for example, the video content name or an ID uniquely assigned to the video content) and location information of the video content (for example, the MAC address or other unique ID of the information device, the URL (Uniform Resource Locator) of the video content, or, when the video content is recorded on removable media, the unique ID of the removable media).
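A minimal sketch of the collection flow of S31 to S34 might look as follows. All class names, field names, and the `dlna://` location scheme are hypothetical illustrations; the patent does not specify an implementation.

```python
# Hypothetical sketch of the meta information collection processing (S31-S34).
from dataclasses import dataclass, field

@dataclass
class MetaInfo:
    feature_points: dict          # feature point evaluations generated at S4
    content_id: str = ""          # content identification information
    location: str = ""            # location information of the video content

@dataclass
class Device:
    device_id: str
    meta_db: dict = field(default_factory=dict)   # content_id -> MetaInfo (database 157)
    updated: set = field(default_factory=set)     # IDs added/updated since the last poll

def collect_meta_info(devices, server_db):
    """Poll each device (S31), detect updates (S32), collect (S33), accumulate (S34)."""
    for dev in devices:
        for cid in dev.updated:                   # S32: only added/updated entries
            meta = dev.meta_db[cid]
            # Augment with content identification and location information
            # before storing in the server-side database 157z.
            meta.content_id = cid
            meta.location = f"dlna://{dev.device_id}/{cid}"
            server_db[cid] = meta                 # S34: accumulate in database 157z
        dev.updated.clear()
    return server_db
```

The key point illustrated is that only the compact meta information crosses the network at this stage; the video content itself stays on each device.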
 As described above, in the second embodiment, the meta information of the video content held by each information device is centrally managed by the home server 100z. Each information device can create integrated video content by cooperating with the home server 100z and the other information devices using the centrally managed meta information. FIG. 10 is a sequence diagram showing the processing for creating integrated video content. When one information device (here, the notebook PC 23) is operated and content creation instruction information is input, the processing for creating integrated video content shown in FIG. 10 starts (S41).
 The notebook PC 23 transmits the input content creation instruction information to the home server 100z (S42). When the home server 100z receives the content creation instruction information, the CPU 120 starts executing the integrated video content creation program 152z. The integrated video content creation program 152z performs the same processing as S11 to S12 in FIG. 5: selecting a scenario suited to the content creation instruction information, accessing the meta information database 157z in which the meta information is centrally managed, and searching for meta information suited to each scene of the selected scenario (S43). It then returns a response message, that is, the retrieved meta information together with the selected scenario, to the notebook PC 23 (S44).
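The scene-matching search of S43 can be illustrated with a small sketch. The dictionary layout of the meta information and the best-score selection rule are assumptions for illustration, not taken from the patent:

```python
# Hypothetical sketch of S43: for each scene of the selected scenario,
# search the centrally managed meta information database for the entry
# whose feature point evaluation best suits that scene.
def select_content_for_scenario(scenario_scenes, meta_db):
    """scenario_scenes: list of required feature point names, in scenario order.
    meta_db: content_id -> {"features": {name: score}, "location": ..., "content_id": ...}.
    Returns one meta information entry per scene, preserving scenario order."""
    results = []
    for feature in scenario_scenes:
        candidates = [m for m in meta_db.values() if feature in m["features"]]
        if candidates:
            # Pick the content whose evaluation for this feature point is highest.
            results.append(max(candidates, key=lambda m: m["features"][feature]))
    return results
```

Only the selected meta information (with its location information) is returned to the requesting device; the server never touches the video content itself.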
 The notebook PC 23 determines the access destinations by referring to the location information contained in the received meta information (S45). It then transmits a request message containing the content identification information or URL to each information device that holds the video content corresponding to the meta information, that is, to the determined access destinations (S46). Depending on the search result of the home server 100z, the access destinations may include the URL of video content held by the notebook PC 23 itself.
 Each information device that receives a request message from the notebook PC 23 returns a response corresponding to the content identification information or URL in the request message: it searches the video content it holds for the video content specified in the request message (S47) and returns it to the notebook PC 23 (S48). In this way, the video content of each scene required for the integrated video content is gathered on the notebook PC 23. Using the collected video content and the selected scenario received from the home server 100z, the notebook PC 23 performs the same processing as S14 to S15 in FIG. 5 to create the integrated video content (S49). The created integrated video content is decoded and played back by the video codec of the notebook PC 23 (S50).
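The device-side steps S45 to S49 can be sketched as follows. The `fetch` callback stands in for the request/response exchange of S46 to S48, and the byte-concatenation "editing" is a deliberately minimal stand-in for the real splicing of S49; both are illustrative assumptions:

```python
# Hypothetical sketch of S45-S49 on the requesting device (the notebook PC 23).
def build_integrated_content(selected_meta, fetch):
    """selected_meta: meta information entries returned by the server, in scenario order,
    each carrying location information and content identification information.
    fetch(location, content_id): request/response exchange with the holding device (S46-S48)."""
    clips = []
    for meta in selected_meta:             # S45: location info determines the access destination
        clip = fetch(meta["location"], meta["content_id"])
        clips.append(clip)
    # S49: splice the collected clips in the order determined by the scenario
    # (real editing would involve transitions, re-encoding, etc.).
    return b"".join(clips)
```

Because only the clips actually needed for the selected scenario traverse the home LAN, this mirrors the traffic advantage the embodiment claims.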
 As described above, in the second embodiment, integrated video content can be automatically edited and created without gathering the video content held by each information device on the home server 100z, that is, in a single storage device. The user therefore bears no operational or work burden associated with collecting video content. The second embodiment is also advantageous in terms of traffic on the home LAN 1. For example, there is no concern that traffic will increase sharply and cause a network failure when the information devices simultaneously transfer video content containing large amounts of data. Moreover, the video content transmitted over the network transmission path of the home LAN 1 is limited to the minimum required to create the integrated video content. This makes it easier to allocate the network transmission path to data other than video content, and a wider variety of information can be expected to be exchanged between the information devices.
 In the second embodiment as well, meta information updating, scenario editing, family profile updating, and the like may be performed by the various learning programs, as in the first embodiment.
 The above is a description of the embodiments of the present invention. The present invention is not limited to the above configurations, and various modifications are possible within the scope of the technical idea of the present invention. For example, the resident programs according to the features of the present invention may be distributed among the information devices in the home LAN 1, or all the information devices may have the resident programs. The various databases may likewise be distributed among the information devices in the home LAN 1. Alternatively, the home server 100 itself may record, store, and play back video content; with the home server 100 functioning as both a DMS and a DMP, the present invention can thus be realized by the home server 100 alone.
 The content to be processed is not limited to video content and includes content of any format covered by the definition of content given above. In this case, for example, by accessing a content list or the like centrally managed on the server side, only the content requested by the user can be collected from the various contents scattered across the network and processed, and new content mixing content of multiple formats can be created.
 The timing at which playback history information (and, more generally, operation information of the information devices) is collected is not limited to a periodic timing. For example, each information device can access the home server 100 at the same time as it generates playback history information and transfer that information to the home server 100 in real time. The device that collects playback history information is also not limited to the home server 100. For example, any information device in the home LAN 1 may collect the playback history information held by each information device periodically or in real time and transfer it to the home server 100.
 In the first embodiment, as in the second embodiment, each information device may generate meta information at the same time as it records video content. In this case, the home server 100 collects the meta information together with the video content.

Claims (16)

  1.  A content management method for managing the content of information devices, executed by a server connected via a home network to a plurality of information devices each storing content evaluated with respect to at least one feature point, the method comprising:
     a feature point evaluation collecting step of collecting the feature point evaluations of the contents from the plurality of information devices;
     a feature point evaluation storing step of storing the collected feature point evaluations;
     a content selection step of, when content is requested by an information device, searching the stored feature point evaluations and selecting content having a feature point evaluation suited to the request; and
     a content location information notifying step of notifying the information device of location information of the selected content.
  2.  The content management method according to claim 1, further comprising:
     a scenario storing step of storing a plurality of scenarios each associated with at least one feature point evaluation; and
     a scenario selection step of, when the request is made, selecting from the plurality of scenarios a scenario having a feature point evaluation suited to the request,
     wherein, in the content selection step, content is selected based on the selected scenario.
  3.  The content management method according to claim 2, wherein, when the scenario is composed of a plurality of scenes each associated with a different feature point evaluation, the content of each scene is selected in the content selection step based on the feature point evaluation of each scene of the selected scenario.
  4.  A content automatic editing method executed by an information device connected via a home network to other information devices each storing content evaluated with respect to at least one feature point and to a server that manages the content stored in each information device in association with the feature points, the method comprising:
     a content request step of requesting content from the server;
     a content collection step of, when location information of content having a feature point evaluation suited to the request is notified by the server, collecting the content from each information device based on the location information; and
     a content editing step of editing the collected content.
  5.  The content automatic editing method according to claim 4, further comprising a content reproduction step of reproducing the edited content.
  6.  The content automatic editing method according to claim 4 or claim 5, further comprising a content evaluation step of evaluating the content with respect to the at least one feature point when the content is stored.
  7.  A content automatic editing method for automatically editing content through cooperative processing between a plurality of information devices each storing content evaluated with respect to at least one feature point and a server connected to the plurality of information devices via a home network, the method comprising:
     steps executed by the server, namely:
      a feature point evaluation collecting step of collecting the feature point evaluations of the contents from the plurality of information devices;
      a feature point evaluation storing step of storing the collected feature point evaluations;
      a content selection step of, when content is requested by an information device, searching the stored feature point evaluations and selecting content having a feature point evaluation suited to the request; and
      a content location information notifying step of notifying the information device of location information of the selected content;
     and steps executed by the information device, namely:
      a content request step of requesting content from the server;
      a content collection step of, when the location information of the content having the feature point evaluation suited to the request is notified by the server, collecting the content from each information device based on the location information; and
      a content editing step of editing the collected content.
  8.  A content management program or content automatic editing program for causing a computer to execute the steps of the method according to any one of claims 1 to 6.
  9.  A server connected via a home network to a plurality of information devices each storing content evaluated with respect to at least one feature point, the server comprising:
     feature point evaluation collecting means for collecting the feature point evaluations of the contents from the plurality of information devices;
     feature point evaluation storage means for storing the collected feature point evaluations;
     content selection means for, when content is requested by an information device, searching the feature point evaluation storage means and selecting content having a feature point evaluation suited to the request; and
     content location information notifying means for notifying the information device of location information of the selected content.
  10.  The server according to claim 9, further comprising:
      scenario storage means for storing a plurality of scenarios each associated with at least one feature point evaluation; and
      scenario selection means for, when the request is made, selecting from the plurality of scenarios a scenario having a feature point evaluation suited to the request,
      wherein the content selection means selects content based on the selected scenario.
  11.  The server according to claim 10, wherein, when the scenario is composed of a plurality of scenes each associated with a different feature point evaluation, the content selection means selects the content of each scene based on the feature point evaluation of each scene of the selected scenario.
  12.  An information device connected via a home network to other information devices each storing content evaluated with respect to at least one feature point and to a server that manages the content stored in each information device in association with the feature points, the information device comprising:
      content request means for requesting content from the server;
      content collection means for, when location information of content having a feature point evaluation suited to the request is notified by the server, collecting the content from each information device based on the location information; and
      content editing means for editing the collected content.
  13.  The information device according to claim 12, further comprising content reproduction means for reproducing the edited content.
  14.  The information device according to claim 12 or claim 13, further comprising:
      content storage means for storing content; and
      content evaluation means for evaluating the content with respect to the at least one feature point when the content is stored in the content storage means.
  15.  The server or information device according to any one of claims 9 to 14, wherein the content is video content.
  16.  A content automatic editing system comprising a plurality of information devices according to any one of claims 12 to 14 and a server according to claim 11 connected to the plurality of information devices via a home network,
      wherein the content editing means performs editing processing so that the content selected for each scene is reproduced in the order determined by the scenario.
PCT/JP2009/059706 2008-05-30 2009-05-27 Content management method, automatic content editing method, content management program, automatic content editing program, server, information device, and automatic content editing system WO2009145226A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2008-142223 2008-05-30
JP2008142223A JP2009290651A (en) 2008-05-30 2008-05-30 Content managing method, automatic content editing method, content management program, automatic content editing program, server, information device, and automatic content editing system

Publications (1)

Publication Number Publication Date
WO2009145226A1 true WO2009145226A1 (en) 2009-12-03

Family

ID=41377097

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2009/059706 WO2009145226A1 (en) 2008-05-30 2009-05-27 Content management method, automatic content editing method, content management program, automatic content editing program, server, information device, and automatic content editing system

Country Status (2)

Country Link
JP (1) JP2009290651A (en)
WO (1) WO2009145226A1 (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8275375B2 (en) * 2010-03-25 2012-09-25 Jong Hyup Lee Data integration for wireless network systems
EP2557782B1 (en) 2010-04-09 2019-07-17 Cyber Ai Entertainment Inc. Server system for real-time moving image collection, recognition, classification, processing, and delivery
JP5664120B2 (en) * 2010-10-25 2015-02-04 ソニー株式会社 Editing device, editing method, program, and recording medium
WO2022014294A1 (en) * 2020-07-15 2022-01-20 ソニーグループ株式会社 Information processing device, information processing method, and program

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2007105265A1 (en) * 2006-03-10 2007-09-20 Fujitsu Limited Reproduction device, reproduction device control method, program, and computer-readable recording medium
WO2007139105A1 (en) * 2006-05-31 2007-12-06 Pioneer Corporation Broadcast receiver, digest creating method, and digest creating program


Also Published As

Publication number Publication date
JP2009290651A (en) 2009-12-10


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 09754734

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 09754734

Country of ref document: EP

Kind code of ref document: A1