WO2014162757A1 - Information processing apparatus, tagging method and program - Google Patents

Information processing apparatus, tagging method and program Download PDF

Info

Publication number
WO2014162757A1
WO2014162757A1 · PCT/JP2014/050829 · JP2014050829W
Authority
WO
WIPO (PCT)
Prior art keywords
content
position information
information
processing apparatus
control unit
Prior art date
Application number
PCT/JP2014/050829
Other languages
French (fr)
Japanese (ja)
Inventor
淳己 大村
淳也 小野
誠司 鈴木
健太郎 木村
Original Assignee
ソニー株式会社 (Sony Corporation)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by ソニー株式会社 (Sony Corporation)
Publication of WO2014162757A1 publication Critical patent/WO2014162757A1/en

Links

Images

Classifications

    • G PHYSICS
    • G11 INFORMATION STORAGE
    • G11B INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B27/00 Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
    • G11B27/10 Indexing; Addressing; Timing or synchronising; Measuring tape travel
    • G11B27/34 Indicating arrangements
    • G PHYSICS
    • G11 INFORMATION STORAGE
    • G11B INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B27/00 Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
    • G11B27/02 Editing, e.g. varying the order of information signals recorded on, or reproduced from, record carriers
    • G11B27/031 Electronic editing of digitised analogue information signals, e.g. audio or video signals
    • G11B27/034 Electronic editing of digitised analogue information signals, e.g. audio or video signals on discs
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/432 Content retrieval operation from a local storage medium, e.g. hard-disk
    • H04N21/4325 Content retrieval operation from a local storage medium, e.g. hard-disk, by playing back content from the storage medium
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/433 Content storage operation, e.g. storage operation in response to a pause request, caching operations
    • H04N21/4334 Recording operations
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/45 Management operations performed by the client for facilitating the reception of or the interaction with the content or administrating data related to the end-user or to the client device itself, e.g. learning user preferences for recommending movies, resolving scheduling conflicts
    • H04N21/4508 Management of client data or end-user data
    • H04N21/4524 Management of client data or end-user data involving the geographical location of the client
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80 Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/83 Generation or processing of protective or descriptive data associated with content; Content structuring
    • H04N21/84 Generation or processing of descriptive data, e.g. content descriptors
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80 Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/83 Generation or processing of protective or descriptive data associated with content; Content structuring
    • H04N21/845 Structuring of content, e.g. decomposing content into time segments
    • H04N21/8456 Structuring of content, e.g. decomposing content into time segments, by decomposing the content in the time domain, e.g. in time segments
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N9/00 Details of colour television systems
    • H04N9/79 Processing of colour television signals in connection with recording
    • H04N9/80 Transformation of the television signal for recording, e.g. modulation, frequency changing; Inverse transformation for playback
    • H04N9/82 Transformation of the television signal for recording, the individual colour picture signal components being recorded simultaneously only
    • H04N9/8205 Transformation of the television signal for recording, the individual colour picture signal components being recorded simultaneously only, involving the multiplexing of an additional signal and the colour video signal

Definitions

  • the present disclosure relates to an information processing apparatus, a tagging method, and a program.
  • Patent Document 1 discloses a technique in which metadata, including information such as the event occurrence time, event type, and content of each scene included in a moving image, is acquired, and the metadata is assigned to the corresponding scene in the moving image as an event time tag.
  • Patent Document 2 discloses a technique in which a plurality of comments posted about a moving image via a network such as the Internet are collected, a scene of interest is extracted from the moving image based on the number of posted comments, and a keyword contained in the comments is tagged to the scene of interest.
  • the present disclosure proposes a new and improved information processing apparatus, tagging method, and program capable of tagging content with a higher degree of freedom.
  • According to the present disclosure, there is provided an information processing apparatus including a position information acquisition unit that acquires position information of an operating tool associated with an elapsed time during content reproduction, and a tagging unit that tags the content by adding the position information to the content as tag information.
  • According to the present disclosure, there is also provided a tagging method in which position information of the operating tool associated with the elapsed time during content playback is acquired, and the content is tagged by adding the position information to the content as tag information.
  • According to the present disclosure, the position information acquisition unit acquires position information of the operating tool associated with the elapsed time during content reproduction, and the tagging unit tags the content by assigning the position information to it as tag information. Since the user can tag the content simply by moving the operating tool, tagging that reflects the user's preferences is realized with a simpler operation.
  • FIG. 1 is a functional block diagram illustrating a schematic configuration of an information processing apparatus according to an embodiment of the present disclosure. FIG. 2 is an explanatory diagram for describing an example of the tagging process according to the embodiment. FIG. 3 is an explanatory diagram for describing a modification of the tagging process in which the playback speed of the content is controlled.
  • FIG. 10 is an explanatory diagram for explaining the association between the position information of the operating tool in the X-axis direction and the content playback speed in the modification of the tagging process shown in FIG. 3.
  • Another figure is an explanatory diagram for describing a modification of the tagging process in which fast-forward or rewind of the content is controlled.
  • In the tagging process according to the present embodiment, position information of the operating tool associated with the elapsed time during content playback is acquired by detecting the position of the operating tool while controlling the playback state of the content. The content is then tagged by assigning the position information to it as tag information.
  • In the following, this series of processes, in which tag information is acquired and assigned to content, is referred to as the tagging process.
  • FIG. 1 is a functional block diagram illustrating a schematic configuration of an information processing apparatus according to an embodiment of the present disclosure.
  • the information processing apparatus 10 includes an input unit 110, a display unit 120, a storage unit 130, and a control unit 140.
  • the input unit 110 is an input interface for allowing a user to input information and commands related to various processing operations to the information processing apparatus 10.
  • the input unit 110 has a function of detecting the position of the operating tool and inputting the position information to the information processing apparatus 10.
  • The input unit 110 includes a sensor device for detecting the position of the operating tool. The user can input the position information of the operating tool to the information processing apparatus 10 by moving the operating tool within the detection range of the sensor device of the input unit 110.
  • The sensor device included in the input unit 110 may be a device that detects the position of the operating tool on a plane, such as a touch pad, in which case position information on a two-dimensional plane may be input as the position information of the operating tool.
  • Alternatively, the sensor device included in the input unit 110 may be a device that detects the position of the operating tool in a space, such as a stereo camera or an infrared camera, in which case position information in a three-dimensional space may be input as the position information of the operating tool.
  • the display unit 120 is an output interface that visually displays various types of information processed in the information processing apparatus 10 and processed results on a display screen.
  • The display unit 120 displays various types of content (for example, moving images and still images) on the display screen under the control of the control unit 140. Further, the display unit 120 may display the locus of the position information of the operating tool input from the input unit 110 on the display screen.
  • the storage unit 130 is an example of a storage medium for storing various types of information processed by the information processing apparatus 10 and processed results.
  • the storage unit 130 stores content data processed by the information processing apparatus 10.
  • the storage unit 130 stores content data to which tag information is added, which is generated as a result of the tagging process performed by the control unit 140.
  • FIG. 2 is an explanatory diagram for explaining an example of a tagging process according to the present embodiment.
  • the input unit 110 includes a sensor device that detects the position of the operating body on a plane such as a touch pad, and is integrated with the display screen 210 of the display unit 120. That is, the input unit 110 and the display unit 120 constitute a so-called touch panel.
  • FIG. 2 illustrates a case where the operation body is a user's finger as an example of the operation body.
  • one scene of the moving image (image data included in the moving image data) is displayed on the display screen 210 as an example of the content.
  • an indicator 220 indicating the elapsed time (reproduction position) during reproduction of the moving image is displayed on the display screen 210 at the same time.
  • the horizontal direction is referred to as the X-axis direction and the vertical direction is referred to as the Y-axis direction based on the image displayed on the display screen 210.
  • a point 240 representing the contact point is displayed on the display screen 210.
  • the two-dimensional position information of the point 240 on the display screen 210 is input to the information processing apparatus 10.
  • a locus 250 of position information may be displayed on the display screen 210 as shown in FIG.
  • the position information of the operation tool is acquired as the coordinate value (X, Y) on the display screen 210.
  • Among the acquired position information of the operating tool, the position information in a first direction is used as tag information, and the position information in a second direction different from the first direction is used for content playback state control. In the present embodiment, the first direction may be the Y-axis direction, that is, the vertical direction of the display screen, and the second direction may be the X-axis direction, that is, the horizontal direction of the display screen.
  • the playback position of the content or the playback speed of the content may be controlled according to the position information in the X-axis direction among the position information of the operating tool.
  • In this way, while the playback state is controlled by the position information in one direction, for example the X-axis direction, the position information in the other direction, for example the Y-axis direction, is acquired. The position information in the Y-axis direction is therefore acquired as the position information of the operating tool associated with the elapsed time during content reproduction and is used as tag information.
  • In the following, the present embodiment will be described assuming that the input unit 110 and the display unit 120 constitute a touch panel, the operating tool is a finger, and the content data is moving image data.
  • this embodiment is not limited to this example.
  • the input unit 110 may have any configuration as long as the position of the operating body can be detected.
  • the operating body may be a mouse pointer operated by a mouse. The position of the mouse pointer on the display screen of the display unit 120 may be detected.
  • the input unit 110 may include a sensor device that detects the position of the operating tool in space, and the position of the user's hand may be detected as the operating tool.
  • The content data is not limited to moving image data; any content data may be used, for example music data or slide-show data in which still images are displayed in succession at predetermined intervals.
  • the control unit 140 controls the information processing apparatus 10 in an integrated manner, and performs various types of information processing in the tagging process according to the present embodiment.
  • the function and configuration of the control unit 140 will be described in more detail.
  • the control unit 140 includes a position information acquisition unit 141, a playback state control unit 142, a display control unit 143, and a tagging unit 144.
  • the position information acquisition unit 141 acquires the position information of the operating tool that is detected by the input unit 110 and is associated with the elapsed time during content playback.
  • The position information acquired by the position information acquisition unit 141 is the position information of the operating tool on the display screen of the display unit 120, and may be acquired as, for example, two-dimensional coordinates on the display screen.
  • the position information acquisition unit 141 may acquire the coordinate value (X, Y) corresponding to the point 240 that is a contact point of the operating tool with respect to the display screen 210 as the position information.
  • the position information acquisition unit 141 transmits the acquired position information of the operating tool to the reproduction state control unit 142, the display control unit 143, and the tagging unit 144.
  • the playback state control unit 142 controls the playback state of content in the information processing apparatus 10.
  • Here, content playback state control means control of the various operations related to content playback, such as (normal) playback, stop, pause, fast-forward, rewind, high-speed playback, slow playback, and repeat playback.
  • the content playback state control includes control for playing back content from an arbitrary playback position, control for extracting and playing back a part of the content, and the like.
  • Of the position information of the operating tool acquired by the position information acquisition unit 141, the position information in the first direction is used as tag information, and the position information in the second direction, different from the first direction, is used for content playback state control.
  • the playback state control unit 142 may control the playback state of the content according to the position information of the operating tool in the X-axis direction of the display screen 210.
  • For example, the playback state control unit 142 associates the position information of the operating tool in the X-axis direction with the elapsed time during content playback, and can play back the content at the playback position corresponding to the position of the operating tool in the X-axis direction. That is, the X-axis coordinate value on the display screen 210 corresponds to the elapsed time during content reproduction, and the playback position of the content is sought as the position of the operating tool in the X-axis direction on the display screen 210 changes. The two can be associated such that time in the content elapses from the left to the right of the display screen 210, that is, as the X-axis coordinate value increases. Such an association makes seeking the playback position better match the user's intuition when moving the operating tool.
  • At this time, the indicator 220 indicating the content playback position may also change according to the position information of the operating tool in the X-axis direction.
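The left-to-right association between the X coordinate and the elapsed playback time described above can be sketched as a simple linear mapping. This is an illustrative Python sketch, not code from the patent; the function name and its parameters (screen width, content duration) are assumptions.

```python
def x_to_playback_time(x: float, screen_width: float, duration_s: float) -> float:
    """Map an X coordinate on the display screen to an elapsed time in the
    content. Time elapses left to right, so x == 0 maps to 0 seconds and
    x == screen_width maps to the full duration (hypothetical names)."""
    # Clamp so touches slightly off-screen still yield a valid position.
    x = min(max(x, 0.0), screen_width)
    return duration_s * (x / screen_width)
```

For example, with a 1080-pixel-wide screen and a 10-minute (600 s) movie, the screen midpoint seeks to the 5-minute mark.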
  • the playback state control unit 142 controls the playback position of the content according to the position information of the operating tool in the X-axis direction has been described, but the present embodiment is not limited to such an example.
  • the playback state control unit 142 may perform other playback control on the content in accordance with the position information of the operating tool in the X-axis direction.
  • Further, the playback state control unit 142 may edit the content after the tagging process based on the tag information assigned to the content, and may control playback of the edited content.
  • Content editing processing using such tag information will be described in detail below in <3. Specific example of content editing using tag information>.
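As a hedged illustration of the kind of editing such tag information enables, the sketch below extracts the time ranges whose tag value (for example, a preference level) stays above a threshold. Every name here is hypothetical; the patent does not specify a concrete API.

```python
def extract_highlight_ranges(tags, threshold, min_len_s=0.0):
    """Given (time_s, value) tag samples sorted by time, return the
    (start, end) time ranges whose value stays at or above `threshold`.
    Ranges shorter than `min_len_s` are dropped (illustrative API)."""
    ranges = []
    start = None   # start time of the current above-threshold run
    prev_t = None  # last above-threshold sample time seen
    for t, value in tags:
        if value >= threshold:
            if start is None:
                start = t
            prev_t = t
        elif start is not None:
            # The run just ended; keep it if it is long enough.
            if prev_t - start >= min_len_s:
                ranges.append((start, prev_t))
            start = None
    if start is not None and prev_t - start >= min_len_s:
        ranges.append((start, prev_t))
    return ranges
```

An edited "digest" of the content could then be produced by playing back only the returned ranges.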
  • the playback state control unit 142 transmits information related to the playback control of content performed by the playback state control unit 142 to the display control unit 143.
  • The display control unit 143 controls the driving of the display unit 120 and visually displays the various types of information processed in the information processing apparatus 10 on the display screen of the display unit 120 in various formats such as text, tables, graphs, and images.
  • the display control unit 143 displays the content content on the display screen of the display unit 120.
  • the display control unit 143 displays an image included in the moving image that is the content on the display screen in accordance with the content reproduction state control by the reproduction state control unit 142.
  • The display control unit 143 displays a point corresponding to the position information of the operating tool acquired by the position information acquisition unit 141 on the display screen of the display unit 120. For example, in the example illustrated in FIG. 2, the display control unit 143 displays the point 240 on the display screen 210 at the position corresponding to the position information of the operating tool. Further, as shown in FIG. 2, the display control unit 143 may display the locus 250 of the position information of the operating tool on the display screen 210.
  • the tagging unit 144 tags the content by adding the location information acquired by the location information acquisition unit 141 to the content as tag information.
  • As described above, the tagging unit 144 uses, as tag information, the position information of the operating tool in the first direction among the position information acquired by the position information acquisition unit 141.
  • the tagging unit 144 uses position information of the operating tool in the Y-axis direction as tag information. More specifically, the tagging unit 144 may digitize the position information of the operating tool in the Y-axis direction and use the numerical value (for example, the coordinate value of the Y-axis) as tag information.
  • the position information of the operating tool is, for example, coordinate values (X, Y) on the display screen 210.
  • In the present embodiment, the X-axis coordinate value of the position information of the operating tool corresponds to, for example, the playback position of the content, that is, to the elapsed time during playback of the content. Accordingly, the position information acquired by the position information acquisition unit 141 can be said to have a value (the Y-axis coordinate value) associated with the elapsed time during reproduction of the content, and the tagging unit 144 can therefore tag the content by using this position information as tag information.
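The relationship described above, with the X coordinate mapped to elapsed time and the Y coordinate used as the tag value, can be illustrated as follows. The conversion and the normalization of Y (screen coordinates usually grow downward, so "up" becomes a higher value) are assumptions made for this sketch, not the patent's concrete data format.

```python
def to_tag_info(samples, screen_width, screen_height, duration_s):
    """Convert raw (x, y) touch samples into (elapsed_time_s, tag_value)
    pairs. X is mapped linearly to elapsed time; Y is normalised to
    [0, 1] with the top of the screen as 1.0 (hypothetical layout)."""
    tags = []
    for x, y in samples:
        t = duration_s * (x / screen_width)      # X -> playback position
        value = 1.0 - (y / screen_height)        # invert: up == higher value
        tags.append((t, value))
    return tags
```

The resulting list of (time, value) pairs is one plausible shape for the tag information assigned to the content.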
  • FIG. 2 is an explanatory diagram for explaining an example of a tagging process according to the present embodiment.
  • The tagging process described with reference to FIG. 2 is one example of the tagging process according to the present embodiment, and other tagging processes may also be performed in the present embodiment. Such other tagging processes are described in detail in <2. Modifications of the tagging process>.
  • When performing the tagging process, the user inputs position information by bringing the finger 230 into contact with the display screen 210.
  • the X axis corresponds to the elapsed time (reproduction position) during content reproduction.
  • The value of the Y axis may be, for example, an index indicating the user's "preference level" for the content. For example, while keeping the finger 230 in contact with the display screen 210, the user moves the finger 230 from the left end toward the right end of the display screen 210 to seek the playback position of the content, moving the finger 230 upward in a favorite scene and downward (in the direction in which the Y-axis coordinate value decreases) in a scene that feels less attractive.
  • In this case, the position information acquired by the position information acquisition unit 141 corresponds to the elapsed time during reproduction of the content and thus represents the user's preference level for each scene of the content.
  • the tagging unit 144 can add a tag representing the user's preference level for each scene of the content to the content by adding the position information to the content as the tag information.
  • inputting position information serving as tag information is also referred to as inputting tag information.
  • the display control unit 143 displays an image of the scene corresponding to the position information of the finger 230 in the X-axis direction on the display screen 210.
  • When the input of position information is interrupted, the playback state control unit 142 may control the playback state such that playback of the content is paused at the scene corresponding to the X-axis coordinate value of the position where input was interrupted, and playback of the content is continued when position information is input again by the user.
  • The position information does not necessarily have to be input continuously, and its input may be interrupted partway through. In this way, the user can input position information while pausing playback of the moving image as necessary and referring to a thumbnail of the moving image, so that a preference level that better reflects the user's intention can be input for each scene of the moving image.
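The pause-and-resume behaviour described above can be sketched as a small state machine. This is an illustrative approximation with hypothetical names; the real playback state control unit 142 is not specified at this level of detail in the text.

```python
class PlaybackStateController:
    """Minimal sketch: pause when tag input is interrupted, seek and
    resume when input restarts (illustrative, not the patent's design)."""

    def __init__(self):
        self.paused = False
        self.position_s = 0.0

    def on_input(self, x, screen_width, duration_s):
        # Seek to the scene matching the X coordinate and keep playing.
        self.position_s = duration_s * (x / screen_width)
        self.paused = False

    def on_input_interrupted(self):
        # Pause at the current scene so the user can inspect a thumbnail.
        self.paused = True
```

Lifting the finger would trigger `on_input_interrupted`, and touching the screen again would trigger `on_input` to continue tagging.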
  • the position information acquisition unit 141 acquires the position information of the operating tool for a portion corresponding to an arbitrary time range in the content, and the tagging unit 144 corresponds to the time range in which the position information is acquired. Tag information may be added to the content for the portion.
  • input of position information may be started from an arbitrary point on the X axis, and input of position information may be ended at an arbitrary point.
  • Further, the display screen 210 may be divided into a tag information input area and a playback position seek area. In this case, the playback state control unit 142 and the tagging unit 144 may use only the position information acquired in the tag information input area as tag information, while the position information acquired in the playback position seek area is used not as tag information but to seek the playback position of the content.
  • For example, the playback position seek area may be the area where the indicator 220 is displayed, and the tag information input area may be the area above the area where the indicator 220 is displayed.
  • In this case, the user seeks the playback position of the content to a desired position by moving the operating tool on the indicator 220, and then inputs tag information by moving the operating tool within the tag information input area.
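Dividing the screen into a seek area and a tag-input area amounts to a simple hit test on the touch position. The sketch below assumes screen coordinates that grow downward and an indicator occupying a horizontal band near the bottom of the screen; all names and the band layout are illustrative.

```python
def classify_touch(y, indicator_top, indicator_bottom):
    """Decide whether a touch belongs to the playback-seek area (on the
    indicator band) or the tag-input area (above it). Y grows downward;
    the band boundaries are hypothetical parameters."""
    if indicator_top <= y <= indicator_bottom:
        return "seek"   # use the touch to seek the playback position
    if y < indicator_top:
        return "tag"    # use the touch as tag information
    return "ignore"     # below the indicator: neither area
```

A touch classified as "seek" would drive the playback state control unit, while a "tag" touch would be forwarded to the tagging unit.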
  • When the X-axis coordinate value is associated with the elapsed time during content playback, the resolution of the X axis differs according to the length of the content playback time. For example, between a moving image with a playback time of 10 minutes and one with a playback time of 100 minutes, the playback time associated with the same distance on the X axis differs by a factor of 10. Therefore, when the content playback time is relatively long, the amount of elapsed time sought per unit movement of the operating tool in the X-axis direction increases, and it can be difficult to input fine position information for each scene.
  • Therefore, the playback state control unit 142 may associate the span from one end to the other end of the display screen 210 in the X-axis direction with a portion corresponding to an arbitrary time range in the content, and play back the content at the playback position corresponding to the position information of the operating tool in the X-axis direction. That is, the time range of the content that is sought while the operating tool moves from the left end to the right end of the display screen 210 may be set arbitrarily. For example, if the content is a moving image with a playback time of 100 minutes and the span from the left end to the right end of the display screen 210 is assigned to 10 minutes, the playback state control unit 142 plays back the content by dividing it into 10 parts.
  • the position information may be input while moving the operating body from the left end to the right end of the display screen 210.
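Assigning the full screen width to an arbitrary time window, as in the 100-minute example above, can be sketched as follows; the function name, window parameters, and clamping behaviour are assumptions for illustration.

```python
def x_to_time_in_window(x, screen_width, window_start_s, window_len_s):
    """Map the full screen width onto an arbitrary time window of the
    content, e.g. one 10-minute slice of a 100-minute movie, so long
    content can be sought at a finer resolution (illustrative names)."""
    x = min(max(x, 0.0), screen_width)  # clamp off-screen touches
    return window_start_s + window_len_s * (x / screen_width)
```

With a window starting at minute 30 and spanning 10 minutes, the screen midpoint seeks to minute 35; stepping `window_start_s` forward by `window_len_s` after each pass covers the whole content in slices.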
  • Further, the tag information may be overwritten. That is, the tagging unit 144 may tag the content based on the latest position information of the operating tool. When the tag information is overwritten, the position information need not be reacquired for the entire time range of the content; the position information acquisition unit 141 may reacquire position information only for the portion corresponding to an arbitrary time range in the content, and the tagging unit 144 may overwrite only the tag information of the portion corresponding to that time range. Accordingly, for example, position information can first be input at a coarse resolution, with the entire time range (total playback time) of the content associated with the span from the left end to the right end of the display screen 210.
  • Then, for a portion of particular interest, the time range corresponding to that portion can be associated with the span from the left end to the right end of the display screen 210, and the position information corresponding to that time range can be input again at a finer resolution. This enables efficient tagging processing.
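The partial-overwrite behaviour described above, replacing only the tags inside the re-acquired time range while keeping the rest, might look like this. The (time, value) pair layout is an assumed representation of tag information, and all names are illustrative.

```python
def overwrite_tags(existing, new, range_start_s, range_end_s):
    """Replace only the tag samples that fall inside the re-acquired
    time range with the newly input ones, keeping everything else.
    Hypothetical sketch of the overwrite behaviour, not the patent's API."""
    # Keep existing tags outside the re-acquired range...
    kept = [t for t in existing if not (range_start_s <= t[0] <= range_end_s)]
    # ...and take only the new tags that fall inside it.
    merged = kept + [t for t in new if range_start_s <= t[0] <= range_end_s]
    merged.sort(key=lambda tag: tag[0])  # restore chronological order
    return merged
```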
  • In the tagging process, the playback state of the content does not necessarily have to be controlled by the position information of the finger 230 in the X-axis direction. For example, the content may be played back at a predetermined speed and displayed on the display screen 210, and the tagging process may be performed by associating the playback position of the content being played back and displayed on the display screen 210 with the input position information of the finger 230 in the Y-axis direction.
  • In this case, the user does not need to pay attention to the position information of the finger 230 in the X-axis direction and can input position information by moving the finger 230 in the Y-axis direction while watching the content played back at normal speed. The user can thus input position information as if attaching tags to favorite scenes while enjoying the content, enabling a tagging process that is more convenient for the user.
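Tagging at normal playback speed, where only the vertical finger position matters, can be sketched by stamping each Y sample with the playback clock at the moment it was read. The fixed sampling period and all names are assumptions for this sketch.

```python
def tag_during_playback(y_samples, screen_height, sample_period_s):
    """Tag content played back at normal speed: each Y sample is stamped
    with the playback clock at the moment it was sampled, so the X
    position of the finger is never consulted (illustrative names)."""
    tags = []
    for i, y in enumerate(y_samples):
        elapsed = i * sample_period_s        # playback clock at this sample
        value = 1.0 - (y / screen_height)    # up == higher preference
        tags.append((elapsed, value))
    return tags
```

Sampling Y twice per second, for instance, yields a preference curve over the whole viewing session without ever pausing the content.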
  • As described above, in the present embodiment, the position information acquisition unit 141 acquires the position information of the operating tool associated with the elapsed time during content reproduction, and the tagging unit 144 attaches that position information to the content as tag information. Tag information can thus be added to the content simply by the user inputting the position of the operating tool, for example by moving a finger on the display screen of a touch panel, which enables tagging with a higher degree of freedom.
  • Of the acquired position information of the operating tool, the position information in the first direction is used as tag information by the tagging unit 144, while the playback state of the content is controlled by the playback state control unit 142 in accordance with the position information in a second direction different from the first direction. The user can therefore input tag information while controlling the playback state of the content, for example while seeking to a desired playback position.
  • Tag information may also be input only for a part of the content, and may be overwritten. Furthermore, the resolution of the position information in the second direction, which is assigned to seeking the playback position of the content, may be changed. The user can thus seek to an arbitrary playback position and then input tag information only for an arbitrary part, or change the resolution and input tag information multiple times, enabling more precise input of tag information.
  • In the tagging process described above, the reproduction state control unit 142 associates the position information of the operating tool in the X-axis direction of the display screen 210 with the elapsed time during content reproduction, and performs reproduction control that plays the content at the reproduction position corresponding to that position information. However, the playback state control unit 142 may perform other playback controls on the content in accordance with the X-axis position information of the operating tool.
  • the playback state control unit 142 may change the playback speed of the content based on the position information of the operating tool in the X-axis direction.
  • the playback state control unit 142 may perform fast forward or rewind of the content based on the position information of the operating tool in the X axis direction.
  • FIG. 3 is an explanatory diagram for describing a tagging process in which the playback speed of content is controlled, which is a modification of the tagging process in the present embodiment.
  • The display screen 210, the indicator 220, the finger 230, the point 240, and the locus 250 in FIG. 3 and in FIG. 5 described later are the same as those shown in FIG. 2.
  • In the example shown in FIG. 3, the position information of the operating tool in the X-axis direction of the display screen 210 is associated with the content playback speed, and the playback state control unit 142 changes the playback speed of the content based on that position information. Specifically, the playback speed is the normal speed at the left end of the display screen 210 in the X-axis direction and increases toward the right end; that is, the reproduction speed increases as the X-axis value increases. The reproduction state control unit 142 can thus reproduce the content at the reproduction speed corresponding to the X-axis position of the point 240.
  • the position information in the Y-axis direction may be an index indicating the “preference level” of the user for the content.
  • The user adjusts the playback speed of the content with the position of the finger 230 in the X-axis direction, and moves the finger 230 upward (in the direction in which the Y-axis coordinate value increases) in a favorite scene and downward (in the direction in which the Y-axis coordinate value decreases) in a scene that is less appealing.
  • By performing this input operation over the entire time range of the content, or over a portion corresponding to an arbitrary time range, the user's preference for each scene corresponding to the elapsed time during reproduction is acquired as position information by the position information acquisition unit 141, and the tagging unit 144 attaches that position information to the content as tag information.
  • In addition, the display control unit 143 displays the content on the display screen 210 at a speed corresponding to the playback speed controlled by the playback state control unit 142.
  • FIGS. 4A to 4C are explanatory diagrams for explaining the association between the position information of the operating tool in the X-axis direction and the content reproduction speed in the modification of the tagging process shown in FIG.
  • the horizontal axis represents the X-axis coordinate value on the display screen 210
  • the vertical axis represents the content playback speed. Therefore, the curves shown in FIGS. 4A to 4C show the relationship between the position information of the operating tool in the X-axis direction and the content playback speed.
  • As shown by curve A in FIG. 4A, the relationship may be proportional, with the content playback speed increasing at a constant rate as the X-axis coordinate value increases. As shown by curve B in FIG. 4B, it may be a downwardly convex curve in which the playback speed increases gradually up to a certain X-axis value and then increases rapidly. As shown by curve C in FIG. 4C, it may be an upwardly convex curve in which the playback speed increases rapidly up to a certain X-axis value and then increases gradually.
  • various correspondence relationships as shown in FIGS. 4A to 4C may be used for associating the position information of the operating tool in the X-axis direction and the content playback speed in this modification.
  • the association between the position information of the operating tool in the X-axis direction and the content playback speed in the present modification is not limited to the relationship shown in FIGS. 4A to 4C, and any other correspondence relationship may be used.
  • the user may be able to input an arbitrary relationship regarding the relationship between the X-axis coordinate value and the content playback speed by moving the operating tool on the display screen 210.
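The correspondence curves of FIGS. 4A to 4C can be sketched as simple mapping functions from the X coordinate to the playback speed. The screen width, the speed range, and the exponents below are assumed values chosen only for illustration.

```python
SCREEN_WIDTH = 1000                # X runs 0..SCREEN_WIDTH, left to right (assumed)
MIN_SPEED, MAX_SPEED = 1.0, 8.0    # normal speed at the left edge (assumed range)

def speed_linear(x):
    """Curve A: speed grows at a constant rate with X."""
    t = x / SCREEN_WIDTH
    return MIN_SPEED + (MAX_SPEED - MIN_SPEED) * t

def speed_convex_down(x):
    """Curve B: gradual growth at first, then rapid (downwardly convex)."""
    t = x / SCREEN_WIDTH
    return MIN_SPEED + (MAX_SPEED - MIN_SPEED) * t ** 2

def speed_convex_up(x):
    """Curve C: rapid growth at first, then gradual (upwardly convex)."""
    t = x / SCREEN_WIDTH
    return MIN_SPEED + (MAX_SPEED - MIN_SPEED) * t ** 0.5

# At mid-screen the three curves give noticeably different speeds.
for f in (speed_linear, speed_convex_down, speed_convex_up):
    print(f.__name__, f(0), f(500), f(1000))
```

All three functions agree at the two screen edges and differ only in between, which is exactly the property the figures illustrate.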
  • the position information of the operating tool in the X-axis direction and the content playback speed are associated with each other.
  • The user can thus input the position information of the operating tool while controlling the playback speed of the content. For example, while referring to the thumbnails of the moving image displayed on the display screen 210, the user can increase the playback speed and move the finger 230 downward (that is, to a low preference level) in a less appealing scene, and slow the playback speed and move the finger 230 up and down in a favorite scene, allowing a fine degree of preference to be input.
  • tagging processing that is more convenient for the user is realized.
  • FIG. 5 is an explanatory diagram for explaining a tagging process in which fast-forwarding or rewinding of content is controlled, which is a modification of the tagging process in the present embodiment.
  • the position information of the operating tool in the X-axis direction of the display screen 210 is associated with the fast-forward and rewind control of the content.
  • the content may be fast-forwarded or rewound based on the position information of the operating body.
  • Specifically, the playback state control unit 142 uses the approximate midpoint of the display screen 210 in the X-axis direction as a reference point, performing control to fast-forward the content when the position information of the operating tool is acquired to the right of the reference point and to rewind the content when it is acquired to the left. The fast-forward or rewind speed may also be controlled according to the X-axis position of the operating tool; for example, in the example shown in FIG. 5, the fast-forward speed may increase as the operating tool moves toward the right edge of the display screen 210, and the rewind speed may increase toward the left edge.
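The midpoint-referenced fast-forward and rewind control described above can be sketched as a single signed rate function. The screen width and the maximum seek speed are assumptions for illustration.

```python
SCREEN_WIDTH = 1000      # X runs 0..SCREEN_WIDTH (assumed)
MAX_SEEK_SPEED = 16.0    # multiples of normal speed at either edge (assumed)

def seek_rate(x):
    """Signed seek rate for an X coordinate.

    Positive values mean fast-forward (right of the midpoint), negative
    values mean rewind (left of the midpoint), and the magnitude grows
    with distance from the midpoint reference point.
    """
    midpoint = SCREEN_WIDTH / 2
    offset = (x - midpoint) / midpoint   # -1.0 at left edge .. +1.0 at right edge
    return MAX_SEEK_SPEED * offset

print(seek_rate(0), seek_rate(500), seek_rate(1000))  # -16.0 0.0 16.0
```

The sign of the result selects the direction of seeking, and its magnitude the speed, matching the behaviour attributed to the playback state control unit 142.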
  • the position information in the Y-axis direction may be an index indicating the “preference level” of the user for the content.
  • The user rewinds or fast-forwards the content with the position of the finger 230 in the X-axis direction, and moves the finger 230 upward (in the direction in which the Y-axis coordinate value increases) in a favorite scene and downward (in the direction in which the Y-axis coordinate value decreases) in a scene that is less appealing.
  • By performing this input operation over the entire time range of the content, or over a portion corresponding to an arbitrary time range, the user's preference for each scene corresponding to the elapsed time during reproduction is acquired as position information by the position information acquisition unit 141, and the tagging unit 144 attaches that position information to the content as tag information.
  • In addition, in accordance with the fast-forward or rewind control of the content by the playback state control unit 142, the display control unit 143 displays the corresponding image in the moving image on the display screen 210. The user can therefore input his or her preference level while referring to the images (moving image thumbnails) displayed on the display screen 210.
  • Accordingly, since the position information of the operating tool in the X-axis direction is associated with fast-forward and rewind control of the content, the user can input position information while fast-forwarding or rewinding the content, and can also control the fast-forward or rewind speed. For example, while referring to the images displayed on the display screen 210, the user can fast-forward through a less appealing scene while moving the finger 230 downward (that is, to a low preference level), and return to the normal playback speed in a favorite scene while moving the finger 230 up and down to input a fine degree of preference.
  • the content can be rewound to a desired reproduction position, and the position information can be re-input.
  • tagging processing that is more convenient for the user is realized.
  • In the above modifications, the position information of the operating tool in the X-axis direction is associated with control of the content playback speed or with fast-forward and rewind of the content. Accordingly, the user can input tag information while changing the playback speed to a desired speed or moving to a desired playback position by fast-forwarding or rewinding, realizing a tagging process that is even more convenient for the user. Note that the tagging process according to the present embodiment is not limited to those described above, and other tagging processes in which the X-axis position information of the operating tool is associated with other reproduction controls may be performed.
  • the playback state control unit 142 can edit the content based on the tag information given to the content, and can control the playback of the edited content.
  • a specific example of content editing using tag information according to the present embodiment will be described in detail with reference to FIGS. 6, 7, 8, and 9.
  • the playback state control unit 142 can extract a part of the content based on the tag information.
  • FIG. 6 is an explanatory diagram for explaining a process of creating content with a predetermined playback time.
  • FIG. 7 is an explanatory diagram for explaining a smoothing process when a part of content is extracted.
  • In FIG. 6, the horizontal axis (x-axis) indicates the elapsed time during playback of the content, and the vertical axis (y-axis) indicates the Y-axis coordinate value on the display screen 210 of the position information input in the tagging process. The curve 310 shown in FIG. 6 can therefore be regarded as tag information associating the elapsed time during reproduction with the position information of the operating tool (position information in the Y-axis direction); in the following description, the curve 310 is also referred to as tag information 310. The axes in FIGS. 8 and 9, described later, have the same meaning as in FIG. 6, and the curves 410 and 420 shown in FIGS. 8 and 9 are likewise referred to as tag information 410 and 420. In the following description of FIGS. 6, 7, 8, and 9, the value on the vertical axis is taken, as an example of the position information of the operating tool, to be an index representing the user's preference level.
  • FIG. 6 shows a state in which content having a predetermined playback time is created by extracting, from the tag information 310, the ranges in which the value on the vertical axis is greater than or equal to a predetermined threshold. Since the value on the vertical axis represents the user's preference level in each scene of the content, this processing creates a digest version of the moving image data in which only the portions with a high preference level, that is, the portions in which the user is interested, are extracted.
  • Specifically, for the tag information, the reproduction state control unit 142 associates the position information of the operating tool in the Y-axis direction with a preference degree (score) assigned to each Y-axis coordinate value, and can extract the portions of the content where the preference degree is equal to or greater than a predetermined threshold.
  • In FIG. 6, as examples of thresholds for creating digest version moving image data, a threshold for a 5-minute digest, a threshold for a 10-minute digest, and a threshold for a 20-minute digest are schematically shown on the tag information 310. To create the 5-minute digest, the portions corresponding to the time ranges in which the value on the vertical axis is equal to or greater than the 5-minute digest threshold, that is, the time ranges from T11 to T12 and from T17 to the end of the moving image, are extracted from the content, and these pieces of moving image data are joined together. Similarly, for the 10-minute digest, the portions corresponding to the time ranges in which the value on the vertical axis is equal to or greater than the 10-minute digest threshold, that is, the elapsed time ranges from T2 to T3, T6 to T7, T10 to T13, and T16 to the end of the moving image, are extracted from the content and joined together. For the 20-minute digest, the portions corresponding to the time ranges in which the value on the vertical axis is equal to or greater than the 20-minute digest threshold, that is, from T1 to T4, T5 to T8, T9 to T14, and T15 to the end of the moving image, are extracted from the content and joined together to create the 20-minute digest version of the moving image data.
  • In this way, the playback state control unit 142 can adjust the threshold for the preference level and extract parts of the content in descending order of preference so that the total playback time equals a predetermined playback time.
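The threshold-based digest creation described above can be sketched as follows. This is an illustrative sketch only: the per-second sampling of preference scores, the function names, and the strategy of lowering the threshold over the observed score values until the extracted total reaches the requested digest length are assumptions, not the claimed implementation.

```python
def extract_ranges(scores, threshold):
    """Return [(start, end)] second ranges where score >= threshold."""
    ranges, start = [], None
    for t, s in enumerate(scores):
        if s >= threshold and start is None:
            start = t                       # a qualifying range opens
        elif s < threshold and start is not None:
            ranges.append((start, t))       # the range closes
            start = None
    if start is not None:
        ranges.append((start, len(scores)))
    return ranges

def digest_ranges(scores, target_seconds):
    """Pick the highest threshold whose extracted total reaches target_seconds."""
    for threshold in sorted(set(scores), reverse=True):
        ranges = extract_ranges(scores, threshold)
        total = sum(end - start for start, end in ranges)
        if total >= target_seconds:
            return ranges
    return [(0, len(scores))]               # fall back to the whole content

scores = [1, 1, 5, 5, 2, 4, 4, 4, 1, 5]     # one preference score per second
print(digest_ranges(scores, 4))             # → [(2, 4), (5, 8), (9, 10)]
```

The returned non-consecutive ranges correspond to the portions that would be cut out and joined into the digest version.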
  • In the processing described above, portions corresponding to a plurality of non-consecutive time ranges are extracted from the content and the pieces of moving image data are joined together, creating a digest version of the moving image data with a predetermined reproduction time. When the extracted portions are joined, the moving image may therefore become discontinuous at the joints. In the present embodiment, smoothing processing based on the content of the moving image can be performed to address this phenomenon.
  • The curve 320 shown in FIG. 7 is obtained by extracting a part of the tag information 310 shown in FIG. 6. Referring to FIG. 7, when creating a digest version of the moving image as described with reference to FIG. 6, the portion corresponding to the extraction range TA, the time range originally equal to or greater than the threshold, would be extracted. When performing smoothing processing, however, the playback state control unit 142 may extract a portion of an extraction range TB that is broader than the extraction range TA, and create the digest version of the moving image by joining it with the other extracted portions before and after it.
  • The extraction range TB may be determined based on the image data and audio data included in the content data. For example, when the pixel information in the image data of the moving image data changes greatly, the brightness, color, and so on in the screen change greatly, and it is highly likely that the shooting direction of the camera has changed or a scene change has occurred within the moving image. Similarly, for the audio data, when the audio input level (volume) changes significantly or the audio direction changes, a scene change in the moving image is likely. Accordingly, the playback state control unit 142 may set the extraction range TB using, as boundaries, points where the amount of change in the pixel information of the image data, or in the audio input level or audio direction of the audio data, included in the content data is relatively large.
  • Further, when the content data has been given other tags different from the tag information according to the present embodiment, the extraction range TB may be set based on such other tags. The other tags are, for example, metadata set by the content provider, including information about the event occurrence time, event type, and content of each scene included in the moving image that is the content. Based on such metadata, the reproduction state control unit 142 may set the extraction range TB of the content data using timings indicating scene changes as boundaries.
  • With these methods, it is highly likely that a scene change or the like occurs in the moving image immediately before and immediately after the extraction range TB. Therefore, by joining portions corresponding to these extraction ranges, discontinuity at the joints in the created digest version of the moving image is reduced, and a moving image that is more natural for the user is generated. Note that which information is used to set the extraction range TB may be configurable by the user as appropriate.
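The smoothing step can be sketched as widening the originally extracted range TA outward to the nearest scene-change points, yielding TB. The list of scene-change times below stands in for the image/audio change analysis or provider metadata described above; all names and values are illustrative assumptions.

```python
def widen_to_scene_changes(extract_range, scene_changes, duration):
    """Expand (start, end) outward to the surrounding scene-change times.

    extract_range: the (start, end) seconds of the original range TA.
    scene_changes: sorted times (seconds) where a scene change is likely.
    duration:      total length of the content in seconds.
    Returns the widened range TB.
    """
    start, end = extract_range
    # Nearest scene change at or before the start (content start if none).
    new_start = max([t for t in scene_changes if t <= start], default=0)
    # Nearest scene change at or after the end (content end if none).
    new_end = min([t for t in scene_changes if t >= end], default=duration)
    return (new_start, new_end)

scene_changes = [0, 30, 95, 160, 240]
print(widen_to_scene_changes((100, 150), scene_changes, 300))  # → (95, 160)
```

Because both edges of TB then coincide with likely scene boundaries, joining such ranges produces fewer visible jumps than joining the raw threshold ranges.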
  • FIG. 8 is an explanatory diagram for describing a plurality of different tag information.
  • FIG. 9 is an explanatory diagram for explaining content creation processing for a predetermined playback time based on tag information for a plurality of different moving images.
  • the tag information 310 is the same as the tag information 310 shown in FIG. 6, and is created based on the position information of the operating tool input by the user.
  • the tag information 410 is, for example, tag information created based on the position information of the operating tool input by another user for the same moving image.
  • a plurality of tag information created by a plurality of different users may be shared between users.
  • a user can upload the tag information created by himself / herself for a certain video to a server existing on the cloud and make it available to other users. Moreover, the user can browse the tag information created by other users for the moving image uploaded to the server.
  • the range of users who can share tag information may be set arbitrarily.
  • For example, the range of users who can share tag information may be the range of users belonging to the same SNS (Social Networking Service), or an arbitrary range of users set within the SNS (for example, the range of users belonging to a so-called "My Friend" list). By sharing tag information created by different users in this way, it becomes possible to easily compare one's own preference level for the same moving image with the preference levels of others.
  • Tag information by other users may be uploaded to a server on the cloud when the tagging process is completed, or may be uploaded in real time while the tagging process is in progress, with the server updated as it proceeds.
  • In the latter case, tag information can be viewed by multiple users while tagging is in progress, so that the tagging process can be performed while referring to the tag information of other users, that is, while referring to social tag information.
  • Tag information by other users uploaded to the server on the cloud may be stored for a predetermined period, and a plurality of different pieces of tag information for the same moving image may be accumulated in the server as needed. When browsing the tag information uploaded by other users, the user may then be able to sort and display it, for example by registration period (daily, weekly, and so on) or in order of preference level.
  • the content editing process by the playback state control unit 142 may be performed using a plurality of pieces of tag information that are different from each other.
  • For example, using a plurality of different pieces of tag information, the playback state control unit 142 can perform the process of extracting a part of the content described in [3-1. Creating content with a predetermined playback time] above.
  • By using the tag information created by other users as the plurality of different pieces of tag information, the playback state control unit 142 can create a digest version of the movie that reflects social preferences.
  • For the creation of the digest version of the movie, social tag information satisfying specific conditions may be used, such as tag information created during a desired period or tag information created by desired users, from among the tag information created by other users.
  • When integrating a plurality of pieces of tag information, the playback state control unit 142 may use, for example, a simple sum, an average value, or a median value of the preference degrees in the tag information.
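The integration of tag information from several users into one social curve can be sketched as follows. The text names simple sum, average, and median as options; the per-second sampled score lists and function names below are assumptions for illustration.

```python
from statistics import mean, median

def combine_tags(user_scores, method="mean"):
    """Integrate equal-length per-second score lists from several users.

    method selects how scores at each second are aggregated:
    "sum", "mean", or "median", as mentioned in the text.
    """
    agg = {"sum": sum, "mean": mean, "median": median}[method]
    # zip(*...) groups the scores of all users at each elapsed second.
    return [agg(samples) for samples in zip(*user_scores)]

user_a = [1, 4, 5, 2]
user_b = [3, 4, 1, 2]
user_c = [2, 4, 3, 2]
print(combine_tags([user_a, user_b, user_c]))            # per-second average
print(combine_tags([user_a, user_b, user_c], "median"))  # per-second median
```

The combined curve can then be fed into the same threshold-based digest extraction as a single user's tag information.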
  • FIG. 9 shows processing for creating content of a predetermined playback time based on tag information for a plurality of different moving images.
  • tag information 410 assigned to “Movie A” and tag information 420 assigned to “Movie B” are illustrated.
  • the tag information 410 and the tag information 420 are social tag information in which tag information created by a plurality of other users is integrated, for example.
  • In the example shown in FIG. 9, thresholds are set in the tag information 410 and the tag information 420 so that the total extraction range for "Movie A" and "Movie B" is 5 minutes, and the portions corresponding to the time ranges whose preference degree is equal to or higher than the thresholds are extracted from each movie, so that a 5-minute digest version of the moving images may be created.
  • In the example shown in FIG. 9, social tag information 410 and 420 is used as the tag information; however, the present embodiment is not limited to this example, and the same processing can of course be performed based on tag information created by the user himself or herself.
  • the playback state control unit 142 extracts a part of the content based on the tag information, and creates a digest version of the movie for a predetermined time.
  • Specifically, a threshold may be provided for the score associated with the position information of the operating tool in the Y-axis direction, and the portions corresponding to the time ranges in which the score is equal to or greater than the threshold may be extracted. Since the user's preference level is reflected in the Y-axis position information of the operating tool, a digest version of the moving image can thus be created in which parts of the content are extracted in descending order of preference level.
  • The digest version creation process may also be performed based on a plurality of pieces of tag information created by a plurality of other users, making it possible to edit and view content based on a social preference level that reflects the preferences of others.
  • the content editing process using the tag information is performed, so that a variety of content viewing methods are provided to the user, and a user-friendly way of enjoying the content is realized.
  • the content editing process based on a plurality of tag information according to the present embodiment has been described.
  • the content editing process according to the present embodiment is not limited to such an example.
  • content editing processing may be performed using tag information according to the present embodiment and another tag different from the tag information according to the present embodiment.
  • the other tag may be, for example, metadata including information about an event occurrence time, an event type, content, and the like of each scene included in the moving image that is the content set by the content provider.
  • In addition to the tag information according to the present embodiment, it is possible to set preference levels for the content using such other tags, and the content editing process may be performed using both the tag information according to the present embodiment and the preference levels set by this different method. For example, by using metadata as described above, a content editing process more in line with the user's preferences can be realized, such as cutting the CM (commercial) portions or extracting only the scenes in which a favorite performer appears.
  • FIG. 10 is a flowchart showing a processing procedure of the tagging method according to the present embodiment.
  • the processing procedure of the tagging method according to the present embodiment the case of performing the tagging process shown in FIG. 2 will be described as an example.
  • Note that the functions of the storage unit 130, the position information acquisition unit 141, the playback state control unit 142, and the tagging unit 144 are described in <1. Configuration of Information Processing Apparatus> above, and detailed description thereof is therefore omitted.
  • First, the length (playback time) of the moving image to be displayed at one time on the display screen 210 in the tagging process is set (step S501). This corresponds to the process, described in <1. Configuration of Information Processing Apparatus>, in which the reproduction state control unit 142 associates the span from one end to the other end of the display screen 210 in the X-axis direction with the portion corresponding to an arbitrary time range of the content. By associating a part of the content with the full width of the display screen 210 in the X-axis direction in this way, tagging with a high resolution becomes possible.
  • Next, the tagging start position is set (step S503). This corresponds to the process, described in <1. Configuration of Information Processing Apparatus>, in which the position information acquisition unit 141 acquires the position information of the operating tool for the portion corresponding to an arbitrary time range of the content. The user seeks the playback position of the content to a desired position by moving the operating tool on the indicator 220 displayed in the playback position seek area of the display screen 210, and can then input tag information only for an arbitrary part starting from that playback position.
  • a tagging process is performed (step S505). That is, when the operating tool is moved in the tag information input area on the display screen 210, the tag information is input and the tag information is given to the content. Specifically, the position information acquisition unit 141 acquires the position information of the operating tool associated with the elapsed time during content reproduction, and the tagging unit 144 assigns the position information to the content as tag information. The content is tagged.
  • When the tagging process in step S505 ends, the content data after the tagging process is stored in the storage unit 130 (step S507).
  • The end of the tagging process may be detected, for example, by the elapse of a predetermined time after the operating tool is no longer detected (in the example shown in FIG. 2, after the finger 230 moves away from the display screen 210), or by a dedicated operation for ending the tagging process, such as a button press.
  • Next, a content editing process based on the tag information is performed on the content to which tags have been attached (step S509). The content editing process in step S509 may be any of the various editing processes described in <3. Specific example of content editing process using tag information> above.
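The flow of steps S501 to S507 in FIG. 10 can be sketched as a single function. Every name here is illustrative; the real processing is performed by the position information acquisition unit 141, the playback state control unit 142, and the tagging unit 144 described above, and the dictionary-based content record and touch-sample list are assumptions.

```python
def tagging_procedure(content, touch_samples, segment_length, start_time):
    """Sketch of FIG. 10: set segment (S501), set start (S503), tag (S505), store (S507)."""
    # S501: set the playback length shown across the display screen at one time.
    # S503: the tagging start position has been set via the playback position seek area.
    visible = (start_time, start_time + segment_length)
    # S505: acquire (elapsed_time, y) samples that fall inside the visible range
    # and attach them to the content as tag information.
    tags = [(t, y) for (t, y) in touch_samples if visible[0] <= t < visible[1]]
    content.setdefault("tag_info", {}).update(dict(tags))
    # S507: store the tagged content (here, simply return the updated record).
    return content

content = {"title": "movie"}
samples = [(10, 0.2), (12, 0.9), (300, 0.5)]   # the t=300 sample lies outside the segment
tagged = tagging_procedure(content, samples, segment_length=60, start_time=0)
print(tagged["tag_info"])   # {10: 0.2, 12: 0.9}
```

Step S509, the editing process, would then consume `tag_info`, for example via the threshold-based digest extraction sketched earlier.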
  • the processing procedure of the tagging method according to the present embodiment has been described above with reference to FIG.
  • the content after the tagging process is stored in the storage unit 130, but the present embodiment is not limited to such an example.
  • the content after the tagging process may be stored in a server or the like on the cloud and shared among specific users.
  • FIG. 11 is a block diagram for describing a hardware configuration of the information processing apparatus 10 according to the embodiment of the present disclosure.
  • the information processing apparatus 10 mainly includes a CPU 901, a ROM 903, and a RAM 905.
  • the information processing apparatus 10 further includes a host bus 907, a bridge 909, an external bus 911, an interface 913, an input device 915, an output device 917, a storage device 919, a communication device 921, and a drive 923. And a connection port 925.
  • the CPU 901 functions as an arithmetic processing unit and a control unit, and controls all or a part of the operation in the information processing apparatus 10 according to various programs recorded in the ROM 903, the RAM 905, the storage device 919, or the removable recording medium 929.
  • the ROM 903 stores programs used by the CPU 901, calculation parameters, and the like.
  • the RAM 905 temporarily stores programs used by the CPU 901, parameters that change as appropriate during execution of the programs, and the like. These are connected to each other by a host bus 907 constituted by an internal bus such as a CPU bus.
  • the CPU 901, the ROM 903, and the RAM 905 correspond to, for example, the control unit 140 illustrated in FIG.
  • the host bus 907 is connected to an external bus 911 such as a PCI (Peripheral Component Interconnect / Interface) bus via a bridge 909.
  • The input device 915 is operation means operated by the user, such as a mouse, a keyboard, a touch panel, buttons, switches, and levers. The input device 915 may also be, for example, remote control means (a so-called remote controller) using infrared rays or other radio waves, or an external connection device 931 such as a mobile phone or a PDA compatible with the operation of the information processing apparatus 10. Furthermore, the input device 915 includes, for example, an input control circuit that generates an input signal based on information input by the user using the above-described operation means and outputs the input signal to the CPU 901. By operating the input device 915, the user of the information processing apparatus 10 can input various data to the information processing apparatus 10 and instruct it to perform processing operations. In the present embodiment, the input device 915 corresponds to, for example, the input unit 110 illustrated in FIG.
  • the output device 917 is a device that can notify the user of the acquired information visually or audibly. Examples of such devices include CRT display devices, liquid crystal display devices, plasma display devices, EL display devices, display devices such as lamps, audio output devices such as speakers and headphones, printer devices, and the like.
  • the output device 917 outputs results obtained by various processes performed by the information processing apparatus 10. Specifically, the display device visually displays results obtained by various processes performed by the information processing device 10 in various formats such as text, images, tables, and graphs. In the present embodiment, the display device corresponds to, for example, the display unit 120 illustrated in FIG.
  • the audio output device converts an audio signal composed of reproduced audio data, acoustic data, and the like into an analog signal and outputs it aurally.
  • the storage device 919 is a data storage device configured as an example of a storage unit of the information processing device 10.
  • the storage device 919 corresponds to, for example, the storage unit 130 illustrated in FIG.
  • the storage device 919 includes, for example, a magnetic storage device such as an HDD (Hard Disk Drive), a semiconductor storage device, an optical storage device, or a magneto-optical storage device.
  • the storage device 919 stores various information processed in the tagging process according to the present embodiment, such as a program executed by the CPU 901 and various data.
  • The storage device 919 stores various contents to be played back by the information processing apparatus 10, tag information obtained in the course of the tagging process according to the present embodiment, and data such as content to which the tag information has been attached (that is, content after tagging).
  • the information processing apparatus 10 may further include the following components.
  • the communication device 921 is a communication interface configured by a communication device for connecting to a communication network (network) 927, for example.
  • the communication device 921 is, for example, a communication card for wired or wireless LAN (Local Area Network), Bluetooth (registered trademark), or WUSB (Wireless USB).
  • the communication device 921 may be a router for optical communication, a router for ADSL (Asymmetric Digital Subscriber Line), a modem for various communication, or the like.
  • the communication device 921 can transmit and receive signals and the like according to a predetermined protocol such as TCP / IP, for example, with the Internet or other communication devices.
  • the network 927 connected to the communication device 921 is configured by a wired or wireless network, and may be, for example, the Internet, a home LAN, infrared communication, radio wave communication, satellite communication, or the like.
  • Various types of content played back by the information processing apparatus 10, tag information obtained in the tagging process according to the present embodiment, data such as content after tagging, and the like may be received by the communication device 921 via the network 927, or may be transmitted from the information processing apparatus 10 to another external device (for example, a server on the cloud).
  • the drive 923 is a recording medium reader / writer, and is built in or externally attached to the information processing apparatus 10.
  • the drive 923 reads information recorded on a removable recording medium 929 such as a mounted magnetic disk, optical disk, magneto-optical disk, or semiconductor memory, and outputs the information to the RAM 905.
  • the drive 923 can also write information to a removable recording medium 929 such as a mounted magnetic disk, optical disk, magneto-optical disk, or semiconductor memory.
  • the removable recording medium 929 is, for example, a DVD medium, an HD-DVD medium, a Blu-ray (registered trademark) medium, or the like.
  • The removable recording medium 929 may be a CompactFlash (registered trademark) (CF), a flash memory, an SD memory card (Secure Digital memory card), or the like. Further, the removable recording medium 929 may be, for example, an IC card (Integrated Circuit card) on which a non-contact IC chip is mounted, an electronic device, or the like. In the present embodiment, various contents reproduced by the information processing apparatus 10, tag information obtained in the course of the tagging process according to the present embodiment, and data such as content after tagging may be read by the drive 923 from the removable recording medium 929 or written to the removable recording medium 929.
  • the connection port 925 is a port for directly connecting a device to the information processing apparatus 10.
  • Examples of the connection port 925 include a USB (Universal Serial Bus) port, an IEEE 1394 port, and a SCSI (Small Computer System Interface) port.
  • As other examples of the connection port 925, there are an RS-232C port, an optical audio terminal, an HDMI (registered trademark) (High-Definition Multimedia Interface) port, and the like.
  • Various contents reproduced by the information processing apparatus 10, tag information obtained in the course of the tagging process according to the present embodiment, and data such as content after tagging may be acquired from the external connection device 931 connected to the connection port 925, or may be output to the external connection device 931.
  • each component described above may be configured using a general-purpose member, or may be configured by hardware specialized for the function of each component. Therefore, it is possible to change the hardware configuration to be used as appropriate according to the technical level at the time of carrying out this embodiment.
  • a computer program for realizing each function of the information processing apparatus 10 according to the present embodiment as described above can be produced and installed in a personal computer or the like.
  • a computer-readable recording medium storing such a computer program can be provided.
  • the recording medium is, for example, a magnetic disk, an optical disk, a magneto-optical disk, a flash memory, or the like.
  • the above computer program may be distributed via a network, for example, without using a recording medium.
  • As described above, in the information processing apparatus 10 according to the present embodiment, the position information acquisition unit 141 acquires the position information of the operating tool associated with the elapsed time during content playback, and the tagging unit 144 performs the content tagging process by assigning the position information to the content as tag information.
  • Therefore, tag information can be added to the content through the user inputting the position information of the operating tool, for example by moving a finger on the display screen of a touch panel, so that tagging with a higher degree of freedom is possible.
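As an illustrative sketch only (not the apparatus's actual implementation; all names below are hypothetical), the tag information described above can be modeled as samples of the operating tool's first-direction position recorded against the elapsed playback time:

```python
# Hypothetical model: tag information as (elapsed_time, score) samples,
# where the score is the operating tool's first-direction coordinate
# (e.g. the Y position of a finger on a touch panel).

class TagTrack:
    def __init__(self):
        self.samples = []  # list of (elapsed_seconds, score) pairs

    def record(self, elapsed_seconds, position_y):
        """Record the operating tool's first-direction position at a playback time."""
        self.samples.append((elapsed_seconds, position_y))

    def score_at(self, t):
        """Return the most recent score recorded at or before time t (or None)."""
        best = None
        for elapsed, score in self.samples:
            if elapsed <= t:
                best = score
        return best

track = TagTrack()
track.record(1.0, 0.2)   # low interest early in the clip
track.record(5.0, 0.9)   # high interest at a favorite scene
```

Storing the raw (time, position) pairs, rather than a precomputed summary, keeps the later editing steps (thresholding, digest creation) free to reinterpret the same tag information.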
  • Further, of the acquired position information of the operating body, the position information in the first direction is used as tag information by the tagging unit 144, and the playback state of the content is controlled by the playback state control unit 142 in accordance with the position information of the operating body in a second direction different from the first direction.
  • Therefore, the user can input tag information while controlling the playback state of the content, for example while seeking the playback position of the content.
  • In addition, tag information may be input for only a part of the content, and existing tag information may be overwritten.
  • Further, the resolution of the position information of the operating body in the second direction, which is assigned to seeking the playback position of the content, may be changed. Therefore, the user can seek to an arbitrary playback position and then input tag information for only an arbitrary part, or change the resolution and input tag information over multiple passes.
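To make the seek-resolution idea concrete, here is a minimal hypothetical sketch (function name and parameters are assumptions, not from the disclosure): the span of the screen in the second direction is mapped onto a time range of the content, and raising the "resolution" simply narrows that time range for the same gesture:

```python
# Hypothetical seek mapping: the horizontal span of the screen (one end to
# the other) corresponds to a time range in the content. Narrowing the
# mapped time range increases the seek resolution of the same gesture.

def x_to_playback_time(x, screen_width, t_start, t_end):
    """Map an x coordinate in [0, screen_width] to a time in [t_start, t_end]."""
    frac = min(max(x / screen_width, 0.0), 1.0)  # clamp to the screen span
    return t_start + frac * (t_end - t_start)

# Whole 60-second clip mapped across an 800-px-wide screen:
coarse = x_to_playback_time(400, 800, 0.0, 60.0)   # mid-screen -> 30.0 s
# Same mid-screen gesture after narrowing to the 20-30 s range:
fine = x_to_playback_time(400, 800, 20.0, 30.0)    # mid-screen -> 25.0 s
```

The same finger position thus selects a playback position ten times more precisely once the mapped range is narrowed, which is the effect the resolution change described above achieves.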
  • In addition, the position information of the operating tool in the second direction of the display screen 210 may correspond to other types of playback control.
  • For example, the playback state control unit 142 may control the playback speed of the content and the fast-forward or rewind operation of the content based on the position information of the operating tool in the second direction. Accordingly, the user can input tag information while changing the playback speed of the content to a desired speed, or while moving to a desired playback position by fast-forwarding or rewinding, so that a tagging process that is more convenient for the user is realized.
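One possible reading of this speed control, sketched as hypothetical code (the mapping and constants are assumptions for illustration): positions on one side of a reference point fast-forward, positions on the other side rewind, and the multiplier grows with distance from the point:

```python
# Hypothetical speed mapping: the second-direction position relative to a
# reference point yields a signed speed multiplier. Positive values
# fast-forward, negative values rewind, zero pauses at the reference point.

def playback_speed(x, reference_x, max_speed=4.0, half_range=400.0):
    """Return a signed playback-speed multiplier from an x position."""
    offset = (x - reference_x) / half_range
    offset = min(max(offset, -1.0), 1.0)  # clamp to the usable range
    return offset * max_speed

ff = playback_speed(600, 400)   # right of the reference point -> 2.0 (fast-forward)
rw = playback_speed(200, 400)   # left of the reference point -> -2.0 (rewind)
```

A continuous mapping like this lets the user glide between rewind, pause, and fast-forward with a single gesture while the first-direction coordinate keeps recording tag information.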
  • Further, the playback state control unit 142 can edit the content based on the tag information given to the content and control the playback of the edited content. For example, the playback state control unit 142 extracts a part of the content based on the tag information and creates a digest version of the moving image of a predetermined duration. In the process of extracting a part of the content, a threshold may be set for the score associated with the position information of the operating tool in the first direction, and a portion corresponding to the time range in which the score is equal to or greater than the threshold may be extracted.
  • Since the user's preference level is reflected in the position information of the operating tool in the first direction, it is possible to create a digest version of the moving image in which parts of the content are extracted in descending order of preference level.
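The thresholded extraction can be sketched as follows (a minimal hypothetical illustration, assuming the tag information has been resampled into one score per unit of time):

```python
# Hypothetical digest extraction: keep the time ranges whose score (derived
# from the first-direction position information) meets a threshold.

def extract_segments(scores, threshold):
    """Return (start, end) index ranges where score >= threshold; end is exclusive."""
    segments, start = [], None
    for i, s in enumerate(scores):
        if s >= threshold and start is None:
            start = i                      # a qualifying segment begins
        elif s < threshold and start is not None:
            segments.append((start, i))    # the segment ends here
            start = None
    if start is not None:
        segments.append((start, len(scores)))
    return segments

scores = [0.1, 0.8, 0.9, 0.2, 0.7, 0.7, 0.1]   # per-second preference scores
digest = extract_segments(scores, 0.7)          # -> [(1, 3), (4, 6)]
```

Raising or lowering the threshold directly trades digest length against preference level, which is why the threshold can also be chosen from a target playback time, as described for the playback state control unit.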
  • The digest creation process may also be performed based on a plurality of pieces of tag information created by a plurality of other users. This makes it possible to edit and view content based on a social preference level that reflects the preferences of other users.
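One simple way to form such a social preference level, shown as a hypothetical sketch (averaging is an assumed choice; the disclosure does not fix the combining rule): average the per-time scores of several users' tag information, then cut the digest from the combined track exactly as for a single user:

```python
# Hypothetical "social" score: element-wise mean of several users' score
# tracks for the same content (tracks assumed resampled to equal length).

def combine_scores(tracks):
    """Return the element-wise mean of equal-length per-second score lists."""
    if not tracks:
        return []
    n = len(tracks)
    return [sum(vals) / n for vals in zip(*tracks)]

user_a = [0.2, 0.8, 1.0]
user_b = [0.4, 0.6, 1.0]
social = combine_scores([user_a, user_b])   # approximately [0.3, 0.7, 1.0]
```

Weighted means, medians, or vote counts would work just as well here; the point is only that many users' tag information reduces to one score track that the existing digest extraction can consume.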
  • the content editing process using the tag information is performed, so that a variety of content viewing methods are provided to the user, and a user-friendly way of enjoying the content is realized.
  • the position information is two-dimensional position information on the display screen 210 of the touch panel, but the present embodiment is not limited to such an example.
  • the tag information may be any position information of the operating tool associated with the elapsed time during content reproduction, and the type thereof is not limited.
  • the input unit 110 may include a sensor device that detects the position of the operating tool in a space such as a stereo camera or an infrared camera, and position information in a three-dimensional space may be input as the position information of the operating tool.
  • When the position information of the operating body is position information in a three-dimensional space, the operating body may be, for example, the user's hand; the playback state of the content may be controlled according to the horizontal position of the hand with respect to the sensor device of the input unit 110, and tag information may be input according to the vertical position of the hand.
  • In this case, the user can perform the tagging process by moving his or her hand up, down, left, and right with respect to the sensor device of the input unit 110 while referring to the content displayed on the display screen of the display unit 120.
  • Further, the position information acquisition unit 141 may recognize the locus of the position information of the operating tool as characters, numbers, symbols, or the like, and the recognized result may be acquired as tag information.
  • For example, the position information acquisition unit 141 may detect the locus of the position information of the operating tool as a symbol having a binary meaning, such as "○" or "×". For example, while referring to the content displayed on the display screen of the display unit 120, the user moves the operating body so that the locus of its position information draws a "○" shape over a favorite scene.
  • In this way, tag information on which the presence or absence of a binary preference is superimposed can be input, and the tagging unit 144 may add such tag information to the content.
  • the embodiment in which the content data to which the tag information is attached is the moving image data has been described, but the present embodiment is not limited to such an example.
  • the content data to be subjected to the tagging process may be music data or slide show data in which still images are continuously displayed for a predetermined time.
  • For example, when the content is music data, the information processing apparatus 10 includes an audio output device such as a speaker or headphones, and the user may input tag information while listening to the audio of the music data output from the audio output device.
  • In this way, tagging processing similar to that for moving image data may be performed on music data or slide show data.
  • An information processing apparatus comprising: a position information acquisition unit that acquires position information of an operating tool associated with an elapsed time during content reproduction; and a tagging unit that tags the content by adding the position information to the content as tag information.
  • The information processing apparatus according to (2), further comprising a display control unit configured to display the content on a display screen, wherein the position information acquisition unit acquires position information of the operating body on the display screen, the first direction is a vertical direction with respect to the display screen, and the second direction is a horizontal direction with respect to the display screen.
  • The information processing apparatus according to (2) or (3), wherein the reproduction state control unit associates the position information of the operating body in the second direction with the elapsed time during the content reproduction, and reproduces the content at a reproduction position corresponding to the position information of the operating body in the second direction.
  • The information processing apparatus, wherein the playback state control unit associates the range from one end to the other end in the second direction on the display screen with an arbitrary time range in the content, the position information acquisition unit acquires the position information of the operating tool on the display screen, and the content is played back at a playback position corresponding to the position information of the operating tool in the second direction on the display screen.
  • (6) The information processing apparatus according to (2) or (3), wherein the reproduction state control unit changes a reproduction speed of the content based on the position information of the operating body in the second direction.
  • The information processing apparatus, wherein the playback state control unit associates the range from one end to the other end in the second direction on the display screen with the playback speed of the content, and plays back the content at a playback speed corresponding to the position information of the operating tool in the second direction on the display screen.
  • The information processing apparatus, wherein the reproduction state control unit performs control for fast-forwarding the content when the position information of the operating tool is acquired on one side of a reference point, and performs control for rewinding the content when the position information of the operating tool is acquired on the other side of the reference point.
  • The information processing apparatus according to any one of (1) to (9), wherein the position information acquisition unit acquires position information of the operating tool for a portion corresponding to an arbitrary time range in the content, and the tagging unit assigns the tag information to the portion of the content corresponding to the time range for which the position information was acquired.
  • (12) The information processing apparatus according to any one of (1) to (11), further comprising a display control unit that displays the content on a display screen, wherein the display control unit displays a locus of the position information of the operating body on the display screen.
  • The information processing apparatus according to any one of the above items, further comprising a playback state control unit for controlling the playback state of the content, wherein the playback state control unit extracts a part of the content based on the tag information.
  • The information processing apparatus according to (13), wherein the tagging unit uses the position information of the operating body in a first direction as the tag information, and the reproduction state control unit associates the position information of the operating body in the first direction with a score for the tag information and extracts a portion of the content in which the score is equal to or greater than a predetermined threshold.
  • The information processing apparatus, wherein the reproduction state control unit determines the threshold based on a reproduction time of the extracted content.
  • the position information acquisition unit acquires three-dimensional position information of the operating body in space.
  • A tagging method including: acquiring position information of an operating tool associated with an elapsed time during content reproduction; and tagging the content by adding the position information to the content as tag information. (20) A program for causing a computer to realize: a function of acquiring position information of an operating tool associated with an elapsed time during content reproduction; and a function of tagging the content by adding the position information to the content as tag information.

Abstract

[Problem] To allow tagging having a higher degree of flexibility to be performed on contents. [Solution] Provided is an information processing apparatus comprising: a position information acquiring unit that acquires the position information of an operating element associated with the elapsed time of a content being reproduced; and a tagging unit that adds the position information, as tag information, to the content, thereby performing tagging for the content.

Description

Information processing apparatus, tagging method, and program
 The present disclosure relates to an information processing apparatus, a tagging method, and a program.
 In recent years, it has become popular to import content data such as music and videos onto a terminal such as a PC (Personal Computer) or a smartphone so that the viewer (user) can view the content at a desired timing. Meanwhile, the ways in which users view content are diversifying; for example, there is a demand for extracting and viewing only a desired part of content rather than viewing all of it. To meet such demands, techniques for tagging each scene in content have been developed.
 For example, Patent Literature 1 discloses a technique for acquiring metadata including information on the event occurrence time, event type, details, and the like of each scene included in a moving image serving as content, and attaching the metadata to the corresponding scene in the moving image as an event time tag. Patent Literature 2 discloses a technique for collecting a plurality of comments posted on a moving image via a network such as the Internet, extracting a scene of interest from the moving image based on the number of posted comments, and tagging the scene of interest with comment keywords included in the comments.
JP 2008-5010 A
JP 2012-155695 A
 However, in the technique described in Patent Literature 1, the metadata used as tags is generated by the distributor or the like of the content to be tagged, so the tagging is uniquely determined according to the content. In the technique described in Patent Literature 2, the comment keywords used as tags are generated based on comments posted by a plurality of users on a published video, so the tagging includes social elements reflecting the preferences of those other users. It is therefore difficult with these conventional techniques to perform tagging based on a user's own subjective preferences. Moreover, these techniques cannot be applied when the content is a private video shot by the user, in particular a private video the user does not want to disclose to others.
 Therefore, the present disclosure proposes a new and improved information processing apparatus, tagging method, and program capable of tagging content with a higher degree of freedom.
 According to the present disclosure, there is provided an information processing apparatus including: a position information acquisition unit that acquires position information of an operating tool associated with an elapsed time during content reproduction; and a tagging unit that tags the content by adding the position information to the content as tag information.
 According to the present disclosure, there is also provided a tagging method including: acquiring position information of an operating tool associated with an elapsed time during content reproduction; and tagging the content by adding the position information to the content as tag information.
 According to the present disclosure, there is also provided a program for causing a computer to realize: a function of acquiring position information of an operating tool associated with an elapsed time during content reproduction; and a function of tagging the content by adding the position information to the content as tag information.
 According to the present disclosure, the position information acquisition unit acquires position information of the operating tool associated with the elapsed time during content reproduction, and the tagging unit tags the content by adding the position information to the content as tag information. In this way, the user can tag content simply by moving the operating tool, so tagging that reflects the user's preferences is realized with a simpler operation.
 As described above, according to the present disclosure, it is possible to tag content with a higher degree of freedom.
A functional block diagram illustrating a schematic configuration of an information processing apparatus according to an embodiment of the present disclosure.
An explanatory diagram for describing an example of the tagging process according to the embodiment.
An explanatory diagram for describing a modification of the tagging process according to the embodiment, in which the playback speed of content is controlled.
An explanatory diagram for describing the association between the position information of the operating tool in the X-axis direction and the content playback speed in the modification of the tagging process shown in FIG. 3.
An explanatory diagram for describing the association between the position information of the operating tool in the X-axis direction and the content playback speed in the modification of the tagging process shown in FIG. 3.
An explanatory diagram for describing the association between the position information of the operating tool in the X-axis direction and the content playback speed in the modification of the tagging process shown in FIG. 3.
An explanatory diagram for describing a modification of the tagging process according to the embodiment, in which fast-forwarding or rewinding of content is controlled.
An explanatory diagram for describing a process of creating content of a predetermined playback time.
An explanatory diagram for describing smoothing processing performed when extracting a part of content.
An explanatory diagram for describing a plurality of pieces of mutually different tag information.
An explanatory diagram for describing a process of creating content of a predetermined playback time based on tag information for a plurality of mutually different moving images.
A flowchart showing the processing procedure of the tagging method according to the embodiment.
A block diagram for describing a hardware configuration of the information processing apparatus according to the embodiment.
 Hereinafter, preferred embodiments of the present disclosure will be described in detail with reference to the accompanying drawings. Note that, in this specification and the drawings, components having substantially the same functional configuration are denoted by the same reference numerals, and redundant description thereof is omitted.
 The description will be made in the following order.
 1. Configuration of information processing apparatus
 2. Modifications of the tagging process
  2-1. Control of playback speed
  2-2. Control of fast-forward and rewind
 3. Specific examples of content editing using tag information
  3-1. Creation of content of a predetermined playback time
  3-2. Sharing of tag information
 4. Processing procedure of the tagging method
 5. Hardware configuration
 6. Summary
 <1. Configuration of information processing apparatus>
 In the present embodiment, the position of the operating tool is detected while the playback state of the content is controlled, whereby position information of the operating tool associated with the elapsed time during content playback is acquired. Then, the content is tagged by adding this position information to the content as tag information. In the following description, this series of processes, in which tag information is acquired and attached to the content, is referred to as the tagging process.
 First, a configuration example of an information processing apparatus for executing the tagging process according to the present embodiment will be described with reference to FIG. 1. FIG. 1 is a functional block diagram illustrating a schematic configuration of an information processing apparatus according to an embodiment of the present disclosure.
 Referring to FIG. 1, the information processing apparatus 10 according to an embodiment of the present disclosure includes an input unit 110, a display unit 120, a storage unit 130, and a control unit 140.
 The input unit 110 is an input interface through which the user inputs information, commands, and the like relating to various processing operations to the information processing apparatus 10. In the present embodiment, the input unit 110 has a function of detecting the position of the operating tool and inputting the position information to the information processing apparatus 10. For example, the input unit 110 includes a sensor device for detecting the position of the operating tool. The user can input the position information of the operating tool to the information processing apparatus 10 by moving the operating tool within the detection range of the sensor device of the input unit 110. For example, the sensor device of the input unit 110 may be a device, such as a touch pad, that detects the position of the operating tool on a plane, in which case position information on a two-dimensional plane is input as the position information of the operating tool. Alternatively, the sensor device of the input unit 110 may be a device, such as a stereo camera or an infrared camera, that detects the position of the operating tool in space, in which case position information in a three-dimensional space is input as the position information of the operating tool.
 The display unit 120 is an output interface that visually displays, on a display screen, various types of information processed in the information processing apparatus 10 and the results of that processing. In the present embodiment, the display unit 120 displays the content of various kinds of content (for example, moving images and still images) on the display screen under the control of the control unit 140. The display unit 120 may also display, on the display screen, the locus of the position information of the operating body input from the input unit 110.
 The storage unit 130 is an example of a storage medium for storing various types of information processed by the information processing apparatus 10 and the results of that processing. In the present embodiment, the storage unit 130 stores content data processed by the information processing apparatus 10. The storage unit 130 also stores content data to which tag information has been added, generated as a result of the tagging process performed by the control unit 140.
 Here, before describing the function and configuration of the control unit 140, an overview of the tagging process according to the present embodiment will be described with reference to FIG. 2. FIG. 2 is an explanatory diagram for describing an example of the tagging process according to the present embodiment.
 In the example shown in FIG. 2, the input unit 110 includes a sensor device, such as a touch pad, that detects the position of the operating body on a plane, and is integrated with the display screen 210 of the display unit 120. That is, the input unit 110 and the display unit 120 constitute a so-called touch panel. FIG. 2 also illustrates a case where the operating body is a user's finger, as an example of the operating body.
 Referring to FIG. 2, one scene of a moving image (image data included in moving image data) is displayed on the display screen 210 as an example of content. At the same time, an indicator 220 showing the elapsed time (playback position) during playback of the moving image is displayed on the display screen 210. In the following description, with reference to the image displayed on the display screen 210, the horizontal direction is referred to as the X-axis direction and the vertical direction as the Y-axis direction.
 For example, when the finger 230 is brought into contact with an arbitrary point on the display screen 210, a point 240 representing the contact point is displayed on the display screen 210. In this way, the two-dimensional position information of the point 240 on the display screen 210 is input to the information processing apparatus 10. Further, by moving the finger 230 while keeping it in contact with the display screen 210, a locus 250 of the position information may be displayed on the display screen 210, as shown in FIG. 2. Thus, in the example shown in FIG. 2, the position information of the operating body is acquired as coordinate values (X, Y) on the display screen 210.
 Here, in the present embodiment, of the position information of the operating body, the position information in a first direction may be used as tag information, and the position information in a second direction different from the first direction may be used to control the playback state of the content. Specifically, in the example shown in FIG. 2, the first direction may be the Y-axis direction, that is, the vertical direction of the display screen, and the second direction may be the X-axis direction, that is, the horizontal direction of the display screen. For example, the playback position or the playback speed of the content may be controlled according to the position information of the operating body in the X-axis direction.
 Thus, in the present embodiment, when two-dimensional position information is acquired as the position information of the operating body, the position information in one direction, for example the X-axis direction, controls the playback state, while the position information in the other direction, for example the Y-axis direction, is acquired at the same time. The position information in the Y-axis direction is therefore acquired as position information of the operating body associated with the elapsed time during content playback, and is used as tag information.
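The division of roles between the two axes can be sketched as follows. This is a minimal illustration only, not the apparatus's actual implementation; the function name and the normalization of the tag value to [0, 1] are assumptions made for this example.

```python
# Minimal sketch: split a single 2-D touch sample into a seek target
# (X axis) and a tag value (Y axis). All names here are illustrative;
# the embodiment does not prescribe a concrete implementation.

def split_touch_sample(x, y, screen_width, screen_height, duration_s):
    """Map a touch point (x, y) on the screen to (playback_time, tag_value).

    - x in [0, screen_width] is mapped linearly to elapsed time (seek).
    - y in [0, screen_height] is normalized to a tag value in [0, 1],
      with the top of the screen (y = 0) meaning the highest value,
      matching the "up = larger Y-axis coordinate value" convention.
    """
    playback_time = (x / screen_width) * duration_s
    tag_value = 1.0 - (y / screen_height)  # invert so that up = larger
    return playback_time, tag_value

# Example: a 640x480 screen showing a 100-second clip.
t, v = split_touch_sample(x=320, y=120, screen_width=640,
                          screen_height=480, duration_s=100)
# t == 50.0 (middle of the clip), v == 0.75 (a fairly liked scene)
```

A usage note: with this mapping, a single stream of touch samples simultaneously drives the seek position and produces one tag value per sampled moment.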
 Returning to FIG. 1, the function and configuration of the control unit 140 for realizing the tagging process according to the present embodiment described above will now be explained. In the following, as an example of the present embodiment, the case shown in FIG. 2 is assumed: the input unit 110 and the display unit 120 constitute a touch panel, the operating body is a finger, and the content data is moving image data. However, the present embodiment is not limited to this example. In the present embodiment, the input unit 110 may have any configuration as long as it can detect the position of the operating body; for example, the operating body may be a mouse pointer operated with a mouse, and the input unit 110 may detect the position of the mouse pointer on the display screen of the display unit 120. Further, as described above, the input unit 110 may include a sensor device that detects the position of the operating body in space, and the position of the user's hand may be detected as the operating body. Furthermore, in the present embodiment, the content data need not be moving image data; any content data may be applied, for example music data or slide show data in which still images are displayed in succession for predetermined times.
 The control unit 140 controls the information processing apparatus 10 in an integrated manner, and performs various kinds of information processing in the tagging process according to the present embodiment. The function and configuration of the control unit 140 will be described below in more detail.
 The control unit 140 includes a position information acquisition unit 141, a playback state control unit 142, a display control unit 143, and a tagging unit 144.
 The position information acquisition unit 141 acquires the position information of the operating body detected by the input unit 110 and associated with the elapsed time during content playback. In the present embodiment, the position information acquired by the position information acquisition unit 141 is the position information of the operating body on the display screen of the display unit 120, and may be acquired, for example, as two-dimensional coordinates on the display screen. In the example shown in FIG. 2, the position information acquisition unit 141 may acquire, as the position information, the coordinate values (X, Y) corresponding to the point 240 at which the operating body contacts the display screen 210. The position information acquisition unit 141 transmits the acquired position information of the operating body to the playback state control unit 142, the display control unit 143, and the tagging unit 144.
 The playback state control unit 142 controls the playback state of content in the information processing apparatus 10. Here, controlling the playback state of content means controlling various operations related to content playback, including, for example, (normal) playback, stop, pause, fast forward, rewind, high-speed playback, slow playback, and repeat playback. Playback state control also includes control for playing content from an arbitrary playback position, control for extracting and playing a part of the content, and the like.
 As described above, in the present embodiment, of the position information of the operating body acquired by the position information acquisition unit 141, the position information in the first direction is used as tag information, and the position information in the second direction, different from the first direction, is used to control the playback state of the content. In the example shown in FIG. 2, the playback state control unit 142 may control the playback state of the content according to the position information of the operating body in the X-axis direction of the display screen 210.
 Specifically, the playback state control unit 142 associates the position information of the operating body in the X-axis direction with the elapsed time during content playback, and can play the content at the playback position corresponding to the position information of the operating body in the X-axis direction. That is, the X-axis coordinate value on the display screen 210 corresponds to the elapsed time during content playback, and as the position information of the operating body in the X-axis direction changes, the playback position of the content is sought. When associating the X-axis coordinate values with the elapsed time during content playback, the two can be associated so that time within the content advances from the left to the right of the display screen 210, that is, as the X-axis coordinate value increases. With this association, seeking the playback position better matches the intuition of the user moving the operating body. Further, when the X-axis coordinate value corresponds to the elapsed time during content playback, the indicator 220 showing the playback position of the content may also change according to the position information of the operating body in the X-axis direction.
 In the above description, the case where the playback state control unit 142 controls the playback position of the content according to the position information of the operating body in the X-axis direction was explained, but the present embodiment is not limited to this example. The playback state control unit 142 may perform other playback control on the content according to the position information of the operating body in the X-axis direction. Such other playback control by the playback state control unit 142 will be described later in detail in <2. Modifications of the tagging process> below.
 Further, for content on which the tagging process has already been performed, the playback state control unit 142 may edit the content based on the tag information added to the content and control the playback of the edited content. Such content editing using tag information will be described in detail in <3. Specific examples of content editing using tag information> below.
 The playback state control unit 142 transmits information related to the content playback control that it performs to the display control unit 143.
 The display control unit 143 controls the driving of the display unit 120 and visually displays various types of information processed in the information processing apparatus 10 on the display screen of the display unit 120 in any format, such as text, tables, graphs, and images. In the present embodiment, the display control unit 143 displays the content on the display screen of the display unit 120. Specifically, the display control unit 143 displays an image included in the moving image serving as the content on the display screen in accordance with the playback state control by the playback state control unit 142. The display control unit 143 also displays, on the display screen of the display unit 120, a point corresponding to the position information of the operating body acquired by the position information acquisition unit 141. For example, in the example shown in FIG. 2, the display control unit 143 displays the point 240 at the position on the display screen 210 corresponding to the position information of the operating body. Further, as shown in FIG. 2, the display control unit 143 may display the locus 250 of the position information of the operating body on the display screen 210.
 The tagging unit 144 tags content by adding the position information acquired by the position information acquisition unit 141 to the content as tag information. In the present embodiment, as described above, the tagging unit 144 uses, as tag information, the position information of the operating body in the first direction out of the position information acquired by the position information acquisition unit 141. Specifically, in the example shown in FIG. 2, the tagging unit 144 uses the position information of the operating body in the Y-axis direction as tag information. More specifically, the tagging unit 144 may convert the position information of the operating body in the Y-axis direction into a numerical value and use that value (for example, the Y-axis coordinate value) as tag information.
 As described above, the position information of the operating body is, for example, the coordinate values (X, Y) on the display screen 210. As explained in the description of the playback state control unit 142, of the position information of the operating body, the X-axis coordinate value corresponds, for example, to the playback position of the content, that is, the elapsed time during content playback. Accordingly, the position information of the operating body acquired by the position information acquisition unit 141 can be said to have a value (the Y-axis coordinate value) associated with the elapsed time during content playback. The tagging unit 144 can therefore tag the content by using the position information of the operating body as tag information.
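One way to realize such tagging is to key each Y-axis sample by its associated elapsed time. The following sketch assumes a simple in-memory representation; the class name `TagStore` and its methods are hypothetical, introduced only to illustrate the idea of tag information as time-indexed numerical values.

```python
# Illustrative sketch: tag information stored as {elapsed time -> value}.
# TagStore and its method names are assumed for this example; the
# embodiment only requires that tag values be associated with elapsed time.

class TagStore:
    def __init__(self):
        self.tags = {}  # elapsed time (s, rounded) -> numerical tag value

    def add_sample(self, elapsed_time, y_value):
        # The Y-axis position, already converted to a number, is used
        # directly as the tag value for this playback position.
        self.tags[round(elapsed_time, 2)] = y_value

    def tag_at(self, elapsed_time):
        # Returns the tag value for a playback position, or None if the
        # scene was never tagged.
        return self.tags.get(round(elapsed_time, 2))

store = TagStore()
store.add_sample(12.5, 0.8)   # a scene the user liked, at 12.5 s
store.add_sample(40.0, 0.2)   # a less appealing scene, at 40 s
```

In practice such a store would be serialized alongside the content data, which is consistent with the storage unit 130 holding "content data to which tag information has been added."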
 Here, referring again to FIG. 2, the tagging process according to the present embodiment will be described more specifically, together with a more detailed description of the functions of the position information acquisition unit 141, the playback state control unit 142, the display control unit 143, and the tagging unit 144 according to the present embodiment. FIG. 2 is an explanatory diagram for describing an example of the tagging process according to the present embodiment. The tagging process described with reference to FIG. 2 is one example of the tagging process according to the present embodiment, and other tagging processes may be performed in the present embodiment. Details of such other tagging processes will be described again in <2. Modifications of the tagging process> below.
 As shown in FIG. 2, when performing the tagging process, the user inputs position information by bringing the finger 230 into contact with the display screen 210. Here, the X axis corresponds to the elapsed time (playback position) during content playback. The Y-axis value may be, for example, an index indicating the user's degree of preference for the content. For example, with the finger 230 in contact with the display screen 210, the user moves the finger 230 from the left end to the right end of the display screen 210 to seek the playback position of the content, moving the finger 230 upward (in the direction in which the Y-axis coordinate value increases) in scenes the user likes, and downward (in the direction in which the Y-axis coordinate value decreases) in scenes the user finds less appealing. When position information is input in this way, the position information acquired by the position information acquisition unit 141 corresponds to the elapsed time during content playback, that is, it represents the user's degree of preference for each scene of the content. Accordingly, by adding this position information to the content as tag information, the tagging unit 144 can add tags representing the user's degree of preference for each scene to the content. In the following description, inputting position information that becomes tag information is also referred to as inputting tag information.
 Here, in the tagging process shown in FIG. 2, while position information is being input, the display control unit 143 displays on the display screen 210 the image of the scene corresponding to the position information of the finger 230 in the X-axis direction. The user can therefore input his or her degree of preference while referring to the image displayed on the display screen 210, that is, while referring to thumbnails of the moving image. Further, when the user stops the finger midway or removes the finger from the display screen 210 while inputting position information, the playback state control unit 142 may control the playback state so that playback of the content is paused at the scene corresponding to the X-axis coordinate value of that position and resumes when the user inputs position information again. Thus, in the tagging process according to the present embodiment, the input of position information need not necessarily be continuous and may be interrupted partway through. Because position information can be input while pausing playback of the moving image as necessary and while referring to its thumbnails, a degree of preference that better reflects the user's intention can be input for each scene of the moving image.
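The pause-on-lift behavior just described can be modeled as a small state machine. This is a hypothetical sketch; the event handler names (`on_touch_move`, `on_touch_up`) and the `PlaybackStateMachine` class are assumptions for illustration, not part of the disclosed apparatus.

```python
# Hypothetical sketch of the pause/resume control: playback pauses when
# the finger leaves the screen and resumes when position input returns.
# The event names follow common touch-event conventions and are assumed.

class PlaybackStateMachine:
    def __init__(self):
        self.playing = False
        self.position = 0.0  # elapsed time in seconds

    def on_touch_move(self, playback_time):
        # New position information arrived: seek there and (re)start playback.
        self.position = playback_time
        self.playing = True

    def on_touch_up(self):
        # The finger left the screen: pause at the scene corresponding to
        # the last X-axis coordinate, keeping the position for later resume.
        self.playing = False

sm = PlaybackStateMachine()
sm.on_touch_move(30.0)
sm.on_touch_up()        # paused at 30 s; tag input is interrupted
sm.on_touch_move(31.5)  # input resumed: playback continues from 31.5 s
```

Because the paused position is retained, the interruption of tag input described above does not lose the user's place in the content.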
 In the above, the case where the finger 230 is moved from the left end to the right end of the display screen 210, that is, where position information is input for all scenes of the content, was described; however, the present embodiment is not limited to this example. In the present embodiment, the position information acquisition unit 141 may acquire the position information of the operating body for a portion corresponding to an arbitrary time range within the content, and the tagging unit 144 may add tag information to the content for the portion corresponding to the time range for which position information was acquired. To input position information only for an arbitrary time range, the input of position information may be started at an arbitrary point on the X axis and ended at an arbitrary point. For example, the display screen 210 may be divided into a tag information input area and a playback position seek area, and the playback state control unit 142 and the tagging unit 144 may use only the position information acquired in the tag information input area as tag information, while the position information acquired in the playback position seek area is used not as tag information but to seek the playback position of the content. In the example shown in FIG. 2, of the display screen 210, the playback position seek area may be the region where the indicator 220 is displayed, and the tag information input area may be the region above the region where the indicator 220 is displayed. The user can first seek the playback position of the content to a desired position by moving the operating body on the indicator 220, and then input tag information by moving the operating body in the tag information input area.
 Here, in the tagging process described above, when the X axis is associated with the elapsed time during content playback, if the X axis is associated with the entire playback time of the content, the resolution along the X axis varies with the length of the content's playback time. For example, between a moving image with a playback time of 10 minutes and one with a playback time of 100 minutes, the content playback time associated with the same distance on the X axis differs by a factor of 10. Accordingly, when the playback time of the content is relatively long, the advance of the elapsed time during content playback (the seek amount) relative to the movement distance of the operating body in the X-axis direction becomes large, and it may be difficult to input fine position information for each scene. Therefore, in the present embodiment, the playback state control unit 142 may associate the span from one end to the other end of the display screen 210 in the X-axis direction with a portion corresponding to an arbitrary time range within the content, and play the content at the playback position corresponding to the position information of the operating body in the X-axis direction on the display screen 210. That is, the time range of the content that is sought while the operating body moves from the left end to the right end of the display screen 210 may be set arbitrarily. For example, when the content is a moving image with a playback time of 100 minutes, if the span from the left end to the right end of the display screen 210 is assigned to 10 minutes, the playback state control unit 142 divides the content into 10 parts for playback, and position information may be input for each part while moving the operating body from the left end to the right end of the display screen 210. By thus extracting a portion corresponding to an arbitrary time range within the content and performing the tagging process on it, a more detailed tagging process that reflects the user's intention becomes possible.
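The variable-resolution seek described above amounts to mapping the screen width onto a configurable time window rather than onto the whole clip. A sketch under assumed names (the function and its parameters are illustrative):

```python
# Sketch: assign the left-to-right span of the screen to an arbitrary
# time window [window_start, window_start + window_len] of the content,
# so that long content can be tagged piecewise at finer resolution.

def x_to_time(x, screen_width, window_start, window_len):
    """Map an X coordinate to a playback position inside the current window."""
    return window_start + (x / screen_width) * window_len

# A 100-minute (6000 s) video split into ten 10-minute (600 s) windows:
# while tagging the third window, the screen spans 1200 s .. 1800 s.
t = x_to_time(x=320, screen_width=640, window_start=1200, window_len=600)
# t == 1500.0; the seek resolution is 600/640 ≈ 0.94 s per pixel, versus
# 6000/640 ≈ 9.4 s per pixel when the whole video is mapped to the screen.
```

Shrinking `window_len` while keeping the screen width fixed is exactly what improves the per-pixel resolution of both seeking and tag input.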
 Furthermore, in the present embodiment, tag information may be overwritten. That is, the tagging unit 144 may tag the content based on the latest position information of the operating body. When tag information is overwritten, the position information need not be reacquired for the entire time range of the content; the position information acquisition unit 141 may reacquire position information only for a portion corresponding to an arbitrary time range within the content, and the tagging unit 144 may overwrite only the tag information of the portion corresponding to that time range. Accordingly, for example, the first time, the entire time range (total playback time) of the content is associated with the span from the left end to the right end of the display screen 210, and position information is input at low resolution. Then, for a portion the user is interested in, the time range corresponding to that portion is associated with the span from the left end to the right end of the display screen 210, and position information is input again at high resolution. By performing such a tagging process, after position information has been input coarsely over the entire time range of the content, fine position information can be input again for the portion corresponding to the time range of interest, enabling efficient tagging.
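Overwriting tags for only a re-entered time range can be sketched as replacing the stored samples whose timestamps fall inside that range. The data layout here (a flat `{time: value}` dict) is an illustrative assumption, not the patent's implementation.

```python
# Illustrative sketch: overwrite tag samples only inside a given time
# range, keeping the coarse first-pass tags everywhere else.

def overwrite_range(tags, new_samples, t_start, t_end):
    """tags / new_samples: {elapsed time (s): value} dictionaries.

    Old samples inside [t_start, t_end] are discarded and replaced by
    new_samples, which were acquired over that range at finer resolution.
    """
    kept = {t: v for t, v in tags.items() if not (t_start <= t <= t_end)}
    kept.update(new_samples)
    return kept

coarse = {0: 0.5, 60: 0.6, 120: 0.4, 180: 0.7}   # first, low-resolution pass
fine = {60: 0.9, 75: 0.8, 90: 0.85, 105: 0.7}    # second pass over 60-120 s
tags = overwrite_range(coarse, fine, t_start=60, t_end=120)
# tags keeps 0 s and 180 s from the coarse pass and the finer 60-120 s samples
```

Note that the coarse sample at 120 s is dropped along with the rest of the re-tagged range, so the latest input always wins inside the window, as the overwriting rule above requires.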
 In the present embodiment, the playback state of the content need not be controlled by the position information of the finger 230 in the X-axis direction. For example, regardless of the position information of the finger 230 in the X-axis direction, the content may be played at a predetermined speed with its content displayed on the display screen 210, and the tagging process may be performed by associating the playback position of the content being played and displayed on the display screen 210 with the input position information of the finger 230 in the Y-axis direction. With such a tagging process, the user does not need to pay attention to the position information of the finger 230 in the X-axis direction, and can input position information by moving the finger 230 in the Y-axis direction while watching the content played at normal speed. Accordingly, the user can input position information while enjoying the content, with the feel of attaching a sticky note to a favorite scene, and a tagging process that is more convenient for the user becomes possible.
The schematic configuration of the information processing apparatus according to an embodiment of the present disclosure has been described above with reference to FIG. 1, and an example of the tagging process according to the present embodiment has been described with reference to FIG. 2. As described above, in the present embodiment, the position information acquisition unit 141 acquires position information of the operating body associated with the elapsed time during content playback, and the tagging unit 144 attaches that position information to the content as tag information, whereby the content is tagged. Since tag information can thus be attached to content through the user's input of the operating body's position, for example by moving a finger on the display screen of a touch panel, tagging with a higher degree of freedom becomes possible.
Also, in the present embodiment, of the acquired position information of the operating body, the position information in a first direction is used as tag information by the tagging unit 144, while the playback state of the content is controlled by the playback state control unit 142 according to the position information in a second direction different from the first direction. The user can therefore input tag information while controlling the playback state of the content, for example while seeking the playback position of the content. Tag information may also be input for only a part of the content, and may be overwritten. Furthermore, the resolution of the position information in the second direction assigned to seeking the playback position of the content may be changed. The user can thus input tag information more efficiently, for example by seeking to an arbitrary playback position and then inputting tag information only for an arbitrary portion, or by inputting tag information multiple times while varying the resolution.
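The tagging mechanism summarized above can be illustrated with a minimal sketch. All names here are hypothetical (the disclosure specifies behavior, not an implementation): tag information is modeled as a mapping from elapsed playback time to the operating body's Y-axis position, and a re-entered position for a given instant overwrites the earlier one, so only the re-tagged sub-range changes.

```python
class TagRecorder:
    """Records (elapsed time -> Y position) pairs as tag information."""

    def __init__(self):
        self.tags = {}  # elapsed playback time (s) -> Y position (preference)

    def record(self, elapsed_time, y_position):
        # The latest position for a given instant overwrites any earlier
        # value, so re-entering positions over a sub-range updates only
        # that portion of the tag information.
        self.tags[round(elapsed_time, 1)] = y_position

    def tag_at(self, elapsed_time):
        return self.tags.get(round(elapsed_time, 1))


recorder = TagRecorder()
recorder.record(12.0, 0.3)  # coarse first pass over the whole content
recorder.record(12.0, 0.8)  # finer second pass overwrites the same instant
```

After the second pass, `recorder.tag_at(12.0)` returns the overwritten value, while instants that were never tagged return `None`.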
<2. Modifications of the tagging process>
 Next, modifications of the tagging process in the present embodiment will be described with reference to FIG. 3, FIGS. 4A to 4C, and FIG. 5. In the tagging process described with reference to FIG. 2 in <1. Configuration of the information processing apparatus>, the playback state control unit 142 associated the position information of the operating body in the X-axis direction of the display screen 210 with the elapsed time during content playback, and performed playback control that plays the content at the playback position corresponding to that X-axis position information. In the tagging processes according to the present modifications, by contrast, the playback state control unit 142 may perform other playback control on the content according to the position information of the operating body in the X-axis direction. For example, the playback state control unit 142 may change the playback speed of the content based on the operating body's X-axis position, or may fast-forward or rewind the content based on that position. These two types of tagging process, among the modifications of the tagging process according to the present embodiment, are described in detail below.
[2-1. Control of the playback speed]
 First, among the modifications of the tagging process according to the present embodiment, a tagging process in which the playback speed of the content is controlled will be described in detail with reference to FIG. 3. FIG. 3 is an explanatory diagram for describing this modification, in which the playback speed of the content is controlled. The display screen 210, the indicator 220, the finger 230, the point 240, and the locus 250 in FIG. 3 and in FIG. 5 described later are the same as those shown in FIG. 2, and detailed description thereof is therefore omitted.
In the tagging process shown in FIG. 3, the position information of the operating body in the X-axis direction of the display screen 210 is associated with the playback speed of the content, and the playback state control unit 142 changes the playback speed of the content based on the operating body's X-axis position. For example, in the tagging process according to this modification, the left edge of the display screen 210 in the X-axis direction corresponds to the normal playback speed, and the playback speed increases toward the right edge, that is, as the X-axis coordinate value increases. The playback state control unit 142 can play the content at the playback speed corresponding to the position of the point 240 in the X-axis direction.
Also in this modification, the position information in the Y-axis direction may be an index indicating the user's degree of preference for the content. For example, with the finger 230 in contact with the display screen 210, the user adjusts the playback speed of the content by the position of the finger 230 in the X-axis direction while moving the finger 230 upward (the direction in which the Y-axis coordinate value increases) in a favorite scene and downward (the direction in which the Y-axis coordinate value decreases) in a scene found less appealing. When the user performs such a position-input operation over the entire time range of the content or over a portion corresponding to an arbitrary time range, the position information acquisition unit 141 acquires position information representing the user's degree of preference for each scene corresponding to the elapsed playback time, and the tagging unit 144 attaches that position information to the content as tag information.
While the user is inputting position information with the finger 230, the display control unit 143 displays the images in the moving image on the display screen 210 at a rate corresponding to the playback speed controlled by the playback state control unit 142. The user can therefore input his or her degree of preference while referring to the images (thumbnails of the moving image) displayed on the display screen 210.
Here, in this modification, the position information of the operating body in the X-axis direction is associated with the playback speed of the content, but the association need not be a proportional one between the X-axis coordinate value and the playback speed. Such associations between the operating body's X-axis position and the content playback speed in this modification will be described with reference to FIGS. 4A to 4C, which are explanatory diagrams illustrating these associations for the modification of the tagging process shown in FIG. 3.
In FIGS. 4A to 4C, the horizontal axis indicates the X-axis coordinate value on the display screen 210, and the vertical axis indicates the playback speed of the content. The curves shown in FIGS. 4A to 4C therefore represent relationships between the position information of the operating body in the X-axis direction and the playback speed of the content.
In this modification, as indicated by the straight line A in FIG. 4A, the X-axis coordinate value and the playback speed may be in a proportional relationship, with the playback speed increasing at a constant rate as the X-axis coordinate value increases. Alternatively, as indicated by the curve B in FIG. 4B, the relationship may be a downward-convex curve in which the playback speed increases gradually up to a certain value on the X axis and then increases rapidly. Further, as indicated by the curve C in FIG. 4C, the relationship may be an upward-convex curve in which the playback speed increases rapidly up to a certain value on the X axis and then increases gradually.
As described above, various correspondences such as those shown in FIGS. 4A to 4C may be used to associate the operating body's X-axis position with the content playback speed in this modification. The association is not limited to the relationships shown in FIGS. 4A to 4C, and any other correspondence may be used. For example, the user may be allowed to input an arbitrary relationship between the X-axis coordinate value and the playback speed by moving the operating body on the display screen 210.
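The three correspondences of FIGS. 4A to 4C can be sketched as simple mapping functions. This is an illustrative sketch, not part of the disclosure; the speed range of 1x to 4x and the exponents used for the curves are assumed values.

```python
def speed_linear(x, x_max=1.0, v_min=1.0, v_max=4.0):
    """Line A (FIG. 4A): playback speed proportional to the X coordinate."""
    return v_min + (v_max - v_min) * (x / x_max)


def speed_slow_then_fast(x, x_max=1.0, v_min=1.0, v_max=4.0):
    """Curve B (FIG. 4B): downward-convex, gradual then rapid increase."""
    return v_min + (v_max - v_min) * (x / x_max) ** 2


def speed_fast_then_slow(x, x_max=1.0, v_min=1.0, v_max=4.0):
    """Curve C (FIG. 4C): upward-convex, rapid then gradual increase."""
    return v_min + (v_max - v_min) * (x / x_max) ** 0.5
```

At the left edge (x = 0) all three mappings give the normal speed and at the right edge (x = x_max) the maximum speed; in between, curve B stays below the straight line A and curve C stays above it.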
As described above with reference to FIG. 3 and FIGS. 4A to 4C, in the tagging process according to this modification the operating body's X-axis position is associated with the playback speed of the content, and the user can input the operating body's position information while controlling the playback speed. For example, while referring to the thumbnails of the moving image displayed on the display screen 210, the user can increase the playback speed and move the finger 230 downward (that is, input a low degree of preference) in scenes found less appealing, and slow the playback speed and move the finger 230 up and down to input a fine-grained degree of preference in favorite scenes. In this way, this modification realizes a tagging process that is more convenient for the user.
[2-2. Control of fast-forward and rewind]
 Next, among the modifications of the tagging process according to the present embodiment, a tagging process in which fast-forwarding and rewinding of the content are controlled will be described in detail with reference to FIG. 5. FIG. 5 is an explanatory diagram for describing this modification, in which fast-forwarding or rewinding of the content is controlled.
In the tagging process shown in FIG. 5, the position information of the operating body in the X-axis direction of the display screen 210 is associated with fast-forward and rewind control of the content, and the playback state control unit 142 may fast-forward or rewind the content based on the operating body's X-axis position. In the example shown in FIG. 5, the playback state control unit 142 takes the approximate midpoint of the display screen 210 in the X-axis direction as a reference point, fast-forwards the content when the operating body's position is acquired to the right of the reference point, and rewinds the content when the position is acquired to the left of the reference point. When the playback state control unit 142 fast-forwards or rewinds the content, the fast-forward or rewind speed may also be controlled according to the operating body's X-axis position. For example, in the example shown in FIG. 5, the fast-forward speed may increase as the X-axis value of the operating body's position moves toward the right side of the display screen 210, and the rewind speed may increase as it moves toward the left side.
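The midpoint-based transport control of the FIG. 5 example can be sketched as a function mapping an X coordinate to a signed rate multiplier. The dead-zone width and the maximum 8x rate are assumed values for illustration, not specified in the disclosure.

```python
def transport_rate(x, screen_width, max_rate=8.0, dead_zone=0.05):
    """Positive return values fast-forward, negative values rewind; the
    magnitude grows toward the screen edges, as in the FIG. 5 example."""
    center = screen_width / 2.0
    offset = (x - center) / center  # -1.0 at the left edge, +1.0 at the right
    if abs(offset) < dead_zone:     # small neutral band around the midpoint
        return 0.0
    return offset * max_rate
```

For a 1000-pixel-wide screen, a touch at the midpoint yields rate 0.0 (normal transport is left to the playback controller), the right edge yields +8.0 (fastest fast-forward), and the left edge yields -8.0 (fastest rewind).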
Also in this modification, the position information in the Y-axis direction may be an index indicating the user's degree of preference for the content. For example, with the finger 230 in contact with the display screen 210, the user rewinds or fast-forwards the content by the position of the finger 230 in the X-axis direction while moving the finger 230 upward (the direction in which the Y-axis coordinate value increases) in a favorite scene and downward (the direction in which the Y-axis coordinate value decreases) in a scene found less appealing. When the user performs such a position-input operation over the entire time range of the content or over a portion corresponding to an arbitrary time range, the position information acquisition unit 141 acquires position information representing the user's degree of preference for each scene corresponding to the elapsed playback time, and the tagging unit 144 attaches that position information to the content as tag information.
While the user is inputting position information with the finger 230, the display control unit 143 displays on the display screen 210 the images in the moving image corresponding to the fast-forward or rewind control performed by the playback state control unit 142. The user can therefore input his or her degree of preference while referring to the images (thumbnails of the moving image) displayed on the display screen 210.
As described above with reference to FIG. 5, in the tagging process according to this modification the operating body's X-axis position is associated with fast-forward and rewind control of the content, and the user can input position information while fast-forwarding or rewinding the content; the fast-forward or rewind speed can also be controlled. For example, while referring to the image data displayed on the display screen 210, the user can fast-forward through scenes found less appealing while moving the finger 230 downward (that is, inputting a low degree of preference), and in favorite scenes return the playback speed to normal and move the finger 230 up and down to input a fine-grained degree of preference. When the user wishes to overwrite tag information, the content can be rewound to a desired playback position and the position information re-input. In this way, this modification realizes a tagging process that is more convenient for the user.
Modifications of the tagging process in the present embodiment have been described above with reference to FIG. 3, FIGS. 4A to 4C, and FIG. 5. In the tagging processes according to the modifications described above, as in the tagging process described with reference to FIG. 2, partial tagging may be performed by inputting position information only for a portion corresponding to an arbitrary time range of the content, and tag information may be overwritten by re-inputting the position information of the operating body.
As described above, in these modifications the playback speed of the content, or its fast-forward and rewind operation, is controlled based on the position information of the operating body in the X-axis direction of the display screen 210. The user can therefore input tag information while changing the playback speed to a desired speed, or while moving to a desired playback position by fast-forwarding or rewinding, realizing a tagging process that is more convenient for the user. Note that the tagging process according to the present embodiment is not limited to those described above, and other tagging processes in which the operating body's X-axis position is associated with other playback control may be performed.
<3. Specific examples of content editing using tag information>
 Next, content editing processing using tag information by the playback state control unit 142 will be described in detail. As described above, the playback state control unit 142 can edit content based on the tag information attached to the content and control playback of the edited content. Specific examples of content editing using tag information according to the present embodiment will be described in detail with reference to FIGS. 6, 7, 8, and 9.
[3-1. Creation of content with a predetermined playback time]
 The playback state control unit 142 according to the present embodiment can extract a portion of content based on tag information. First, a process of extracting a portion of content based on tag information to create content with a predetermined playback time will be described with reference to FIGS. 6 and 7. FIG. 6 is an explanatory diagram for describing the process of creating content with a predetermined playback time, and FIG. 7 is an explanatory diagram for describing a smoothing process applied when a portion of content is extracted.
In FIG. 6, the horizontal axis (x axis) indicates the elapsed time during playback of the content, and the vertical axis (y axis) indicates the Y-axis coordinate value on the display screen 210 of the operating body's position information input in the tagging process. The curve 310 shown in FIG. 6 can therefore be regarded as tag information in which the elapsed playback time and the operating body's position information (position information in the Y-axis direction) are associated; in the following description, the curve 310 is also referred to as tag information 310. The horizontal and vertical axes in FIGS. 8 and 9, described later, have the same meanings as those in FIG. 6, so in the following description the curves 410 and 420 shown in FIGS. 8 and 9 are likewise referred to as tag information 410 and 420. In the following description of FIGS. 6, 7, 8, and 9, the value on the vertical axis is assumed, as an example of the operating body's position information, to be an index representing the user's degree of preference.
FIG. 6 shows how content with a predetermined playback time is created by extracting from the content the ranges of the tag information 310 in which the vertical-axis value is equal to or greater than a predetermined threshold. Since the vertical-axis value represents the user's degree of preference for each scene of the content, such processing can create digest-version moving image data in which only the portions with a high degree of preference, that is, the portions the user is interested in, are extracted. In this way, the playback state control unit 142 associates the operating body's Y-axis position information in the tag information with a degree of preference (score) assigned to the Y-axis coordinate value, and can extract the portions of the content whose degree of preference is equal to or greater than a predetermined threshold.
In the example shown in FIG. 6, thresholds for creating digest-version moving image data, namely a 5-minute digest threshold, a 10-minute digest threshold, and a 20-minute digest threshold, are schematically illustrated on the tag information 310. For example, to create a 5-minute digest version of the moving image data, the portions corresponding to the time ranges in which the vertical-axis value is equal to or greater than the 5-minute digest threshold are extracted from the content. Specifically, in the example shown in FIG. 6, the portions corresponding to the elapsed playback times T11 to T12 and T17 to the end of the moving image are extracted from the content, and these pieces of moving image data are joined together to create the 5-minute digest version.
Similarly, to create a 10-minute digest version of the moving image data, the portions corresponding to the time ranges in which the vertical-axis value is equal to or greater than the 10-minute digest threshold, that is, the elapsed playback times T2 to T3, T6 to T7, T10 to T13, and T16 to the end of the moving image, are extracted from the content and joined together to create the 10-minute digest version. Likewise, to create a 20-minute digest version, the portions corresponding to the time ranges in which the vertical-axis value is equal to or greater than the 20-minute digest threshold, that is, the elapsed playback times T1 to T4, T5 to T8, T9 to T14, and T15 to the end of the moving image, are extracted from the content and joined together to create the 20-minute digest version.
In this way, by adjusting the threshold on the degree of preference, the playback state control unit 142 can extract portions of the content in descending order of preference so that the total playback time equals a predetermined playback time.
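The threshold-based digest extraction can be sketched as follows. The sampling format (a time-ordered list of (time, score) pairs) and the simple threshold search are illustrative assumptions; the disclosure only specifies that the threshold is adjusted so that the extracted portions total the desired playback time.

```python
def extract_ranges(tags, threshold):
    """Return (start, end) time ranges where the preference score is at or
    above `threshold`. `tags` is a time-ordered list of (time, score)."""
    ranges, start = [], None
    for t, score in tags:
        if score >= threshold and start is None:
            start = t                     # a qualifying range begins
        elif score < threshold and start is not None:
            ranges.append((start, t))     # the range ends here
            start = None
    if start is not None:                 # still above threshold at the end
        ranges.append((start, tags[-1][0]))
    return ranges


def digest_threshold(tags, target_seconds):
    """Raise the threshold until the total extracted time fits the target,
    so the highest-preference portions fill the predetermined duration."""
    for threshold in sorted({score for _, score in tags}):
        total = sum(end - start for start, end in extract_ranges(tags, threshold))
        if total <= target_seconds:
            return threshold
    return max(score for _, score in tags)
```

For example, with samples `[(0, 1), (10, 3), (20, 5), (30, 2), (40, 5), (50, 1)]`, a threshold of 3 extracts the ranges (10, 30) and (40, 50), totaling 30 seconds; requesting a 25-second digest forces the threshold up to 5.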
In the method shown in FIG. 6, portions corresponding to a plurality of non-contiguous time ranges are extracted from the content and joined together to create the digest-version moving image data of a predetermined playback time. When the extracted portions are joined, the moving image may therefore become discontinuous at the joints. To address this, the present embodiment can perform a smoothing process based, for example, on the content of the moving image.
This smoothing process at the time of creating content with a predetermined playback time will be described with reference to FIG. 7. The curve 320 shown in FIG. 7 is an extract of a part of the tag information 310 shown in FIG. 6. Referring to FIG. 7, when a digest version of the moving image is to be created, the portion corresponding to the extraction range TA, that is, the time range at or above the threshold, would normally be extracted, as described with reference to FIG. 6. When performing the smoothing process, however, the playback state control unit 142 may extract the portion of an extraction range TB that is wider than the range indicated by TA, and join it with the other extracted portions before and after it to create the digest version.
 ここで、抽出範囲Tは、コンテンツデータに含まれる画像データや音声データ等に基づいて決定されてよい。例えば、動画データに含まれる画像データにおいてピクセル情報が大きく変化した場合には、画面内における明度、色彩等が大きく変化しており、動画内においてカメラの撮影方向が変化していたり場面転換が行われたりしている可能性が高い。また、音声データにおいて、音声入力レベル(音量)が著しく変化したり、音声方向が変化したりしている場合にも、動画内において場面転換が行われている可能性が高い。従って、再生状態制御部142は、コンテンツデータに含まれる、画像データにおけるピクセル情報や、音声データにおける音声入力レベル及び音声方向等の変化量が比較的大きい点を境界とすることにより、抽出範囲Tを設定してもよい。また、コンテンツデータに、本実施形態に係るタグ情報とは異なる他のタグが付与されている場合には、このような他のタグに基づいて抽出範囲Tが設定されてもよい。ここで、他のタグとは、例えば、コンテンツ提供者によって設定される、コンテンツである動画に含まれる各シーンのイベント発生時刻やイベントの種類、内容等についての情報を含むメタデータである。当該メタデータは、例えばコンテンツがテレビ用に配信されるプログラムである場合には、いわゆるコマーシャル(CM)が開始又は終了するタイミングや、特定の出演者が登場又は退場するタイミング、又はシーンが切り替わるタイミング等についての情報であってよい。従って、再生状態制御部142は、当該メタデータに基づいて、場面転換を示すタイミングを境界として、コンテンツデータの抽出範囲Tを設定してもよい。このように抽出範囲Tを設定することにより、抽出範囲Tの直前及び直後では、動画内において場面転換等が行われている可能性が高くなる。従って、これらの抽出範囲に対応する部分をつなぎ合わせることにより、作成されるダイジェスト版の動画においては、つなぎ目における不連続性が緩和され、よりユーザにとって自然な動画が作成される。なお、抽出範囲Tを設定するためにどのような情報を用いるかは、ユーザによって適宜設定可能であってよい。 Here, the extraction range T B may be determined based on image data and audio data included in the content data. For example, when the pixel information changes greatly in the image data included in the moving image data, the brightness, color, etc. in the screen change greatly, and the shooting direction of the camera changes or the scene changes within the moving image. There is a high possibility of being broken. In addition, in audio data, when the audio input level (volume) changes significantly or the audio direction changes, there is a high possibility that a scene change is performed in the moving image. Accordingly, the playback state control unit 142 uses the extraction range T2 as a boundary by using a point where the amount of change in the pixel information in the image data, the audio input level and the audio direction in the audio data, and the like included in the content data is relatively large. B may be set. 
Further, when the content data has been given other tags different from the tag information according to the present embodiment, the extraction range T_B may be set based on such other tags. Here, the other tags are, for example, metadata set by the content provider that includes information about the event occurrence time, event type, and content of each scene included in the moving image serving as the content. When the content is a program distributed for television, for example, the metadata may be information about the timing at which a so-called commercial (CM) starts or ends, the timing at which a specific performer appears or leaves, or the timing at which a scene changes. Accordingly, the playback state control unit 142 may set the extraction range T_B of the content data based on the metadata, using timings indicating scene changes as boundaries. By setting the extraction range T_B in this way, it becomes highly likely that a scene change or the like occurs in the moving image immediately before and after the extraction range T_B. Therefore, by joining the portions corresponding to these extraction ranges, the discontinuity at the joints in the created digest version is reduced, and a moving image that is more natural for the user is created. Note that which information is used to set the extraction range T_B may be configurable by the user as appropriate.
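As one illustrative sketch of the idea above (not taken from the patent itself; all function names and the frame-difference criterion are assumptions), a scene-change boundary can be treated as a frame whose pixel statistics jump relative to the previous frame, and the extraction range T_A can then be widened outward to the nearest such boundaries to obtain T_B:

```python
# Minimal sketch (illustrative assumptions throughout): widening an
# extraction range [start, end] outward to the nearest scene-change
# boundaries, where a boundary is a frame whose mean pixel value
# changes by a comparatively large amount versus the previous frame.

def scene_boundaries(frame_means, min_change):
    """Indices where the mean pixel value jumps by at least min_change."""
    return [i for i in range(1, len(frame_means))
            if abs(frame_means[i] - frame_means[i - 1]) >= min_change]

def widen_range(t_a, boundaries, n_frames):
    """Widen range T_A to a range T_B delimited by surrounding boundaries."""
    start, end = t_a
    before = [b for b in boundaries if b <= start]
    after = [b for b in boundaries if b >= end]
    t_b_start = max(before) if before else 0
    t_b_end = min(after) if after else n_frames - 1
    return (t_b_start, t_b_end)

# Synthetic per-frame mean brightness with scene cuts at frames 3 and 8.
means = [10, 11, 10, 80, 81, 80, 82, 81, 20, 21, 22, 20]
cuts = scene_boundaries(means, min_change=30)   # [3, 8]
t_b = widen_range((4, 6), cuts, len(means))     # (3, 8)
```

In a real implementation the boundary criterion could equally be based on audio level or direction, or on provider metadata, as the text describes.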
[3-2. Sharing of tag information]
Next, content editing processing based on a plurality of mutually different pieces of tag information will be described with reference to FIGS. 8 and 9. FIG. 8 is an explanatory diagram for describing a plurality of mutually different pieces of tag information. FIG. 9 is an explanatory diagram for describing a process of creating content of a predetermined playback time based on tag information for a plurality of mutually different moving images.
Referring to FIG. 8, two mutually different pieces of tag information 310 and 410 are illustrated. The tag information 310 is the same as the tag information 310 shown in FIG. 6, and was created based on the position information of the operating tool input by the user. The tag information 410, on the other hand, is tag information created for the same moving image based on the position information of the operating tool input by another user, for example. In this way, in the present embodiment, a plurality of pieces of tag information created by a plurality of different users may be shared among users.
For example, a user can upload the tag information he or she has created for a certain moving image to a server on the cloud and make it available to other users. The user can also browse tag information created for that moving image by other users and uploaded to the server. Here, the range of users who can share tag information may be set arbitrarily. For example, it may be the set of users belonging to the same SNS (Social Networking Service), or an arbitrary set of users defined within that SNS (for example, the users belonging to a so-called "friends" list). By sharing tag information created by different users in this way, a user can easily compare his or her own degree of preference for a given moving image with that of others.
Tag information created by another user may be uploaded to the server on the cloud when the tagging process is completed, or the tag information may be uploaded while the tagging process is still in progress and updated in real time. Since tag information from tagging processes that are still in progress can be mutually viewed by a plurality of users, a user can perform his or her own tagging process while referring to the tag information of a plurality of other users, that is, while referring to social tag information.
Tag information uploaded to the server on the cloud by other users may be stored for a predetermined period, and a plurality of mutually different pieces of tag information for the same moving image may be accumulated on the server as needed. When browsing tag information uploaded by other users, the user may be able to sort and display it in a desired order, for example by the period in which the tag information was registered (daily, weekly, and so on), or in descending order of the degree of preference.
In the present embodiment, the content editing process by the playback state control unit 142 may thus be performed using a plurality of mutually different pieces of tag information. For example, the playback state control unit 142 can perform the process of extracting parts of the content, as described above in [3-1. Creation of content of a predetermined playback time], based on a plurality of mutually different pieces of tag information. At that time, by using tag information created by other users as the plurality of mutually different pieces of tag information, the playback state control unit 142 can create a digest version of the moving image that reflects a social degree of preference. Furthermore, social tag information satisfying specific conditions may be used to create the digest version, such as tag information created during a desired period or tag information created by desired users. When the playback state control unit 142 edits the content using a plurality of mutually different pieces of tag information, a simple sum of the degrees of preference in those pieces of tag information may be used, or an average value, a median value, or the like may be used.
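The aggregation described above (sum, average, or median of several users' preference curves) can be sketched as follows; this is an illustrative example, not the patent's implementation, and the function name and data shapes are assumptions:

```python
# Minimal sketch: combining the per-time preference scores of several
# users into one "social" curve using a simple sum, mean, or median.
# statistics is part of the Python standard library.
import statistics

def combine_preferences(curves, method="mean"):
    """curves: list of equal-length per-time preference score lists."""
    combiners = {"sum": sum, "mean": statistics.mean,
                 "median": statistics.median}
    combine = combiners[method]
    return [combine(scores) for scores in zip(*curves)]

user_a = [0.2, 0.8, 0.9, 0.1]
user_b = [0.4, 0.6, 0.7, 0.3]
user_c = [0.0, 1.0, 0.8, 0.2]
social = combine_preferences([user_a, user_b, user_c], method="median")
# social == [0.2, 0.8, 0.8, 0.2]
```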
It is also possible, based on tag information for a plurality of mutually different moving images, to view those moving images in a manner similar to zapping between television programs. For example, FIG. 9 shows a process of creating content of a predetermined playback time based on tag information for a plurality of mutually different moving images.
Referring to FIG. 9, tag information 410 assigned to "Movie A" and tag information 420 assigned to "Movie B" are illustrated. The tag information 410 and the tag information 420 are, for example, social tag information into which tag information created by a plurality of other users has been integrated. For example, when the user wishes to view "Movie A" and "Movie B" in digest form within a predetermined time, for example 5 minutes, threshold values may be set in the tag information 410 and the tag information 420 such that the total extraction range for "Movie A" and "Movie B" amounts to 5 minutes, and the portions corresponding to the time ranges having a degree of preference equal to or greater than the respective threshold values may be extracted from each moving image, whereby a 5-minute digest version is created. In the example shown in FIG. 9, the portions corresponding to the elapsed playback times T_18 to T_19 and T_20 to T_21 are extracted from "Movie A", and the portion corresponding to the elapsed playback times T_22 to T_23 is extracted from "Movie B", so that a 5-minute digest version is created as a whole.
Note that the threshold values set in the tag information 410 and the tag information 420 may be set so that the degrees of preference of both are treated equally and a total of 5 minutes of moving images is simply extracted in descending order of the degree of preference, or they may be set by specifying a fixed ratio between the extraction range from "Movie A" and the extraction range from "Movie B". For example, when a total of 5 minutes of moving images is to be extracted from "Movie A" and "Movie B", the user may specify that 3 minutes be extracted from "Movie A" and 2 minutes from "Movie B", and the threshold values may be set so as to satisfy that condition. In the example shown in FIG. 9, the social tag information 410 and 420 is used as the tag information, but the present embodiment is not limited to this example, and the same processing can of course be performed based on tag information created by the user himself or herself.
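A simple way to realize the threshold setting described above, sketched here under stated assumptions (one preference score per second of playback; names are illustrative, not from the patent), is to lower the threshold step by step until the extracted duration reaches the target:

```python
# Minimal sketch: choosing a threshold for one preference curve so that
# the total duration of the time ranges whose score is at or above the
# threshold reaches a target duration. Each score covers one second.

def duration_above(scores, threshold):
    return sum(1 for s in scores if s >= threshold)

def threshold_for_duration(scores, target_seconds):
    """Highest threshold that still yields at least target_seconds."""
    for t in sorted(set(scores), reverse=True):
        if duration_above(scores, t) >= target_seconds:
            return t
    return min(scores)

scores = [1, 3, 5, 5, 4, 2, 1, 5, 4, 1]    # per-second preference
t = threshold_for_duration(scores, 5)        # t == 4
# duration_above(scores, 4) == 5 seconds extracted
```

Running the same procedure on two movies with per-movie targets (for example 3 and 2 minutes) gives the ratio-specified variant mentioned in the text.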
As described above, in the present embodiment, it is possible to create, for an arbitrary plurality of mutually different moving images, a digest version that fits within an arbitrary playback time and is composed of the portions with the highest degrees of preference. A way of enjoying content that is more convenient for the user is therefore provided.
Specific examples of the content editing process using the tag information according to the present embodiment have been described above in detail with reference to FIGS. 6, 7, 8 and 9. As described above, in the present embodiment, for example, the playback state control unit 142 extracts parts of the content based on the tag information and creates a digest version of the moving image of a predetermined duration. In the process of extracting parts of the content, a threshold value may be provided for the score associated with the position information of the operating tool in the Y-axis direction, and the portions corresponding to the time ranges in which the score is equal to or greater than the threshold value may be extracted. Accordingly, when the user's degree of preference is reflected in the position information of the operating tool in the Y-axis direction, it is possible to create a digest version in which parts of the content are extracted in descending order of the degree of preference. The above process of creating a digest version may also be performed based on a plurality of pieces of tag information created by a plurality of other users. It is therefore also possible to edit and view content based on a social degree of preference that reflects the preferences of other users. In this way, in the present embodiment, by performing the content editing process using the tag information, various ways of viewing content are provided to the user, and a way of enjoying content that is highly convenient for the user is realized.
In the above, the content editing process based on a plurality of pieces of tag information according to the present embodiment has been described, but the content editing process according to the present embodiment is not limited to such an example. For example, the content editing process may be performed using both the tag information according to the present embodiment and other tags different from that tag information. Here, the other tags may be, for example, metadata set by the content provider that includes information about the event occurrence time, event type, and content of each scene included in the moving image serving as the content. When the content is a program distributed for television, for example, the metadata may be information about the timing at which a so-called commercial (CM) starts or ends, the timing at which a specific performer appears or leaves, or the timing at which a scene changes. In the present embodiment, for example, in addition to the tag information according to the present embodiment, a degree of preference can be set for the content using such other tags. When a degree of preference set by a method different from the tag information according to the present embodiment has been assigned to the content in this way, the content editing process may be performed using both the tag information according to the present embodiment and the degree of preference set by that different method. For example, by using metadata such as that described above, a content editing process more closely aligned with the user's preferences is realized, such as cutting the CM portions or extracting only the scenes in which a favorite performer appears.
<4. Processing procedure of the tagging method>
Next, the processing procedure of the tagging method according to the present embodiment will be described with reference to FIG. 10. FIG. 10 is a flowchart showing the processing procedure of the tagging method according to the present embodiment. In the following description, the case of performing the tagging process shown in FIG. 2 will be taken as an example. The functions of the storage unit 130, the position information acquisition unit 141, the playback state control unit 142, and the tagging unit 144 are described above in <1. Configuration of the information processing apparatus>, so detailed description thereof is omitted.
Referring to FIG. 10, in the tagging method according to the present embodiment, first, the length (playback time) of the moving image to be displayed at one time on the display screen 210 in the tagging process is set (step S501). This corresponds to the process, described above in <1. Configuration of the information processing apparatus>, in which the playback state control unit 142 associates the span from one end to the other end of the display screen 210 in the X-axis direction with a portion corresponding to an arbitrary time range within the content. By associating a part of the content with the span from one end to the other end of the display screen 210 in the X-axis direction in this way, the tagging process can be performed at a higher resolution.
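The association of the X-axis span with a time range amounts to a linear mapping from screen position to playback time; a minimal sketch, with illustrative names and parameters not taken from the patent, might look like this:

```python
# Minimal sketch (illustrative assumption): mapping an X coordinate on
# the display screen to an elapsed playback time within the currently
# displayed portion of the content, as set in step S501.

def x_to_elapsed(x, screen_width, window_start, window_length):
    """Linear map from screen X position to playback time in seconds."""
    return window_start + (x / screen_width) * window_length

# A 1000 px wide screen showing a 60 s window starting at 120 s:
t = x_to_elapsed(500, screen_width=1000, window_start=120.0,
                 window_length=60.0)
# t == 150.0
```

A shorter window length over the same screen width gives more pixels per second, which is the "higher resolution" the text refers to.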
Next, the tagging start position is set (step S503). This corresponds to the process, described above in <1. Configuration of the information processing apparatus>, in which the position information acquisition unit 141 acquires the position information of the operating tool for a portion corresponding to an arbitrary time range within the content. In this way, the user can seek the playback position of the content to a desired position by moving the operating tool on the indicator 220 displayed in the playback position seek area of the display screen 210, and input tag information only for an arbitrary portion starting from that playback position.
Next, the tagging process is performed (step S505). That is, by moving the operating tool in the tag information input area of the display screen 210, tag information is input and assigned to the content. Specifically, the position information acquisition unit 141 acquires the position information of the operating tool associated with the elapsed time during content playback, and the tagging unit 144 assigns that position information to the content as tag information, whereby the content is tagged.
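The core of step S505, recording the operating tool's position together with the elapsed playback time, can be sketched as follows; the class and method names are illustrative assumptions, not the patent's actual interfaces:

```python
# Minimal sketch: accumulating (elapsed time, Y position) pairs while
# the operating tool moves during playback, yielding tag information
# that maps elapsed time to a preference score.

class Tagger:
    def __init__(self):
        self.tag_info = []          # list of (elapsed_seconds, y_position)

    def on_touch_move(self, elapsed_seconds, y_position):
        """Called whenever the operating tool moves during playback."""
        self.tag_info.append((elapsed_seconds, y_position))

tagger = Tagger()
for t, y in [(0.0, 0.1), (0.5, 0.4), (1.0, 0.9)]:   # simulated drag
    tagger.on_touch_move(t, y)
# tagger.tag_info == [(0.0, 0.1), (0.5, 0.4), (1.0, 0.9)]
```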
When the tagging process in step S505 ends, the content data after the tagging process is stored in the storage unit 130 (step S507). The end of the tagging process may be determined, for example, by a predetermined time elapsing after the operating tool is no longer detected (in the example shown in FIG. 2, a predetermined time elapsing after the finger 230 leaves the display screen 210), or by a dedicated operation for ending the tagging process, such as pressing a button, being performed.
Next, the content editing process based on the tag information is performed on the tagged content (step S509). The content editing process in step S509 may be, for example, any of the various editing processes described above in <3. Specific examples of the content editing process using tag information>.
The processing procedure of the tagging method according to the present embodiment has been described above with reference to FIG. 10. In the above description, the content after the tagging process is stored in the storage unit 130, but the present embodiment is not limited to such an example. For example, as described in <3. Specific examples of the content editing process using tag information>, the content after the tagging process may be stored on a server or the like on the cloud and shared among specific users.
<5. Hardware configuration>
Next, the hardware configuration of the information processing apparatus 10 according to the embodiment of the present disclosure will be described in detail with reference to FIG. 11. FIG. 11 is a block diagram for describing the hardware configuration of the information processing apparatus 10 according to the embodiment of the present disclosure.
The information processing apparatus 10 mainly includes a CPU 901, a ROM 903, and a RAM 905. The information processing apparatus 10 further includes a host bus 907, a bridge 909, an external bus 911, an interface 913, an input device 915, an output device 917, a storage device 919, a communication device 921, a drive 923, and a connection port 925.
The CPU 901 functions as an arithmetic processing device and a control device, and controls all or part of the operation within the information processing apparatus 10 in accordance with various programs recorded in the ROM 903, the RAM 905, the storage device 919, or a removable recording medium 929. The ROM 903 stores programs used by the CPU 901, calculation parameters, and the like. The RAM 905 temporarily stores programs used by the CPU 901, parameters that change as appropriate during the execution of those programs, and the like. These are connected to one another by a host bus 907 constituted by an internal bus such as a CPU bus. In the present embodiment, the CPU 901, the ROM 903, and the RAM 905 correspond, for example, to the control unit 140 shown in FIG. 1.
The host bus 907 is connected via the bridge 909 to an external bus 911 such as a PCI (Peripheral Component Interconnect/Interface) bus.
The input device 915 is operation means operated by the user, such as a mouse, keyboard, touch panel, buttons, switches, and levers. The input device 915 may also be, for example, remote control means (a so-called remote controller) using infrared rays or other radio waves, or an externally connected device 931, such as a mobile phone or PDA, supporting operation of the information processing apparatus 10. The input device 915 further includes, for example, an input control circuit that generates an input signal based on the information input by the user using the above operation means and outputs it to the CPU 901. By operating the input device 915, the user of the information processing apparatus 10 can input various data to the information processing apparatus 10 and instruct it to perform processing operations. In the present embodiment, the input device 915 corresponds, for example, to the input unit 110 shown in FIG. 1.
The output device 917 is constituted by a device capable of visually or audibly notifying the user of acquired information. Examples of such devices include display devices such as CRT display devices, liquid crystal display devices, plasma display devices, EL display devices, and lamps; audio output devices such as speakers and headphones; and printer devices. The output device 917 outputs, for example, results obtained by the various processes performed by the information processing apparatus 10. Specifically, the display device visually displays the results obtained by the various processes performed by the information processing apparatus 10 in various formats such as text, images, tables, and graphs. In the present embodiment, the display device corresponds, for example, to the display unit 120 shown in FIG. 1. The audio output device, on the other hand, converts an audio signal composed of reproduced audio data, acoustic data, and the like into an analog signal and outputs it audibly.
The storage device 919 is a device for data storage configured as an example of the storage unit of the information processing apparatus 10. In the present embodiment, the storage device 919 corresponds, for example, to the storage unit 130 shown in FIG. 1. The storage device 919 is constituted by, for example, a magnetic storage device such as an HDD (Hard Disk Drive), a semiconductor storage device, an optical storage device, or a magneto-optical storage device. The storage device 919 stores various information processed in the tagging process according to the present embodiment, such as the programs executed by the CPU 901 and various data. For example, the storage device 919 stores data such as the various contents played back by the information processing apparatus 10, the tag information obtained in the course of the tagging process according to the present embodiment, and the content to which that tag information has been assigned (that is, the content after tagging).
Although not explicitly shown in FIG. 1, the information processing apparatus 10 according to the present embodiment may further include the following components.
The communication device 921 is, for example, a communication interface constituted by a communication device for connecting to a communication network 927. The communication device 921 is, for example, a communication card for a wired or wireless LAN (Local Area Network), Bluetooth (registered trademark), or WUSB (Wireless USB). The communication device 921 may also be a router for optical communication, a router for ADSL (Asymmetric Digital Subscriber Line), a modem for various kinds of communication, or the like. The communication device 921 can transmit and receive signals and the like to and from, for example, the Internet or other communication devices in accordance with a predetermined protocol such as TCP/IP. The network 927 connected to the communication device 921 is constituted by a network connected by wire or wirelessly, and may be, for example, the Internet, a home LAN, infrared communication, radio wave communication, or satellite communication. In the present embodiment, data such as the various contents played back by the information processing apparatus 10, the tag information obtained in the course of the tagging process according to the present embodiment, and the content after tagging may be received by the communication device 921 via the network 927, or transmitted from the information processing apparatus 10 to another external device (for example, a server on the cloud).
The drive 923 is a reader/writer for recording media, and is built into or externally attached to the information processing apparatus 10. The drive 923 reads information recorded on a mounted removable recording medium 929 such as a magnetic disk, optical disk, magneto-optical disk, or semiconductor memory, and outputs it to the RAM 905. The drive 923 can also write information to the mounted removable recording medium 929. The removable recording medium 929 is, for example, a DVD medium, an HD-DVD medium, or a Blu-ray (registered trademark) medium. The removable recording medium 929 may also be a CompactFlash (registered trademark) (CF) card, a flash memory, an SD memory card (Secure Digital memory card), or the like. Further, the removable recording medium 929 may be, for example, an IC card (Integrated Circuit card) equipped with a contactless IC chip, an electronic device, or the like. In the present embodiment, data such as the various contents played back by the information processing apparatus 10, the tag information obtained in the course of the tagging process according to the present embodiment, and the content after tagging may be read from or written to the removable recording medium 929 by the drive 923.
The connection port 925 is a port for directly connecting a device to the information processing apparatus 10. Examples of the connection port 925 include a USB (Universal Serial Bus) port, an IEEE 1394 port, and a SCSI (Small Computer System Interface) port. Other examples of the connection port 925 include an RS-232C port, an optical audio terminal, and an HDMI (registered trademark) (High-Definition Multimedia Interface) port. By connecting the externally connected device 931 to the connection port 925, the information processing apparatus 10 can acquire various data directly from the externally connected device 931 and provide various data to it. In the present embodiment, data such as the various contents played back by the information processing apparatus 10, the tag information obtained in the course of the tagging process according to the present embodiment, and the content after tagging may be acquired from the externally connected device 931 via the connection port 925, or output to the externally connected device 931.
 An example of a hardware configuration capable of realizing the functions of the information processing apparatus 10 according to the embodiment of the present disclosure has been described above. Each of the components described above may be configured using general-purpose members, or may be configured with hardware specialized for the function of that component. The hardware configuration to be used can therefore be changed as appropriate according to the technical level at the time the present embodiment is carried out.
 A computer program for realizing each function of the information processing apparatus 10 according to the present embodiment as described above can be created and installed on a personal computer or the like. A computer-readable recording medium storing such a computer program can also be provided. The recording medium is, for example, a magnetic disk, an optical disc, a magneto-optical disc, or a flash memory. The computer program may also be distributed via a network, for example, without using a recording medium.
 <6. Summary>
 As described above, the present embodiment provides the following effects.
 In the present embodiment, the position information acquisition unit 141 acquires position information of the operating tool associated with the elapsed time during content playback, and the tagging unit 144 attaches that position information to the content as tag information, thereby tagging the content. Since tag information can thus be attached to the content through the user's input of the operating tool's position, for example by moving a finger on the display screen of a touch panel, a tagging process with a higher degree of freedom becomes possible.
 Also in the present embodiment, of the acquired position information of the operating tool, the position information in a first direction is used as tag information by the tagging unit 144, while the playback state of the content is controlled by the playback state control unit 142 according to the position information in a second direction different from the first direction. The user can therefore input tag information while controlling the playback state of the content, for example while seeking the playback position. Tag information may also be input for only a part of the content, and may be overwritten. Furthermore, the resolution of the position information in the second direction assigned to seeking the playback position may be changed. The user can thus input tag information more efficiently, for example by seeking to an arbitrary playback position and then entering tag information for only an arbitrary part, or by changing the resolution and entering tag information multiple times.
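The two-direction scheme described above can be sketched as follows. This is a minimal illustration, not the embodiment's actual implementation: the normalized 0..1 screen coordinates, the function names, and the dictionary of (elapsed time → score) tags are all assumptions made for the example.

```python
def seek_position(x_norm: float, duration_s: float) -> float:
    """Map a normalized horizontal (second-direction) position, 0..1
    across the screen, to an elapsed time within the content."""
    return max(0.0, min(1.0, x_norm)) * duration_s

def record_tag(tags: dict, x_norm: float, y_norm: float, duration_s: float) -> None:
    """Store the vertical (first-direction) position as the tag score
    for the elapsed time selected by the horizontal position. A later
    sample at the same position overwrites the earlier one, matching
    the overwriting behavior described in the text."""
    t = seek_position(x_norm, duration_s)
    tags[round(t, 1)] = y_norm  # 0.1 s buckets, purely illustrative

tags: dict = {}
record_tag(tags, 0.25, 0.8, 600.0)  # a quarter into a 10-minute clip
record_tag(tags, 0.25, 0.3, 600.0)  # same position: overwrites the score
```

Changing the "resolution" mentioned in the text would correspond here to mapping the 0..1 horizontal span onto a narrower time range than the full duration.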
 Also in the present embodiment, the position information of the operating tool in the second direction of the display screen 210 may correspond to other playback controls. For example, the playback state control unit 142 may control the playback speed of the content, or its fast-forward and rewind operations, based on the position information in the second direction. The user can therefore input tag information while changing the playback speed to a desired speed, or while moving to a desired playback position by fast-forwarding or rewinding, which realizes a tagging process that is more convenient for the user.
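A minimal sketch, under the same illustrative assumptions as before, of how the second-direction position could drive rate control: positions on one side of a reference point fast-forward, positions on the other side rewind, and the distance from the reference sets the speed. The normalized coordinate, the reference point at mid-screen, and the function name are assumptions, not the embodiment itself.

```python
def playback_rate(x_norm: float, reference: float = 0.5, max_rate: float = 4.0) -> float:
    """Return a signed playback rate from a normalized horizontal
    position: positive means fast-forward, negative means rewind,
    and 0.0 at the reference point."""
    offset = max(0.0, min(1.0, x_norm)) - reference
    return (offset / reference) * max_rate
```

For example, a position three quarters of the way across the screen yields a 2x fast-forward, while a quarter of the way across yields a 2x rewind.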
 Also in the present embodiment, the playback state control unit 142 can edit the content based on the tag information attached to it and control playback of the edited content. For example, the playback state control unit 142 extracts parts of the content based on the tag information and creates a digest version of the video of a predetermined length. In the process of extracting parts of the content, a threshold may be set for the score associated with the position information of the operating tool in the first direction, and the parts corresponding to the time ranges in which the score is equal to or greater than the threshold may be extracted. Accordingly, when the user's degree of preference is reflected in the position information in the first direction, for example, a digest version of the video can be created in which the parts of the content with the highest preference are extracted. The digest creation process may also be performed based on a plurality of pieces of tag information created by a plurality of other users, so that content can be edited and viewed based on a social degree of preference reflecting other users' tastes. In this way, the present embodiment performs content editing using tag information, providing the user with a variety of viewing methods and a convenient way of enjoying the content.
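The threshold-based extraction described above can be sketched as follows. The score track sampled along the content timeline, the threshold value, and the averaging of several users' tracks into a "social" score are illustrative assumptions, not the embodiment's actual data format.

```python
def extract_digest(scores, threshold):
    """Return (start, end) index ranges over a sampled score track
    where the score is at or above the threshold; these ranges are the
    parts kept for the digest."""
    ranges, start = [], None
    for i, s in enumerate(scores):
        if s >= threshold and start is None:
            start = i                      # a qualifying range begins
        elif s < threshold and start is not None:
            ranges.append((start, i))      # the range ends before i
            start = None
    if start is not None:
        ranges.append((start, len(scores)))
    return ranges

def social_scores(per_user_scores):
    """Average several users' score tracks sample-by-sample to obtain
    a 'social' preference track."""
    return [sum(col) / len(col) for col in zip(*per_user_scores)]
```

Lowering the threshold keeps more of the content, which is how the digest length could be tuned to a target playback time.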
 The preferred embodiments of the present disclosure have been described in detail above with reference to the accompanying drawings, but the technical scope of the present disclosure is not limited to these examples. It is obvious that a person having ordinary knowledge in the technical field of the present disclosure can conceive of various changes or modifications within the scope of the technical ideas described in the claims, and it is understood that these naturally belong to the technical scope of the present disclosure.
 For example, the embodiment described above uses two-dimensional position information on the display screen 210 of a touch panel, but the present embodiment is not limited to this example. In the present embodiment, the tag information only needs to be position information of the operating tool associated with the elapsed time during content playback, and its type is not limited. For example, the input unit 110 may include a sensor device, such as a stereo camera or an infrared camera, that detects the position of the operating tool in space, and position information in a three-dimensional space may be input as the position information of the operating tool. When the position information is three-dimensional, the operating tool may be, for example, the user's hand: the playback state of the content may be controlled according to the horizontal position of the hand relative to the sensor device of the input unit 110, and tag information may be input according to the vertical position of the hand. The user can thus perform the tagging process by moving a hand up, down, left, and right in front of the sensor device of the input unit 110 while watching the content displayed on the display screen of the display unit 120.
 Also, for example, the position information acquisition unit 141 may acquire tag information by detecting the trajectory of the operating tool's position as a character, number, symbol, or the like. For example, the position information acquisition unit 141 may detect the trajectory as a symbol with a binary meaning, such as "○" or "×". While watching the content displayed on the display screen of the display unit 120, the user may move the operating tool so that its trajectory draws a "○" in a favorite scene and a "×" in a scene the user found less appealing, thereby inputting tag information carrying a binary indication of preference. The tagging unit 144 may attach such tag information, on which the presence or absence of a binary preference is superimposed, to the content.
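A hypothetical sketch of attaching a binary preference tag from a recognized trajectory symbol, as in the "○"/"×" example above. How the trajectory itself is recognized as a symbol is outside this sketch; the function names and the (elapsed-time, preference) tuple format are assumptions for illustration.

```python
def tag_from_gesture(symbol: str) -> bool:
    """Map a recognized trajectory symbol to a binary preference:
    '○' -> True (liked), '×' -> False (not liked)."""
    mapping = {"○": True, "×": False}
    if symbol not in mapping:
        raise ValueError(f"unrecognized gesture: {symbol!r}")
    return mapping[symbol]

def append_binary_tag(binary_tags: list, elapsed_s: float, symbol: str) -> None:
    """Record the binary preference against the elapsed playback time
    at which the gesture was drawn."""
    binary_tags.append((elapsed_s, tag_from_gesture(symbol)))

binary_tags: list = []
append_binary_tag(binary_tags, 12.4, "○")  # liked scene
append_binary_tag(binary_tags, 37.0, "×")  # less appealing scene
```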
 The embodiment described above attaches tag information to moving image data, but the present embodiment is not limited to this example. In the present embodiment, the content data to be tagged may be music data, or slideshow data in which still images are displayed successively for predetermined lengths of time. For example, when the content data is music data, the information processing apparatus 10 may include an audio output device such as a speaker or headphones, and the user may input tag information while listening to the audio of the music data output from that device. When the content data is slideshow data, a tagging process similar to that for moving image data may be performed.
 The following configurations also belong to the technical scope of the present disclosure.
(1) An information processing apparatus including: a position information acquisition unit that acquires position information of an operating tool associated with an elapsed time during content playback; and a tagging unit that tags the content by attaching the position information to the content as tag information.
(2) The information processing apparatus according to (1), further including a playback state control unit that controls a playback state of the content, wherein the tagging unit uses position information of the operating tool in a first direction as the tag information, and the playback state control unit controls the playback state of the content according to position information of the operating tool in a second direction different from the first direction.
(3) The information processing apparatus according to (2), further including a display control unit that displays the content on a display screen, wherein the position information acquisition unit acquires position information of the operating tool on the display screen, the first direction is a vertical direction with respect to the display of the display screen, and the second direction is a horizontal direction with respect to the display of the display screen.
(4) The information processing apparatus according to (2) or (3), wherein the playback state control unit associates the position information of the operating tool in the second direction with the elapsed time during content playback, and plays the content at a playback position corresponding to the position information of the operating tool in the second direction.
(5) The information processing apparatus according to (3), wherein the playback state control unit associates the span from one end to the other end of the display screen in the second direction with an arbitrary time range within the content and plays the content at a playback position corresponding to the position information of the operating tool in the second direction on the display screen, and the position information acquisition unit acquires the position information of the operating tool on the display screen.
(6) The information processing apparatus according to (2) or (3), wherein the playback state control unit changes a playback speed of the content based on the position information of the operating tool in the second direction.
(7) The information processing apparatus according to (3), wherein the playback state control unit associates the span from one end to the other end of the display screen in the second direction with the playback speed of the content, and plays the content at a playback speed corresponding to the position information of the operating tool in the second direction on the display screen.
(8) The information processing apparatus according to (2) or (3), wherein the playback state control unit fast-forwards or rewinds the content based on the position information of the operating tool in the second direction.
(9) The information processing apparatus according to (8), wherein, with respect to a reference point in the second direction, the playback state control unit fast-forwards the content when the position information of the operating tool is acquired on one side of the reference point, and rewinds the content when the position information of the operating tool is acquired on the other side of the reference point.
(10) The information processing apparatus according to any one of (1) to (9), wherein the position information acquisition unit acquires the position information of the operating tool for a part corresponding to an arbitrary time range within the content, and the tagging unit attaches the tag information to the content for the part corresponding to the time range in which the position information was acquired.
(11) The information processing apparatus according to any one of (1) to (10), wherein the tagging unit tags the content based on the latest position information of the operating tool.
(12) The information processing apparatus according to any one of (1) to (11), further including a display control unit that displays the content on a display screen, wherein the display control unit displays the trajectory of the position information of the operating tool on the display screen.
(13) The information processing apparatus according to any one of (1) to (12), further including a playback state control unit that controls a playback state of the content, wherein the playback state control unit extracts a part of the content based on the tag information.
(14) The information processing apparatus according to (13), wherein the tagging unit uses position information of the operating tool in a first direction as the tag information, and the playback state control unit associates, for the tag information, the position information of the operating tool in the first direction with a score, and extracts a part of the content in which the score is equal to or greater than a predetermined threshold.
(15) The information processing apparatus according to (14), wherein the playback state control unit determines the threshold based on a playback time of the extracted content.
(16) The information processing apparatus according to (14) or (15), wherein the playback state control unit extracts the part of the content further based on at least one of image data and audio data included in data of the content.
(17) The information processing apparatus according to any one of (13) to (16), wherein the playback state control unit extracts the part of the content based on a plurality of mutually different pieces of tag information.
(18) The information processing apparatus according to any one of (1) to (17), wherein the position information acquisition unit acquires three-dimensional position information of the operating tool in space.
(19) A tagging method including: acquiring position information of an operating tool associated with an elapsed time during content playback; and tagging the content by attaching the position information to the content as tag information.
(20) A program for causing a computer to realize: a function of acquiring position information of an operating tool associated with an elapsed time during content playback; and a function of tagging the content by attaching the position information to the content as tag information.
 DESCRIPTION OF SYMBOLS
 10  Information processing apparatus
 110  Input unit
 120  Output unit
 130  Storage unit
 140  Control unit
 141  Position information acquisition unit
 142  Playback state control unit
 143  Display control unit
 144  Tagging unit
 210  Display screen
 310, 320, 410, 420  Tag information

Claims (20)

  1.  An information processing apparatus comprising:
     a position information acquisition unit that acquires position information of an operating tool associated with an elapsed time during content playback; and
     a tagging unit that tags the content by attaching the position information to the content as tag information.
  2.  The information processing apparatus according to claim 1, further comprising:
     a playback state control unit that controls a playback state of the content,
     wherein the tagging unit uses position information of the operating tool in a first direction as the tag information, and
     the playback state control unit controls the playback state of the content according to position information of the operating tool in a second direction different from the first direction.
  3.  The information processing apparatus according to claim 2, further comprising:
     a display control unit that displays the content on a display screen,
     wherein the position information acquisition unit acquires position information of the operating tool on the display screen,
     the first direction is a vertical direction with respect to the display of the display screen, and
     the second direction is a horizontal direction with respect to the display of the display screen.
  4.  The information processing apparatus according to claim 2, wherein the playback state control unit associates the position information of the operating tool in the second direction with the elapsed time during content playback, and plays the content at a playback position corresponding to the position information of the operating tool in the second direction.
  5.  The information processing apparatus according to claim 3, wherein the playback state control unit associates the span from one end to the other end of the display screen in the second direction with an arbitrary time range within the content and plays the content at a playback position corresponding to the position information of the operating tool in the second direction on the display screen, and
     the position information acquisition unit acquires the position information of the operating tool on the display screen.
  6.  The information processing apparatus according to claim 2, wherein the playback state control unit changes a playback speed of the content based on the position information of the operating tool in the second direction.
  7.  The information processing apparatus according to claim 3, wherein the playback state control unit associates the span from one end to the other end of the display screen in the second direction with the playback speed of the content, and plays the content at a playback speed corresponding to the position information of the operating tool in the second direction on the display screen.
  8.  The information processing apparatus according to claim 2, wherein the playback state control unit fast-forwards or rewinds the content based on the position information of the operating tool in the second direction.
  9.  The information processing apparatus according to claim 8, wherein, with respect to a reference point in the second direction, the playback state control unit fast-forwards the content when the position information of the operating tool is acquired on one side of the reference point, and rewinds the content when the position information of the operating tool is acquired on the other side of the reference point.
  10.  The information processing apparatus according to claim 1, wherein the position information acquisition unit acquires the position information of the operating tool for a part corresponding to an arbitrary time range within the content, and
     the tagging unit attaches the tag information to the content for the part corresponding to the time range in which the position information was acquired.
  11.  The information processing apparatus according to claim 1, wherein the tagging unit tags the content based on the latest position information of the operating tool.
  12.  The information processing apparatus according to claim 1, further comprising:
     a display control unit that displays the content on a display screen,
     wherein the display control unit displays the trajectory of the position information of the operating tool on the display screen.
  13.  The information processing apparatus according to claim 1, further comprising:
     a playback state control unit that controls a playback state of the content,
     wherein the playback state control unit extracts a part of the content based on the tag information.
  14.  The information processing apparatus according to claim 13, wherein the tagging unit uses position information of the operating tool in a first direction as the tag information, and
     the playback state control unit associates, for the tag information, the position information of the operating tool in the first direction with a score, and extracts a part of the content in which the score is equal to or greater than a predetermined threshold.
  15.  The information processing apparatus according to claim 14, wherein the playback state control unit determines the threshold based on a playback time of the extracted content.
  16.  The information processing apparatus according to claim 14, wherein the playback state control unit extracts the part of the content further based on at least one of image data and audio data included in data of the content.
  17.  The information processing apparatus according to claim 13, wherein the playback state control unit extracts the part of the content based on a plurality of mutually different pieces of tag information.
  18.  The information processing apparatus according to claim 1, wherein the position information acquisition unit acquires three-dimensional position information of the operating tool in space.
  19.  A tagging method comprising:
     acquiring position information of an operating tool associated with an elapsed time during content playback; and
     tagging the content by attaching the position information to the content as tag information.
  20.  A program for causing a computer to realize:
     a function of acquiring position information of an operating tool associated with an elapsed time during content playback; and
     a function of tagging the content by attaching the position information to the content as tag information.
PCT/JP2014/050829 2013-04-04 2014-01-17 Information processing apparatus, tagging method and program WO2014162757A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2013-078461 2013-04-04
JP2013078461 2013-04-04

Publications (1)

Publication Number Publication Date
WO2014162757A1 true WO2014162757A1 (en) 2014-10-09

Family

ID=51658064

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2014/050829 WO2014162757A1 (en) 2013-04-04 2014-01-17 Information processing apparatus, tagging method and program

Country Status (1)

Country Link
WO (1) WO2014162757A1 (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2006279320A (en) * 2005-03-28 2006-10-12 Canon Inc Program storage reproducing apparatus, program storage reproducing method, and recording medium and program thereof
JP2007142571A (en) * 2005-11-15 2007-06-07 Toshiba Corp Content reproducing apparatus and reproduction speed control method thereof
JP2010288015A (en) * 2009-06-10 2010-12-24 Sony Corp Information processing device, information processing method, and information processing program
JP2012088688A (en) * 2010-09-22 2012-05-10 Nikon Corp Image display device
JP2012155695A (en) * 2011-01-07 2012-08-16 Kddi Corp Program for imparting keyword tag to scene of interest in motion picture contents, terminal, server, and method


Similar Documents

Publication Publication Date Title
US20200286185A1 (en) Parallel echo version of media content for comment creation and delivery
US7739584B2 (en) Electronic messaging synchronized to media presentation
JP6044079B2 (en) Information processing apparatus, information processing method, and program
JP5857450B2 (en) Information processing apparatus, information processing method, and program
US10622021B2 (en) Method and system for video editing
CN107251550B (en) Information processing program and information processing method
US8726153B2 (en) Multi-user networked digital photo display with automatic intelligent organization by time and subject matter
US9083933B2 (en) Information processing apparatus, moving picture abstract method, and computer readable medium
US10325628B2 (en) Audio-visual project generator
US20180132006A1 (en) Highlight-based movie navigation, editing and sharing
US11343595B2 (en) User interface elements for content selection in media narrative presentation
TW201132122A (en) System and method in a television for providing user-selection of objects in a television program
US9558784B1 (en) Intelligent video navigation techniques
WO2014069114A1 (en) Information processing device, reproduction state control method, and program
US9564177B1 (en) Intelligent video navigation techniques
JP5870742B2 (en) Information processing apparatus, system, and information processing method
KR20160098949A (en) Apparatus and method for generating a video, and computer program for executing the method
US20200104030A1 (en) User interface elements for content selection in 360 video narrative presentations
JP2008217059A (en) Reproduction device and program for reproduction device
JP2013171599A (en) Display control device and display control method
EP2942949A1 (en) System for providing complex-dimensional content service using complex 2d-3d content file, method for providing said service, and complex-dimensional content file therefor
JP6344379B2 (en) Information processing apparatus and information processing method
KR20150048961A (en) System for servicing hot scene, method of servicing hot scene and apparatus for the same
WO2014162757A1 (en) Information processing apparatus, tagging method and program
KR102083997B1 (en) Method for providing motion image based on objects and server using the same

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 14778250

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 14778250

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: JP