WO2013161319A1 - Content reproduction method, content reproduction system, and content photographing apparatus - Google Patents
Content reproduction method, content reproduction system, and content photographing apparatus
- Publication number
- WO2013161319A1 (PCT application PCT/JP2013/002845)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- moving image
- content
- camera
- spatial information
- image content
- Prior art date
Classifications
- H04N21/2743—Video hosting of uploaded data from client
- H04N9/87—Regeneration of colour television signals
- G11B27/10—Indexing; Addressing; Timing or synchronising; Measuring tape travel
- H04N21/2665—Gathering content from different sources, e.g. Internet and satellite
- H04N21/4223—Cameras
- H04N21/84—Generation or processing of descriptive data, e.g. content descriptors
Description
- the present invention relates to a content reproduction method, a content reproduction system, and a content photographing apparatus.
- On YouTube (registered trademark, www.youtube.com), a user can freely upload moving image files that the user has shot, and can freely view moving image files shot by strangers.
- The present invention has been made in view of the above-described problem, and it is an object of the present invention to provide a content reproduction method, a content reproduction system, and a content photographing apparatus capable of selecting and reproducing related moving image content even when the content is moving image content individually shot by a stranger.
- A content reproduction method according to one aspect is a content reproduction method in a system that selects and reproduces moving image content from a plurality of moving image contents shot by a plurality of cameras. Each of the plurality of moving image contents is associated with metadata describing a time stamp indicating the shooting time and spatial information indicating features of the space related to the camera at the time of shooting. Based on the time stamp and the spatial information described in the metadata, the moving image content to be reproduced is selected from the plurality of moving image contents; based on the time stamp described in the metadata associated with the selected moving image content, the playback start location of the selected moving image content is determined; and the selected moving image content is reproduced from the determined playback start location.
- With the content playback method of the present invention, related moving image content can be selected and played back even when the content is moving image content individually shot by a stranger.
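- The flow just claimed (select by time stamp and spatial information, determine the playback start from the time stamp, then play) can be sketched as follows. This is an illustrative sketch only; the data layout, the azimuth-based similarity measure, and the threshold are assumptions, not the patent's actual implementation.

```python
from dataclasses import dataclass

@dataclass
class Metadata:
    timestamp: float   # shooting time of the first frame (seconds on a common clock)
    spatial: dict      # e.g. {"azimuth": 90.0}; could also hold position, altitude, ...

@dataclass
class Clip:
    name: str
    duration: float    # seconds
    meta: Metadata

def spatial_similarity(a: dict, b: dict) -> float:
    # Toy measure: similarity falls off with azimuth difference (illustrative only).
    return 1.0 / (1.0 + abs(a["azimuth"] - b["azimuth"]))

def select_clips(clips, ref: Metadata, t: float, threshold: float = 0.1):
    """Selection step: keep clips that were recording at time t and whose
    spatial information resembles the reference metadata."""
    return [
        c for c in clips
        if c.meta.timestamp <= t <= c.meta.timestamp + c.duration
        and spatial_similarity(c.meta.spatial, ref.spatial) >= threshold
    ]

def playback_start(clip: Clip, t: float) -> float:
    """Determination step: the playback start location inside the clip is the
    offset of time t from the clip's own time stamp."""
    return t - clip.meta.timestamp
```

A clip shot from a similar direction at an overlapping time is selected, and its playback start is the offset of the shared instant within that clip.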
- FIG. 1 is a diagram illustrating a configuration example of a content reproduction system according to the first embodiment.
- FIG. 2 is a diagram for describing an example of a moving image content selection method according to the first embodiment.
- FIG. 3 is a diagram for describing an example of a moving image content selection method according to the first embodiment.
- FIG. 4 is a diagram for describing an example of a moving image content selection method according to the first embodiment.
- FIG. 5A is a diagram for describing an example of a moving image content reproduction method according to Embodiment 1.
- FIG. 5B is a diagram for describing an example of a moving image content reproduction method according to Embodiment 1.
- FIG. 6A is a diagram for describing an example of a moving image content reproduction method according to Embodiment 1.
- FIG. 6B is a diagram for describing an example of a moving image content reproduction method according to Embodiment 1.
- FIG. 7 is a flowchart showing the content reproduction processing of the system in the first embodiment.
- FIG. 8 is a block diagram illustrating an exemplary configuration of the content photographing apparatus according to the first embodiment.
- FIG. 9 is a diagram illustrating an example of the configuration of the content photographing apparatus according to the second embodiment.
- FIG. 10 is a diagram illustrating an example of a usage mode of the content reproduction apparatus according to the second embodiment.
- FIG. 11 is a diagram for explaining an example of the operation of the content reproduction apparatus according to the second embodiment.
- FIG. 12 is a diagram for explaining an example of the operation of the content reproduction apparatus according to the second embodiment.
- FIG. 13 is a diagram for explaining an example of the operation of the content reproduction apparatus according to the second embodiment.
- FIG. 14 is a flowchart for explaining the operation of the server in the second embodiment.
- FIG. 15 is a diagram illustrating an example of a configuration of an angle switching unit according to the third embodiment.
- FIG. 16 is a diagram illustrating details of S205 in the third embodiment.
- FIG. 17 is a diagram for explaining a camera position information and direction evaluation method in the theater mode.
- FIG. 18 is a diagram for describing a camera position information and direction evaluation method in the stadium mode.
- FIG. 19 is a diagram for describing a camera position information and direction evaluation method in the party mode.
- FIG. 20 is a diagram for describing a camera position information and direction evaluation method according to the second embodiment.
- In Patent Document 1, a technique is disclosed in which a camera attaches the shooting time, that is, a time stamp, as metadata to the moving image data being shot.
- Moving image data created in this way is stored in the playback device as moving image content, one item for each of the plurality of cameras. This makes it possible to reproduce moving image content shot by a plurality of cameras as if a single moving image file had a plurality of camera angles.
- One moving image content among the plurality of moving image contents stored in the playback device is selected and played back, and during playback the user presses, for example, a button instructing an angle switch. The playback device then reads the time stamp attached to the scene being played in the current moving image content, searches for another moving image content having an approximately matching time stamp, cues that content to the scene carrying the matching time stamp, and newly plays it back from there.
- However, the technique disclosed in Patent Document 1 is based on the premise that the plurality of cameras are shooting the same event. Shooting with a plurality of cameras is therefore limited in practice to family members and friends, and the technique does not work well for moving image content shot by strangers: the retrieved moving image file often has no relevance at all and cannot serve as part of a single so-called multi-angle moving image.
- The present invention has been made in view of the above problems, and it is an object of the present invention to provide a content reproduction method, a content reproduction system, and a content photographing apparatus capable of selecting and playing back related moving image content even when the content is moving image content individually shot by a stranger.
- the moving image content to be reproduced is selected from the plurality of moving image contents based on the time stamp and the spatial information described in the metadata.
- In the content playback method, when at least two moving image contents are selected as the moving image content to be reproduced, the at least two moving image contents may be played back simultaneously from the playback start location.
- According to this, related moving image contents can be selected, and a plurality of different moving image contents can be played back simultaneously from the same corresponding time (playback start location).
- In the content playback method, when at least one first moving image content among the plurality of moving image contents has already been played back, at least one second moving image content may be selected as the moving image content to be reproduced, based on the time stamp and the spatial information described in the metadata, from among the plurality of moving image contents excluding the first moving image content. The first moving image content may then be switched at a location corresponding to the playback start location, and the selected second moving image content may be played back from the playback start location.
- the second moving image content may be moving image content obtained by photographing the same subject as the first moving image content at a different angle.
- the second moving image content related to the first moving image content to be reproduced can be selected.
- Since switching at a certain time in the first moving image content allows the second moving image content to be played back from the corresponding time (playback start location), moving image content in which the same event was shot from a different angle can be played back.
- In other words, second moving image content shot by another person can be reproduced, together with the first moving image content shot by the user, as if it were part of a single moving image content with dynamically switching camera angles.
- the spatial information may be described in the metadata at predetermined time intervals.
- the spatial information may include the altitude of the camera at the time of shooting, and the spatial information may include the orientation of the camera at the time of shooting.
- the space information may include at least one of temperature and humidity of a space including a camera at the time of photographing.
- In the content playback method, the moving image content may be selected by evaluating the spatial information described in the metadata using the spatial information evaluation method defined by an evaluation mode designated from among a plurality of evaluation modes.
- the spatial information can be evaluated in the evaluation mode corresponding to the content such as the genre of the moving image content to be reproduced, so that more appropriate moving image content can be selected.
- a spatial information evaluation method defined in an evaluation mode designated by the user from among the plurality of evaluation modes may be used.
- In the content playback method, when selecting the moving image content to be reproduced, the moving image content may be selected based on the subject captured in the moving image content, in addition to the time stamp and the spatial information described in the metadata.
- When selecting the moving image content to be reproduced, at least two moving image contents may first be selected based on the time stamp and the spatial information described in the metadata, and from among these, the moving image content in which the same subject is captured may then be selected.
- the subject may be a person.
- A content reproduction apparatus in the content reproduction system includes a selection unit that selects the moving image content to be reproduced, a determination unit that determines the playback start location of the selected moving image content based on the time stamp described in the metadata associated with the moving image content selected by the selection unit, and a playback unit that plays back the moving image content selected by the selection unit from the playback start location determined by the determination unit.
- The content photographing apparatus in the content reproduction system includes a camera that captures moving image content, and a storage unit that stores the moving image content captured by the camera in association with metadata describing a time stamp indicating the shooting time and spatial information indicating features of the space related to the camera at the time of shooting.
- the spatial information may include an orientation of the camera at the time of shooting.
- FIG. 1 is a diagram illustrating a configuration example of a content reproduction system according to the first embodiment.
- 2 to 4 are diagrams for explaining an example of a moving image content selection method according to the first embodiment.
- 5A to 6B are diagrams for explaining an example of a moving image content reproduction method according to the first embodiment.
- the content reproduction system shown in FIG. 1 includes a plurality of content photographing devices 10 and a content reproduction device 20, and selects and reproduces a plurality of moving image contents.
- the plurality of content photographing devices 10 and the content reproduction device 20 are connected via a network 30.
- the plurality of content photographing devices 10 each photograph moving image content.
- Each of the plurality of content photographing devices 10 includes an imaging unit 11 and a storage unit 12.
- the imaging unit 11 is a camera and captures moving image content.
- The storage unit 12 stores the moving image content captured by the imaging unit 11 in association with metadata describing a time stamp indicating the shooting time and spatial information indicating features of the space related to the imaging unit 11 at the time of shooting.
- the metadata may be described for each frame constituting the moving image content, or may be described for every fixed number of frames or every fixed time constituting the moving image content.
- the spatial information may be described in the metadata at predetermined (constant) time intervals.
- The spatial information includes, for example, at least the azimuth of the imaging unit 11 at the time of shooting.
- the spatial information is not limited to the azimuth, and can include any information that shows the characteristics of the space.
- For example, the spatial information may include at least one of position information indicating the position of the camera at the time of shooting (such as its latitude and longitude), the altitude of the camera at the time of shooting, and the temperature and humidity of the space including the camera at the time of shooting. The spatial information may also combine any of the azimuth, position information, latitude and longitude, altitude, temperature, and humidity as appropriate, and is not limited to these examples as long as it indicates features of the space related to the camera.
- each of the plurality of moving image contents is associated with metadata in which a time stamp indicating the shooting time and spatial information indicating the spatial characteristics of the camera at the time of shooting are described.
- the content reproduction device 20 selects and reproduces moving image content from a plurality of moving image contents photographed by the content photographing device 10.
- the content reproduction apparatus 20 includes a selection unit 21, a determination unit 22, and a reproduction unit 23.
- the selecting unit 21 selects a moving image content to be reproduced from a plurality of moving image contents based on the time stamp and the spatial information described in the metadata.
- The selection unit 21 may select, for example, at least two moving image contents to be reproduced from among the plurality of moving image contents based on the time stamp and the spatial information described in the metadata. In the example shown in FIG. 2, from among the moving image files V1 to V14, the moving image files V1 to V10, which have time stamps in the vicinity of time t1 and similar spatial information (those inside the dotted region in FIG. 2), are selected. In FIG. 2, the vertical axis represents the similarity of the spatial information and the horizontal axis represents the time indicated by the time stamp; the higher the similarity of the spatial information, the wider the allowable range of time-stamp difference can be made.
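- The selection rule of FIG. 2, in which a higher spatial-information similarity permits a wider spread of time stamps around t1, might be expressed as follows. This is a sketch; the window sizes are invented for illustration and are not taken from the patent.

```python
def in_selection_region(similarity: float, dt: float,
                        base_window: float = 5.0, widen: float = 60.0) -> bool:
    """FIG. 2-style rule: a clip whose time stamp differs from the reference
    time t1 by dt seconds is accepted when |dt| lies within a window that
    grows with the spatial-information similarity (0.0 .. 1.0).
    base_window and widen are invented example values."""
    return abs(dt) <= base_window + widen * similarity
```

With these values, a clip shot from a nearly identical viewpoint (similarity 1.0) is accepted up to 65 seconds away, while a dissimilar clip must match within 5 seconds.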
- In addition, when at least one first moving image content among the plurality of moving image contents has already been played back, the selection unit 21 may select at least one second moving image content as the moving image content to be reproduced, based on the time stamp and the spatial information described in the metadata, from among the plurality of moving image contents excluding the first moving image content.
- Here, the second moving image content is moving image content obtained by shooting the same subject as the first moving image content from a different angle. In the example shown in FIG. 4, the moving image files V2 to V10, which have time stamps in the vicinity of time t1 and similar spatial information, are selected.
- Note that the number of moving image contents to be reproduced selected by the selection unit 21 may be one.
- The determination unit 22 determines the playback start location of the moving image content to be played back based on the time stamp described in the metadata associated with the moving image content selected by the selection unit 21. In the example shown in FIG. 3 or FIG. 4, time t2 or t3 is determined as the playback start location based on the time stamps described in the metadata associated with the moving image contents (moving image files) selected by the selection unit 21.
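- Because metadata of the kind described here can carry a time stamp per frame, cueing to the determined playback start location (t2 or t3) amounts to a search over the frame time stamps. A minimal sketch, assuming a sorted list of per-frame time stamps:

```python
import bisect

def cue_to(frame_timestamps, t):
    """Return the index of the first frame whose time stamp is at or after
    the determined playback start time t; clamps to the last frame when t is
    past the end. frame_timestamps must be sorted ascending."""
    i = bisect.bisect_left(frame_timestamps, t)
    return min(i, len(frame_timestamps) - 1)
```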
- the reproduction unit 23 reproduces the moving image content selected by the selection unit from the reproduction start location determined by the determination unit 22.
- The playback unit 23 may play back at least two moving image contents selected by the selection unit 21 simultaneously, on the same screen, from the playback start location. In the example shown in FIG. 3, from among the playback-target moving image files V1 to V14 selected by the selection unit 21, the playback unit 23 plays back the moving image files V1, V2, V7, and V9, whose metadata include time stamps at and after time t2 (the playback start location), simultaneously from time t2 on the same screen, as shown for example in FIG. 5A.
- FIG. 5A shows an example in which the moving image files V1, V2, V7, and V9 capturing the singer 40 are simultaneously displayed as four screens 51a to 51d on the same screen displayed on the display 50.
- Alternatively, two moving image contents may be played back by the playback unit 23; for example, two moving image files capturing the singer 40 may be displayed as a two-way split screen (screens 51a and 51b) on the same screen of the display 50.
- The playback unit 23 may also switch away from the first moving image content at the location corresponding to the playback start location and play back the second moving image content selected by the selection unit 21 from the playback start location. This will be described with reference to the example shown in FIG. 4.
- the moving image file V1 is reproduced as the first moving image content (screen 51) as shown in FIG. 6A.
- Then, at time t3, which is the playback start location, the playback unit 23 switches away from the moving image file V1 and plays back a moving image file selected from among the moving image files V2 to V14 (that is, excluding V1), as shown in FIG. 6B.
- FIG. 6A shows an example in which the moving image file V1 in which the singer 40 is photographed is reproduced on the screen 51 displayed on the display 50.
- FIG. 6B shows an example in which the moving image file V2, in which the singer 40 is captured from another angle, is played back on a screen 52 displayed on the display 50.
- FIG. 7 is a flowchart showing the content reproduction processing of the system in the first embodiment.
- a moving image content to be reproduced is selected based on the time stamp and the spatial information described in the metadata (S101).
- Since the details have been described above, the description here is omitted.
- the playback start location is determined based on the time stamp described in the metadata associated with the selected moving image content (S102).
- Since the details have been described above, the description here is omitted.
- the selected moving image content is reproduced from the determined reproduction start location (S103).
- Since the details have been described above, the description here is omitted.
- content reproduction processing is performed in a content reproduction system that selects and reproduces moving image content from a plurality of moving image content shot by a plurality of different cameras.
- Next, a specific example of the content photographing apparatus 10 according to the present embodiment will be described as Example 1.
- FIG. 8 is a block diagram illustrating an example of the configuration of the content photographing apparatus according to the first embodiment.
- the imaging unit 11 includes an optical unit 112, a CMOS sensor 113, and an AD conversion unit 114.
- the optical unit 112 includes a lens, a diaphragm, and the like, and acquires light.
- the CMOS sensor 113 receives the light (light beam) acquired by the optical unit 112 and converts (generates) it into an analog signal.
- the AD converter 114 converts the analog signal generated by the CMOS sensor 113 into a digital signal.
- the means for obtaining the digital signal may be other means, and for example, a CCD sensor may be used instead of the CMOS sensor 113.
- The storage unit 12 includes an encoding unit 120, a GPS signal receiving unit 121, a geomagnetic sensor 122, an atmospheric pressure sensor 123, a metadata creation unit 126, a MUX 127, and a storage device 128.
- the encoding unit 120 compresses and encodes the digital signal (moving image data) output from the imaging unit 11.
- Any international-standard compression encoding method such as MPEG-1, MPEG-2, MPEG-4, or H.264 may be applied, but in the present embodiment AVCHD, standardized as a moving image encoding method for video cameras, is applied. This is because AVCHD is defined so that a time stamp and spatial information can be recorded, in a format conforming to the EXIF standard, for each frame constituting the encoded moving image data.
- the GPS signal receiving unit 121 receives radio waves from GPS satellites, and acquires position information (latitude and longitude) on the earth of the content photographing apparatus 10 (hereinafter referred to as a camera). Since the GPS signal includes time information based on an atomic clock stored in a GPS satellite, the GPS signal receiving unit 121 may acquire the time stamp 124 from the GPS signal.
- the geomagnetic sensor 122 acquires the angle at which the camera is pointed (direction such as east, west, south, and north), that is, the direction of the camera.
- The atmospheric pressure sensor 123 obtains the height of the camera above sea level, that is, the altitude of the camera, using the fact that atmospheric pressure decreases with altitude.
- the spatial information 125 includes at least one of the above-described position information, direction, and altitude.
- The metadata creation unit 126 creates metadata using the time stamp 124 acquired by the GPS signal receiving unit 121 and the spatial information 125 acquired by at least one of the geomagnetic sensor 122 and the atmospheric pressure sensor 123. For example, if the storage unit 12 stores the moving image data in the AVCHD format, the metadata creation unit 126 creates metadata encoded in the EXIF format.
- The spatial information 125 may be recorded for each frame in the metadata associated with the captured moving image data, but is not limited to this. For example, the metadata may describe, for one piece of moving image data, a single item of spatial information such as representative position information. Alternatively, spatial information may be recorded (described) at fixed time intervals rather than for every frame, or may be recorded (described) as appropriate at scene changes.
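- The recording granularities just described differ only in the times at which spatial information is written into the metadata. A sketch of the per-frame and fixed-interval options; the frame rate and interval values are illustrative assumptions:

```python
def spatial_record_times(duration_s, fps=30.0, interval_s=None):
    """Times (seconds from the start of the clip) at which spatial
    information is written into the metadata: once per frame by default,
    or once per interval_s seconds when a fixed interval is given."""
    step = interval_s if interval_s is not None else 1.0 / fps
    n = int(duration_s / step)          # records at t = 0, step, 2*step, ...
    return [i * step for i in range(n)]
```

A 1-second clip at 30 fps yields 30 per-frame records, while a 10-second clip described every 2 seconds yields only 5.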
- As described above, the time of each captured scene, that is, the time stamp 124, and the spatial information 125 including the position information, azimuth, altitude, and the like of the camera during shooting are used. Note, however, that the time stamp 124 and the spatial information 125 need not be acquired by the methods described above.
- the camera may acquire the time stamp 124 from a clock built in the camera.
- The camera may also acquire position information from a source other than GPS signals. For example, the camera first communicates with a wireless LAN device and acquires the MAC address uniquely assigned to that device. The camera then searches a database on the Internet that stores the installation locations of wireless LAN devices, using the MAC address as a key, and acquires the position of the wireless LAN device, thereby obtaining the position information of the camera. This method is particularly effective in situations where GPS radio waves cannot be received.
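- The wireless-LAN positioning described above reduces to a lookup from an observed MAC address to a known access-point position. A toy sketch; the table contents are invented, and a real system would query an online database as the text describes:

```python
# Toy table mapping access-point MAC addresses to known installation
# positions (latitude, longitude). Entries here are invented examples.
AP_POSITIONS = {
    "00:11:22:33:44:55": (35.6895, 139.6917),
}

def position_from_ap(mac, db=AP_POSITIONS):
    """Estimate the camera position as the installed position of the wireless
    LAN device whose MAC address the camera observed; None when unknown."""
    return db.get(mac.lower())
```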
- Furthermore, the camera may be in a situation where it can receive neither GPS radio waves nor wireless LAN radio waves. In that case, the camera may simply store the most recently acquired position information.
- Since the atmospheric pressure sensor 123 derives the altitude directly from the atmospheric pressure, the altitude value may contain an error due to, for example, changes in the weather. Therefore, the camera may acquire the weather conditions from a database on the Internet and correct the altitude error of the value acquired by the atmospheric pressure sensor 123.
- the camera may use the GPS signal for altitude detection without using atmospheric pressure (without using the atmospheric pressure sensor 123).
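- The pressure-to-altitude conversion, and the weather correction suggested above, can be sketched with the international barometric formula. Passing an up-to-date sea-level pressure (e.g. from an online weather database) in place of the standard 1013.25 hPa is one plausible way to implement the correction; the patent does not specify the formula used.

```python
def altitude_from_pressure(p_hpa, sea_level_hpa=1013.25):
    """International barometric formula: altitude in metres for a measured
    pressure p_hpa. Supplying the current sea-level pressure instead of the
    standard 1013.25 hPa compensates for weather-induced error."""
    return 44330.0 * (1.0 - (p_hpa / sea_level_hpa) ** (1.0 / 5.255))
```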
- Alternatively, the photographer may manually input the camera's position information, azimuth, and altitude. In this case, the photographer may input them each time the camera is used for shooting, or subsequent values may be obtained relatively, using a sensor or the like, from values input once.
- The spatial information 125 need not be a specific numerical value as described above, and may instead be abstract. For example, abstract camera position information such as "hot place" or "cold place" may be acquired using a thermometer and recorded (described) in the metadata as the spatial information 125. Similarly, abstract camera position information such as "dry place" or "humid place" may be used as the spatial information 125. Likewise, abstract camera position information such as "lively place" or "quiet place", acquired using a microphone, may be used as the spatial information 125.
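- The abstract labels suggested in the last few paragraphs can be derived from raw sensor readings with simple thresholds. A sketch; the threshold values are invented for illustration:

```python
def abstract_spatial_labels(temp_c=None, humidity_pct=None, noise_db=None):
    """Map raw sensor readings to abstract spatial labels of the kind the
    text describes. The thresholds are illustrative assumptions."""
    labels = []
    if temp_c is not None:
        labels.append("hot place" if temp_c >= 28.0 else "cold place")
    if humidity_pct is not None:
        labels.append("humid place" if humidity_pct >= 60.0 else "dry place")
    if noise_db is not None:
        labels.append("lively place" if noise_db >= 70.0 else "quiet place")
    return labels
```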
- The MUX 127 is a multiplexer that multiplexes the moving image data encoded by the encoding unit 120 with the metadata created by the metadata creation unit 126 and stores the result in the storage device 128. For example, when the storage format of the storage device 128 is AVCHD, the MUX 127 stores the metadata for each frame in the EXIF format.
- The storage device 128 is, for example, an HDD or flash memory built into the camera.
- Note that the moving image data encoded by the encoding unit 120 and the metadata created by the metadata creation unit 126 need not be multiplexed and stored in the storage device 128; for example, they may be uploaded directly to a server on the Internet. Nor is multiplexing itself required: it suffices that the encoded moving image data and the metadata created by the metadata creation unit 126 are associated with each other.
- Next, a specific example of the content reproduction apparatus of the present embodiment will be described as Example 2.
- the content playback apparatus 20A includes a server 25 having a storage unit 24 and a playback terminal 26 having a display device 27.
- the server 25 and the playback terminal 26 are connected via a network 301 such as the Internet.
- Hereinafter, a case will be described in which the second moving image content is selected by the server 25 from among the plurality of moving image contents excluding the first moving image content, and is played back by the playback terminal 26.
- In FIG. 10, for example, three people who are not acquainted with each other shoot the stage of the same singer 40 using three content photographing devices 10 (cameras 10a to 10c), and the moving image files (moving image contents) captured by the cameras are uploaded to the server 25.
- a description will be given on the assumption that the plurality of moving image files uploaded to the server 25 can be viewed on the reproduction terminal 26, and that the first moving image content is being reproduced by the reproduction terminal 26.
- the server 25 includes a file reading unit 251, a data transfer unit 252, and an angle switching unit 253, and a storage unit 24 inside or outside.
- the storage unit 24 is connected to the server 25 and is, for example, a hard disk.
- the storage unit 24 stores (stores), for example, a plurality of moving image contents (moving image files 1 to 3) uploaded by an unspecified number of people.
- the data transfer unit 252 has the function of the reproduction unit 23 and transfers the moving image file (first moving image content or second moving image content) read by the file reading unit 251 to the reproduction terminal 26 via the network 301.
- the angle switching unit 253 has functions of the selection unit 21 and the determination unit 22.
- the angle switching unit 253 instructs the file reading unit 251 to read out the selected second moving image content as a new moving image file.
- the angle switching unit 253 determines the reproduction start location of the moving image content to be reproduced (second moving image content) based on the angle switching instruction given by the instruction unit 261 and the time stamp described in the metadata associated with the moving image file (first moving image content) currently being reproduced. That is, the angle switching unit 253 determines, as the reproduction start position of the selected second moving image content, the position (time, scene) corresponding to the position in the first moving image content at which the angle switching instruction was received, and instructs the file reading unit 251 to cue up to and read from that position.
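The cueing performed by the angle switching unit 253 can be sketched as follows. This is a minimal illustration in Python, assuming each content's metadata carries a recording-start time stamp; the function and field names are hypothetical, not from the specification:

```python
from datetime import datetime, timedelta

def playback_start_position(first_start, second_start, switch_offset):
    """Map the switch point in the first content onto the second content.

    first_start / second_start: recording-start time stamps taken from the
    metadata of the two contents.
    switch_offset: seconds into the first content at which the angle
    switching instruction was received.
    """
    # Absolute wall-clock time of the switch point.
    switch_time = first_start + timedelta(seconds=switch_offset)
    # Offset of that same moment inside the second content.
    offset = (switch_time - second_start).total_seconds()
    if offset < 0:
        raise ValueError("second content starts after the switch point")
    return offset

# Camera B started recording 90 s after camera A; a switch 300 s into
# A's content cues B's content to 210 s.
a = datetime(2013, 4, 26, 19, 0, 0)
b = datetime(2013, 4, 26, 19, 1, 30)
print(playback_start_position(a, b, 300))  # → 210.0
```

The same arithmetic applies whether the time stamps are absolute times or offsets against a shared clock, as long as both contents use the same time base.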
- the playback terminal 26 is operated by an unspecified person and views the moving image content by accessing the server 25.
- the playback terminal 26 includes an instruction unit 261, a file designation unit 262, and a playback unit 23 including a data receiving unit 231 and a decoding unit 232.
- the playback terminal 26 is connected to a display device 27 having a display unit 271 and an input unit 272.
- the file designation unit 262 receives an instruction input by a predetermined person using, for example, the input unit 272.
- the file designation unit 262 designates, based on the received instruction, one of the plurality of moving image contents (moving image file 1 to moving image file 3) stored in the storage unit 24 as the first moving image content, and transmits the designation to the file reading unit 251.
- the instruction unit 261 receives an angle switching instruction input by a predetermined person using the input unit 272, for example.
- the instruction unit 261 transmits the instruction to the angle switching unit 253.
- the data receiving unit 231 and the decoding unit 232 have the function of the reproducing unit 23.
- the data receiving unit 231 receives the moving image file (first moving image content or second moving image content) transferred from the server 25.
- the decoding unit 232 decodes the moving image file received by the data receiving unit 231 and transmits it to the display device 27 as moving image data.
- the data transfer unit 252 may sequentially transfer the moving image file from the reproduction start position determined by the angle switching unit 253, and the decoding unit 232 may sequentially decode the moving image file received by the data receiving unit 231, so that the display device 27 reproduces the moving image file from the reproduction start location determined by the angle switching unit 253.
- the moving image file may be decoded and transmitted to the display device 27 as moving image data using the time corresponding to the time designated by the instruction unit 261 as the reproduction start location.
- the content reproduction apparatus 20A including the server 25 and the reproduction terminal 26 configured as described above can play back the plurality of accumulated moving image contents as a single multi-angle video by switching from the first moving image content to the second moving image content.
- FIGS. 11 to 13 are diagrams for explaining an example of the operation of the content reproduction apparatus according to the second embodiment.
- FIG. 11 shows a case where a moving image content (moving image file 1) taken by the camera 10a is uploaded to the server 25 and the uploaded moving image file 1 is viewed on the playback terminal 26.
- FIG. 11 shows an example in which the moving image file 1 capturing the singer 40 is reproduced on the screen 51 displayed on the display 50a.
- An angle switching button 53 is prepared in the screen displayed on the display 50a.
- the angle switching unit 253 instructs the file reading unit 251 to cue up to the scene of the selected moving image file 2 corresponding to the point in the moving image file 1 at which the angle switching instruction was received, that is, the reproduction start point, and to read from there. Then, the file reading unit 251 reads the moving image file 2, and the data transfer unit 252 transfers it. In this way, the moving image file 2 is transferred from the server 25.
- FIG. 12 shows an example in which the moving image file 2 in which the singer 40 of another angle is photographed is reproduced on the screen 52 displayed on the display 50a.
- the photographer of the camera 10a can continuously view a video shot by another stranger, that is, a video shot from a different angle that the photographer did not shoot himself / herself, as if it were a continuation of his / her own footage.
- FIG. 13 shows an example in which the moving image file 3 in which the singer 40 of another angle is photographed is reproduced on the screen 54 displayed on the display 50a.
- conventionally, a photographer could only view the video taken by his / her own camera and could only enjoy images from a fixed angle. Since images taken from other angles can be connected with this camera, the special effect of enjoying images from various angles is achieved. Further, not only the photographer but also a mere viewer who was not involved in the shooting at all can enjoy his / her favorite videos from various angles.
- in the above description, the angle switching instruction is given by the viewer pressing the angle switching button 53 prepared on the display screen, but the present invention is not limited to this method.
- the content playback apparatus and the content playback system may automatically perform angle switching at random, at regular intervals, or according to changes in video.
- the server 25 may organize the metadata describing the shooting date / time, shooting point, shooting direction, and the like, and present to the user, as a list or as links, related moving image contents such as videos of the same event (matching shooting date / time and location), videos of the same time (videos of different locations taken at the same time), or videos of the same location (videos taken at the same location at different times), and play them back. The operation in this case will be described next.
- the angle switching unit 253 receives an instruction signal indicating an angle switching instruction.
- the angle switching unit 253 creates a switching moving image file candidate group (S201).
- the angle switching unit 253 may start narrowing down the candidates (moving image files that may be subject to angle switching) after receiving an instruction signal indicating an angle switching instruction, or may narrow down the candidates regardless of whether an instruction signal indicating an angle switching instruction (a viewer button click) has been received. Note that when no candidate is found, the angle switching instruction is invalid. For this reason, when no candidate is found, the angle switching button prepared on the display screen may be hidden, or may be displayed as inactive (expressed in gray, for example). On the other hand, when one or more candidates are found, the angle switching instruction is valid. In this case, the candidates may be displayed as a picture-in-picture or on a separate screen so that the user can easily understand what kind of video content each candidate reproduces.
- the file reading unit 251 reads the moving image file. Then, the angle switching unit 253 confirms the metadata assigned to each scene included in the moving image file (S203).
- the angle switching unit 253 confirms whether the time stamp described in the metadata assigned to the scene of the currently playing moving image file is similar to the time stamp associated with the moving image file read by the file reading unit 251 (S204). If it is confirmed that the time stamps are not similar (No in S204), the process returns to S202 and another candidate (moving image file) is read.
- the tolerance of the error may be determined based on whether or not the error exceeds a predetermined threshold value, or, without using a predetermined threshold value, it may be determined comprehensively together with an error in the spatial information described later. For example, even if the error of the time stamp is large, the moving image file may still be a target (candidate) for angle switching if the error of the spatial information is small.
- the threshold value may be changed instead of being fixed. For example, if the content played back by a moving image file is a lecture, theater, performance, or the like, a temporal discrepancy between the scenes before and after the angle switch is perceived as a fatal “sound skip”. On the other hand, when the content reproduced by the moving image file is the cheering of spectators watching sports, some temporal inconsistency does not produce a fatal result. That is, the threshold value of the time stamp error may be dynamically changed according to the genre of the content reproduced by the moving image file. The genre may be determined from the title or tag of the moving image file specified by the viewer, by analyzing the audio information, or by analyzing the image information.
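The genre-dependent threshold described above might be sketched as follows; the genre names and tolerance values here are illustrative assumptions, not values from the specification:

```python
# Hypothetical per-genre tolerances in seconds: a lecture or play cannot
# tolerate an audible time jump across the switch, while sports cheering
# can absorb a few seconds of mismatch.
GENRE_THRESHOLDS = {"lecture": 0.1, "theater": 0.1, "sports": 3.0}
DEFAULT_THRESHOLD = 0.5  # fallback when the genre is unknown

def timestamps_similar(t1_sec, t2_sec, genre=None):
    """Return True when two time stamps (in seconds) fall within the
    genre-dependent tolerance (corresponds to the check in S204)."""
    threshold = GENRE_THRESHOLDS.get(genre, DEFAULT_THRESHOLD)
    return abs(t1_sec - t2_sec) <= threshold
```

A 2-second mismatch would therefore pass for a sports video but be rejected for a theater video, matching the “sound skip” reasoning above.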
- the angle switching unit 253 confirms whether the spatial information described in the metadata attached to (associated with) the scene of the moving image file currently being reproduced is similar to the spatial information associated with the moving image file read by the file reading unit 251 (S205).
- the similarity of spatial information is evaluated with a certain degree of latitude. If the camera positions and orientations (angles) of the currently playing moving image file and the candidate moving image file do not match to some extent, they cannot be considered to capture the same subject; but if they match too closely, the angle of the candidate is almost the same as that of the currently playing file, and the effect of angle switching (cutting in a video shot from a different angle by another camera) is lost.
- the evaluation of the camera position information and direction (angle) will be described in detail in the third embodiment, so the description is omitted here.
- for example, the similarity of the camera position information may be evaluated as falling within a threshold of several hundred meters, with positions farther apart within that range evaluated as having a higher degree of similarity, since the angle switching effect increases with the distance between the cameras.
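One way to realize this evaluation is sketched below, under the assumptions that the position information is latitude / longitude and that the threshold is 300 m (the specification only says “several hundred meters”):

```python
import math

def distance_m(lat1, lon1, lat2, lon2):
    """Haversine great-circle distance between two points, in metres."""
    r = 6371000.0  # mean Earth radius
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlam / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def position_similarity(lat1, lon1, lat2, lon2, max_m=300.0):
    """0 outside the threshold; inside it, cameras farther apart score
    higher, since the angle switching effect grows with separation."""
    d = distance_m(lat1, lon1, lat2, lon2)
    return 0.0 if d > max_m else d / max_m
```

Two cameras at the identical position score 0 (no switching effect), while a pair near the edge of the threshold approaches 1.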
- the altitude of the camera is effective in distinguishing events that are held simultaneously on different floors of a building. For example, when several weddings are held on different floors of the same building, the altitude makes it possible to determine which wedding each moving image file captures. Angle switching can then be performed among moving image files having the same altitude.
- the server 25 reads the candidate (moving image file), seeks to the point of the time stamp (S206), and reproduces it (S207).
- specifically, the file reading unit 251 reads the candidate (moving image file) in accordance with an instruction from the angle switching unit 253 and seeks to the point of the time stamp (reproduction start point) (S206).
- then, the data transfer unit 252 transfers the data read by the file reading unit 251 to the playback terminal 26 via the network 301 (S207).
- the reliability of each parameter may be further evaluated. For example, even if the moving image file currently being played and a candidate moving image file show high similarity in camera position information, if the reliability of that position information is low, another candidate (moving image file) may be selected without evaluating the spatial information of the two files as similar.
- alternatively, the similarity may be calculated in a form that incorporates the reliability (so that the similarity becomes low when the reliability is low), and only the similarity may then be evaluated.
- in the above description, the moving image files included in the switching moving image file candidate group are evaluated one by one, and a candidate is adopted as soon as a moving image file associated with a metafile describing a similar time stamp and spatial information is found.
- however, the present invention is not limited to this case. For example, after evaluating all the moving image files (candidates) included in the switching moving image file candidate group, the moving image file (candidate) having the highest evaluation value may be adopted. Furthermore, a history of angle switching may be stored, and a moving image file that has already been the target of angle switching may be given a low evaluation value. In this case, the further effect that a larger variety of moving image files is selected can be obtained.
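The exhaustive variant above, combined with the angle-switching history penalty and the reliability weighting mentioned earlier, might look like the following sketch; the evaluation function, the penalty weight, and the candidate fields are hypothetical:

```python
def pick_best_candidate(current, candidates, history, evaluate,
                        history_penalty=0.5):
    """Score every candidate and return the best one, lowering the score
    of files already used for angle switching so selections vary more."""
    best, best_score = None, float("-inf")
    for cand in candidates:
        # evaluate() returns a similarity in [0, 1]; weight it by the
        # reliability of the candidate's spatial information so that an
        # unreliable measurement cannot win with a high raw similarity.
        score = evaluate(current, cand) * cand.get("reliability", 1.0)
        # Penalize candidates already chosen in earlier angle switches.
        score -= history_penalty * history.get(cand["id"], 0)
        if score > best_score:
            best, best_score = cand, score
    return best
```

With an empty history the most reliable high-similarity candidate wins; once it has been used a few times, the penalty lets other files through.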
- in Example 3, another method of evaluating the spatial information described in S205 of FIG. 14 will be described.
- FIG. 15 is a diagram illustrating an example of a configuration of an angle switching unit according to the third embodiment.
- elements similar to those described above are denoted by the same reference symbols, and detailed description thereof is omitted.
- the evaluation method for the camera position information and the camera orientation (angle) preferably changes depending on the subject photographed by the camera, the content (genre) reproduced by the moving image file, and the like.
- the angle switching unit 253a shown in FIG. 15 includes a mode selection unit 254 in addition to the configuration of the angle switching unit 253.
- the mode selection unit 254 receives an instruction to select an evaluation mode from a viewer or a user, and determines that the metadata (especially the spatial information) is to be evaluated in the selected evaluation mode.
- for example, buttons may be prepared on the display screen corresponding to the three evaluation modes described above, and the viewer can select (specify) whether to switch the angle in the “theater mode” or in the “stadium mode” by clicking one of the buttons. This is merely an example, and the system may automatically select the evaluation mode. In that case, the system may select the evaluation mode according to the title or tag of the moving image file, by analyzing audio information, or by analyzing image information.
- FIG. 16 is a diagram illustrating details of S205 in the third embodiment. Elements similar to those in FIG. 14 are denoted by the same reference numerals, and detailed description thereof is omitted.
- in S205, the angle switching unit 253a confirms whether the spatial information described in the metadata assigned to the scene of the currently playing moving image file is similar to the spatial information associated with the moving image file read by the file reading unit 251.
- the angle switching unit 253a confirms the evaluation mode selected by the mode selection unit 254 (S2051).
- the angle switching unit 253a confirms whether the evaluation value of the spatial information, evaluated according to the selected evaluation mode, is equal to or greater than a threshold value (S2055). Specifically, the angle switching unit 253a confirms whether the evaluation value, which evaluates whether the spatial information described in the metadata attached to the scene of the currently playing moving image file is similar to the spatial information associated with the moving image file read by the file reading unit 251, is equal to or greater than the threshold value.
- if the evaluation value is equal to or greater than the threshold value (Yes in S2055), the angle switching unit 253a proceeds to S207; if not (No in S2055), the process returns to S202.
- FIG. 17 is a diagram for explaining a camera position information and direction evaluation method in theater mode.
- in FIG. 17, first, it is assumed that a moving image file in which a subject such as a lecture, a play, or a music performance is photographed by the camera 10A from one direction is being reproduced (viewed).
- at this time, it is shown that there are a moving image file photographed by the camera 10B and a moving image file photographed by the camera 10C as moving image file candidates whose angles can be switched.
- the position (position information) of the camera 10B is closest to the position (position information) of the camera 10A.
- the angle (azimuth) of the camera 10B is opposite to the angle (azimuth) of the camera 10A.
- the evaluation value of the spatial information of the moving image file shot by the camera 10B, with respect to the moving image file shot by the camera 10A, is low. This is because, in the theater mode, the subject is assumed to be imaged from one direction, so the moving image file captured by the camera 10B is unlikely to record the desired subject, and the effect of angle switching cannot be expected.
- the position of the camera 10C is far from the position of the camera 10A, but its orientation (angle) is the same, so the evaluation of the spatial information of the moving image file photographed by the camera 10C is high. In this way, spatial information is evaluated in the theater mode.
- FIG. 18 is a diagram for explaining a camera position information and direction evaluation method in the stadium mode.
- in FIG. 18, first, it is assumed that a moving image file (video) captured by the camera 10D is being reproduced (viewed) in a situation where cameras surround the subject, such as an athletic meet, a baseball game, or a soccer match.
- a moving image file photographed by the camera 10E and a moving image file photographed by the camera 10F as moving image file candidates whose angles can be switched.
- the position (position information) of the camera 10E is closest to the position (position information) of the camera 10D.
- since the angle (azimuth) of the camera 10E is opposite to that of the camera 10D, the evaluation value of its spatial information is low. The reason is omitted because it is the same as that described for the theater mode.
- on the other hand, the evaluation of the moving image file photographed by the camera 10F is high. This is because the camera 10F is located in the azimuth direction of the camera 10D and somewhat away from its position, so it is highly possible that the camera 10F captures, from a different angle within the stadium, the same subject that the camera 10D captures. Thus, in the stadium mode, the spatial information is evaluated by an evaluation method different from that in the theater mode.
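The two evaluation modes contrasted above can be sketched as simple scoring functions. The angle thresholds (90° and 60°) and the 300 m normalization below are illustrative assumptions; the specification describes only the qualitative behaviour:

```python
def azimuth_diff(a1, a2):
    """Smallest difference between two compass bearings, in degrees."""
    d = abs(a1 - a2) % 360
    return min(d, 360 - d)

def theater_score(az_cur, az_cand, dist_m, max_m=300.0):
    # Theater mode: the subject is shot from one side, so a candidate
    # facing the opposite way (like camera 10B) scores zero, while one
    # with the same bearing but a different seat (camera 10C) scores high.
    d = azimuth_diff(az_cur, az_cand)
    if d > 90:
        return 0.0
    return (1.0 - d / 90.0) * (min(dist_m, max_m) / max_m)

def stadium_score(az_cur, bearing_to_cand, dist_m, max_m=300.0):
    # Stadium mode: cameras surround the subject, so a candidate lying in
    # the direction the current camera faces (camera 10F across the field
    # from camera 10D) likely shows the subject from the far side.
    if azimuth_diff(az_cur, bearing_to_cand) > 60:
        return 0.0
    return min(dist_m, max_m) / max_m
```

Note the inputs differ: theater mode compares the two cameras' shooting azimuths, while stadium mode compares the current camera's azimuth with the bearing toward the candidate camera.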
- FIG. 19 is a diagram for explaining a camera position information and direction evaluation method in the party mode.
- FIG. 19 first, it is assumed that a moving image file (video) shot by the camera 10G is being reproduced (viewed) in a situation where subjects exist in all directions, such as a banquet or a party. At this time, it is shown that there are moving image files shot by the camera 10H, the camera 10I, and the camera 10J as moving image file candidates that can be switched in angle.
- in the party mode, any moving image file (video) captured by the cameras 10H to 10J can be the target of angle switching.
- FIG. 20 is a diagram for describing a camera position information and direction evaluation method according to the second embodiment.
- FIG. 20 it is assumed that a moving image file captured by the camera 10K is being played (viewed) out of moving image files captured by a plurality of cameras 10K to 10M.
- the camera 10K is playing back a moving image file (video) capturing the player 45.
- “stadium mode” is selected as the evaluation mode.
- it is assumed that there are moving image files photographed by the camera 10L and the camera 10M as candidates for switching from the moving image file captured by the camera 10K.
- here, since the “stadium mode” is selected, the evaluation values of the spatial information of the moving image files photographed by the camera 10L and the camera 10M both become high.
- the moving image file is switched according to the subject imaged in the moving image file.
- specifically, the player 45 is extracted as the subject photographed by the camera 10K, and a moving image file in which the player 45 is photographed is selected. More specifically, comparing the camera 10L and the camera 10M, both of which have high spatial information evaluation values, the player 45 is photographed in the moving image file photographed by the camera 10L but not in the moving image file photographed by the camera 10M. Accordingly, the moving image file shot by the camera 10L is selected as the moving image file to switch to from the moving image file shot by the camera 10K.
- the subject photographed in the moving image file is used for switching the moving image file (selection of moving image content). This produces a special effect for users who, for example, want to keep watching their favorite player from different angles, and also for parents who have shot their child's game with multiple cameras.
- the operation in the stadium mode described in this embodiment is only an example. The same applies, for example, to the theater mode, where multiple singers perform at the same time and a favorite singer can be watched continuously from different angles.
- the identity of the subject may be determined using a face detection technique, or a player's back number may be detected and identified. Further, if the position information of the camera used for shooting and the azimuth (angle, direction) of the camera are accurately recorded as spatial information, the position coordinates of the subject (player 45) currently captured by the camera 10K shown in FIG. 20 can be specified. Therefore, it is also possible to select a camera (camera 10L in FIG. 20) that captures a subject existing at the same position coordinates. These techniques may be implemented in combination: for example, if a face image can be recognized, the subject is determined using face image detection; if the face is not shown, the back number is used; and if neither is available, the position coordinates of the subject are used.
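The combined identification strategy just described (face image first, then back number, then position coordinates) amounts to a detector cascade. In the sketch below the detector functions are hypothetical placeholders standing in for real recognizers:

```python
def identify_subject(frame, detectors):
    """Try each identification technique in order and return the first
    result, together with the name of the technique that produced it."""
    for name, detect in detectors:
        subject = detect(frame)
        if subject is not None:
            return subject, name
    return None, None

# Order matters: face detection first, then the back number, and the
# subject's position coordinates as a last resort. The lambdas stand in
# for real recognizers and read hypothetical per-frame metadata fields.
detectors = [
    ("face", lambda f: f.get("face_id")),
    ("back_number", lambda f: f.get("back_number")),
    ("position", lambda f: f.get("coords")),
]
```

Reporting which technique matched also makes it possible to weight less reliable matches (e.g. position-only) lower when ranking candidate files.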
- the user may register in advance, before playback, the subject that the user wants to track. During playback, when an angle switching instruction is given, the subject shown in the center of the screen may be tracked, or the user's usage history in another system may be used. For example, suppose the user has a history of purchasing an autobiography of the player 45 on an online shopping site. Since the user is then highly likely to be a fan of the player 45, the face image of the player 45 may be detected and the mode may automatically shift to tracking the player 45. Still image data may also be registered in the system in order to specify a subject: the system may detect the face image of the subject recorded in the registered still image data, search for moving image data in which a similar face image is recorded, and start reproduction.
- face image detection may be performed every time a moving image file is selected.
- alternatively, a database that associates detected persons with the moving image data and the scenes in which they are photographed may be created in advance, so that moving image data in which the same subject is imaged can be selected by searching the database.
- as described above, it is possible to realize a content reproduction method, a content reproduction system, and a content photographing apparatus capable of selecting and reproducing related moving image content even when the moving image contents were individually shot by strangers.
- for example, suppose the content photographing apparatus of the present invention, such as a video camera, is rented out to the audience at a concert venue.
- the audience members, as volunteers, each shoot their favorite video, and the captured video is uploaded to a server (content playback device) on the Internet without being stored in the video camera.
- the viewer can access the server (content reproduction device) for a fee and view the moving image content photographed by the audience from all angles.
- the viewer can select a video of his / her favorite angle, which was not possible with conventional broadcasting, and can obtain a special effect.
- each component may be configured by dedicated hardware, or may be realized by executing a software program suitable for each component.
- Each component may be realized by a program execution unit such as a CPU or a processor reading and executing a software program recorded on a recording medium such as a hard disk or a semiconductor memory.
- the present invention is not limited to this embodiment. Unless deviating from the gist of the present invention, forms obtained by applying various modifications conceivable by those skilled in the art to the present embodiment, and forms constructed by combining components of different embodiments, may also be included within the scope of one or more aspects of the present invention.
- the main purpose is to select moving image content (video) obtained by imaging the same subject at different angles, but the present invention is not limited to such a method.
- for example, the time stamps associated with the moving image contents to be compared may be evaluated using a time stamp representing “New Year's Eve” in each country's local time, instead of absolute simultaneity.
- alternatively, the time stamps associated with the moving image contents to be compared may be evaluated at the same absolute time. In that case, there is the extraordinary effect of obtaining interesting results, such as finding that in Paris it is still the evening before New Year's Eve at the moment of New Year's Eve in Tokyo.
- abstract position information such as “hot place” and “cold place” can be used instead of physical coordinates such as latitude / longitude. For example, when watching a video that conveys a heat wave, in order to satisfy the interest of “what is being done in a cold place at the same time?”, it is more effective to use the temperature information of the space including the camera.
- humidity information such as “a dry place” and “a damp place” may be used.
- audio information input to the microphone of the camera at the time of shooting may also be used: since spatial information such as “lively place” and “quiet place” can be determined from the microphone's audio information, it is possible, for example, to cut from a downtown video to a countryside video captured at the same time.
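A minimal sketch of deriving the “lively place” / “quiet place” spatial information from the microphone signal; the RMS threshold of 0.1 is an illustrative assumption:

```python
import math

def liveliness(samples):
    """Classify the recording space from microphone samples (floats in
    [-1, 1]) using the root-mean-square level of the signal."""
    rms = math.sqrt(sum(x * x for x in samples) / len(samples))
    return "lively" if rms > 0.1 else "quiet"
```

The resulting label could be written into the metadata as spatial information 125 alongside the physical sensor readings.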
- position information obtained from standard radio waves that can identify West Japan, East Japan, etc. may be used as spatial information.
- this makes it possible to select moving image content, such as switching to a moving image file that satisfies the interest of “How is East Japan at the same time?”.
- the moving image content is selected using the simultaneity based on the time stamp.
- the present invention is not limited to this mode.
- for example, the time stamp may be used to switch to a video capturing the night of New Year's Day in Tokyo.
- that is, the video may be selected using the time stamp even for a distant time, instead of relying on time stamp simultaneity.
- still image content such as still image data may be selected.
- for example, when playing a certain moving image content, if the angle switching button prepared on the display screen is pressed, the playback may be switched not to another moving image content but to still image content shot at the same place on the same day.
- the switching to still image content is not limited to being performed only once. A slideshow display that sequentially switches among multiple still image contents shot at the same place on the same day at the same time may be performed, or a slideshow display that sequentially switches among multiple still image contents captured at the same place on the same day in time order may be performed.
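The slideshow selection described above might be sketched as follows, assuming each still image carries hypothetical `place`, `date`, and `time` metadata fields:

```python
def slideshow(stills, place, date):
    """Pick the stills shot at the given place on the given day and
    order them by capture time for sequential slideshow display."""
    same_event = [s for s in stills
                  if s["place"] == place and s["date"] == date]
    return sorted(same_event, key=lambda s: s["time"])
```

Replacing the sort key with a fixed value (or shuffling) would give the first variant, which switches among same-day, same-place stills without regard to time order.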
- in the metadata, the shooting date and time, the shooting location, and the like are described so that related still images or moving images can be searched for and viewed on the net.
- for example, the metadata may be associated as additional information in an HTML page. It can thereby be used in the operations shown in S204 and S205 of FIG. 14.
- the metadata is not limited to the shooting date and time and location; it may also describe the names of objects (person names, building names, place names, etc.) appearing in the still images and moving images, the content conveyed by the video, and its keywords.
- the result of specifying a person's name from face image recognition can be managed by a server or the like. In this case, for example, when reading a certain electronic document, if a person's name of interest appears, a search by that person's name may be performed, and moving image content or still image content in which the face image of the person with that name is captured may be displayed.
- as described above, the present invention can be used for a content playback method, a content playback system, and a content shooting device, and in particular for devices that select and play back still images or moving images, such as video cameras, servers on the Internet, and devices for realizing simple television broadcasting.
Abstract
Description
The present inventor has found that the following problems arise with respect to the conventional examples described in the "Background Art" section.
FIG. 1 is a diagram illustrating a configuration example of the content reproduction system according to Embodiment 1. FIGS. 2 to 4 are diagrams for explaining an example of the moving image content selection method in Embodiment 1. FIGS. 5A to 6B are diagrams for explaining an example of the moving image content reproduction method in Embodiment 1.
FIG. 8 is a block diagram illustrating an example of the configuration of the content photographing apparatus according to Example 1.
FIG. 9 is a block diagram illustrating an example of the configuration of the content reproduction apparatus according to Example 2. FIG. 10 is a diagram illustrating an example of how the content reproduction apparatus according to Example 2 is used.
In Example 3, another method of evaluating the spatial information described in S205 of FIG. 14 will be described.
In Example 3 of Embodiment 1, a plurality of spatial information evaluation methods (evaluation modes) are prepared, and angle switching (selection of moving image content) is performed according to the selected evaluation method; however, the present invention is not limited to the described method. For example, in addition to the spatial information, the moving image content may be selected according to the subject photographed in the moving image content. This is described in the present embodiment.
In the embodiments described above, the main purpose is the selection of moving image contents (videos) in which the same subject is imaged at different angles, but the present invention is not limited to such a method.
10A, 10B, 10C, 10D, 10E, 10F, 10G, 10H, 10I, 10J, 10K, 10L, 10M, 10a, 10b, 10c Camera
11 Imaging unit
12, 24 Storage unit
20 Content reproduction apparatus
20A Content reproduction apparatus
21 Selection unit
22 Determination unit
23 Reproduction unit
25 Server
26 Playback terminal
27 Display device
30, 301 Network
40 Singer
45 Player
50, 50a Display
51, 51a, 51e, 52, 54 Screen
53 Angle switching button
112 Optical unit
113 CMOS sensor
114 AD conversion unit
120 Encoding unit
121 GPS signal receiving unit
122 Geomagnetic sensor
123 Barometric pressure sensor
124 Time stamp
125 Spatial information
126 Metadata creation unit
127 MUX
128 Storage device
231 Data receiving unit
232 Decoding unit
251 File reading unit
252 Data transfer unit
253, 253a Angle switching unit
254 Mode selection unit
261 Instruction unit
262 File designation unit
271 Display unit
272 Input unit
Claims (19)
- A content reproduction method in a system that selects and reproduces moving image content from a plurality of moving image contents captured by a plurality of cameras, wherein each of the plurality of moving image contents is associated with metadata describing a time stamp indicating its capture time and spatial information indicating characteristics of the space relating to the camera at the time of capture, the content reproduction method comprising: selecting moving image content to be reproduced from among the plurality of moving image contents, based on the time stamps and spatial information described in the metadata; determining a reproduction start point of the selected moving image content, based on the time stamp described in the metadata associated with the selected moving image content; and reproducing the selected moving image content from the determined reproduction start point.
- The content reproduction method according to claim 1, wherein, when selecting the moving image content to be reproduced, at least two moving image contents are selected as the moving image content to be reproduced from among the plurality of moving image contents, based on the time stamps and spatial information described in the metadata, and when reproducing from the reproduction start point, the at least two moving image contents are reproduced simultaneously from the reproduction start point.
- The content reproduction method according to claim 1, wherein, in a case where at least one first moving image content among the plurality of moving image contents is already being reproduced: when selecting the moving image content to be reproduced, at least one second moving image content is selected as the moving image content to be reproduced from among the plurality of moving image contents excluding the first moving image content, based on the time stamps and spatial information described in the metadata; and when reproducing from the reproduction start point, the first moving image content is switched at a point corresponding to the reproduction start point, and the second moving image content to be reproduced is reproduced from the reproduction start point.
- The content reproduction method according to claim 3, wherein the second moving image content is moving image content in which the same subject as in the first moving image content is captured from a different angle.
- The content reproduction method according to any one of claims 1 to 4, wherein the spatial information is described in the metadata at predetermined time intervals.
- The content reproduction method according to any one of claims 1 to 5, wherein the spatial information includes position information indicating the position of the camera at the time of capture.
- The content reproduction method according to claim 6, wherein the position information includes the latitude and longitude of the camera at the time of capture.
- The content reproduction method according to any one of claims 1 to 7, wherein the spatial information includes the altitude of the camera at the time of capture.
- The content reproduction method according to any one of claims 1 to 8, wherein the spatial information includes the azimuth of the camera at the time of capture.
- The content reproduction method according to any one of claims 1 to 9, wherein the spatial information includes at least one of the temperature and the humidity of the space containing the camera at the time of capture.
- The content reproduction method according to any one of claims 1 to 10, wherein, when selecting the moving image content to be reproduced, a first plurality of moving image contents is selected from among the plurality of moving image contents based on the time stamps described in the metadata, and the moving image content to be reproduced is selected from among the selected first plurality of moving image contents based on the spatial information described in the metadata, evaluated using an evaluation method defined by a predetermined evaluation mode.
- The content reproduction method according to claim 11, wherein, when selecting the moving image content to be reproduced, the moving image content to be reproduced is selected based on the spatial information described in the metadata, evaluated using the spatial-information evaluation method defined by an evaluation mode designated from among a plurality of evaluation modes.
- The content reproduction method according to claim 11, wherein, when selecting the moving image content to be reproduced, the spatial-information evaluation method defined by an evaluation mode designated by a user from among the plurality of evaluation modes is used.
- The content reproduction method according to any one of claims 1 to 13, wherein, when selecting the moving image content to be reproduced, the moving image content is selected based on the subject captured in the moving image content, in addition to the time stamps and spatial information described in the metadata.
- The content reproduction method according to claim 14, wherein, when selecting the moving image content to be reproduced, at least two moving image contents are selected based on the time stamps and spatial information described in the metadata, and, from among the subjects captured in the selected two or more moving image contents, the moving image content in which the same subject is captured is selected.
- The content reproduction method according to claim 15, wherein the subject is a person.
- A content reproduction system that selects and reproduces a plurality of moving image contents, comprising: a plurality of content imaging devices each having a camera and capturing moving image content with the camera; and a content reproduction device that selects and reproduces moving image content from the plurality of moving image contents captured by the content imaging devices, wherein each of the plurality of moving image contents is associated with metadata describing a time stamp indicating its capture time and spatial information indicating characteristics of the space relating to the camera at the time of capture, and the content reproduction device includes: a selection unit that selects moving image content to be reproduced from among the plurality of moving image contents, based on the time stamps and spatial information described in the metadata; a determination unit that determines a reproduction start point of the selected moving image content, based on the time stamp described in the metadata associated with the moving image content selected by the selection unit; and a reproduction unit that reproduces the moving image content selected by the selection unit from the reproduction start point determined by the determination unit.
- The content imaging device according to claim 17, comprising: a camera that captures moving image content; and a storage unit that stores the moving image content captured by the camera in association with metadata describing a time stamp indicating its capture time and spatial information indicating characteristics of the space relating to the camera at the time of capture.
- The content imaging device according to claim 18, wherein the spatial information includes the azimuth of the camera at the time of capture.
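Claims 5, 18, and 19 describe a capture device that records a time stamp together with spatial information (position, altitude, azimuth) at predetermined intervals. A minimal sketch of that recording loop follows, assuming a JSON container and hypothetical field names; the patent's actual MUX/metadata format is not specified here.

```python
import json

def record_spatial_metadata(read_sensors, samples=3):
    """Collect one metadata entry per fixed interval: a time stamp plus the
    camera's spatial information at that moment (claim-5-style recording)."""
    entries = []
    for _ in range(samples):
        fix = read_sensors()  # one sensor reading per interval
        entries.append({
            "timestamp": fix["t"],
            "spatial_info": {
                "lat": fix["lat"],
                "lon": fix["lon"],
                "altitude": fix["alt"],    # cf. claim 8
                "azimuth": fix["azimuth"], # cf. claims 9 and 19
            },
        })
    return json.dumps(entries)

# Stub sensor for demonstration; a real device would poll a GPS receiver,
# barometric sensor, and geomagnetic sensor once per interval.
def stub_sensors():
    stub_sensors.t += 1
    return {"t": stub_sensors.t, "lat": 35.0, "lon": 135.0,
            "alt": 10.0, "azimuth": 90.0}
stub_sensors.t = 0

meta = json.loads(record_spatial_metadata(stub_sensors))
print(len(meta))  # 3 entries, one per interval
```

Associating such a metadata track with the encoded video stream is what the MUX (reference numeral 127) would do on the capture side.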
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US14/128,674 US9172938B2 (en) | 2012-04-27 | 2013-04-26 | Content reproduction method, content reproduction system, and content imaging device |
JP2013548505A JP6160960B2 (ja) | 2012-04-27 | 2013-04-26 | コンテンツ再生方法、コンテンツ再生システムおよびコンテンツ撮影装置 |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201261639119P | 2012-04-27 | 2012-04-27 | |
US61/639,119 | 2012-04-27 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2013161319A1 (ja) | 2013-10-31 |
Family
ID=49482661
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2013/002845 WO2013161319A1 (ja) | 2012-04-27 | 2013-04-26 | コンテンツ再生方法、コンテンツ再生システムおよびコンテンツ撮影装置 |
Country Status (3)
Country | Link |
---|---|
US (1) | US9172938B2 (ja) |
JP (2) | JP6160960B2 (ja) |
WO (1) | WO2013161319A1 (ja) |
Families Citing this family (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9626747B2 (en) * | 2012-04-24 | 2017-04-18 | Apple Inc. | Image enhancement and repair using sample data from other images |
US20150142537A1 (en) * | 2013-11-18 | 2015-05-21 | Verizon Patent And Licensing Inc. | Receiving product/service information and content based on a captured image |
US9582738B2 (en) * | 2014-02-24 | 2017-02-28 | Invent.ly LLC | Automatically generating notes and classifying multimedia content specific to a video production |
TWI505113B (zh) * | 2014-03-18 | 2015-10-21 | Vivotek Inc | Surveillance system and image search method thereof |
US10409366B2 (en) | 2014-04-28 | 2019-09-10 | Adobe Inc. | Method and apparatus for controlling display of digital content using eye movement |
US9685194B2 (en) * | 2014-07-23 | 2017-06-20 | Gopro, Inc. | Voice-based video tagging |
US10242379B2 (en) * | 2015-01-30 | 2019-03-26 | Adobe Inc. | Tracking visual gaze information for controlling content display |
US10116976B2 (en) * | 2015-10-15 | 2018-10-30 | At&T Intellectual Property I, L.P. | System and method for distributing media content associated with an event |
US10129579B2 (en) | 2015-10-15 | 2018-11-13 | At&T Mobility Ii Llc | Dynamic video image synthesis using multiple cameras and remote control |
JP2017168940A (ja) * | 2016-03-14 | 2017-09-21 | Fujitsu Ltd | Reproduction control program, conversion program, reproduction control device, terminal device, video reproduction system, reproduction control method, and conversion method |
EP3598764B1 (en) * | 2018-07-17 | 2021-01-20 | IDEMIA Identity & Security Germany AG | Supplementing video material |
JP7199886B2 (ja) * | 2018-09-14 | 2023-01-06 | Canon Inc | Image processing device, image processing method, and program |
JP7358078B2 (ja) * | 2019-06-07 | 2023-10-10 | Canon Inc | Information processing device, control method for information processing device, and program |
KR20220143466A (ko) * | 2021-04-16 | 2022-10-25 | Samsung Electronics Co Ltd | Method for processing spatial content and electronic device performing same |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2001274936A (ja) * | 2000-01-21 | 2001-10-05 | Casio Comput Co Ltd | Image data transmission system and imaging data transmission method |
JP2009303137A (ja) * | 2008-06-17 | 2009-12-24 | Canon Inc | Video display device and video display method |
JP2012005028A (ja) * | 2010-06-21 | 2012-01-05 | Canon Inc | Imaging device, information distribution device, transmission method, information distribution method, and program |
Family Cites Families (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6359837B1 (en) * | 1999-06-30 | 2002-03-19 | Casio Computer Co., Ltd. | Camera, camera system, information recording system, timepiece, and link system for camera and timepiece |
US6780105B1 (en) * | 2000-07-31 | 2004-08-24 | Igt | Gaming device having a multiple screen bonus round |
JP2006260178A (ja) * | 2005-03-17 | 2006-09-28 | Matsushita Electric Ind Co Ltd | Content reproduction device and content reproduction method |
JP4670776B2 (ja) * | 2006-09-04 | 2011-04-13 | Nikon Corp | Video sharing system |
CN101622869B (zh) | 2007-12-18 | 2012-03-07 | Panasonic Corp | Image reproduction device, image reproduction method, and image reproduction program |
JP2010232853A (ja) * | 2009-03-26 | 2010-10-14 | Victor Co Of Japan Ltd | Content reproduction device and content reproduction method |
JP5473478B2 (ja) * | 2009-08-24 | 2014-04-16 | Canon Inc | Image display device, control method therefor, and program |
2013
- 2013-04-26: WO application PCT/JP2013/002845 filed, published as WO2013161319A1 (Application Filing)
- 2013-04-26: JP application JP2013548505 filed, granted as JP6160960B2 (Active)
- 2013-04-26: US application US14/128,674 filed, granted as US9172938B2 (Active)

2017
- 2017-06-02: JP application JP2017109677 filed, granted as JP6435585B2 (Active)
Cited By (20)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2016536945A (ja) * | 2013-11-20 | 2016-11-24 | NAVER Corporation | Video providing method and video providing system |
JP2017504234A (ja) * | 2013-11-20 | 2017-02-02 | Google Inc. | Multi-view audio and video interactive playback |
US11816310B1 (en) | 2013-11-20 | 2023-11-14 | Google Llc | Multi-view audio and video interactive playback |
US11095954B2 (en) | 2013-11-20 | 2021-08-17 | Naver Corporation | Video-providing method and video-providing system |
US10754511B2 (en) | 2013-11-20 | 2020-08-25 | Google Llc | Multi-view audio and video interactive playback |
JP2018521546A (ja) * | 2015-05-18 | 2018-08-02 | Zepp Labs, Inc. | Multi-angle video editing based on cloud video sharing |
JP2017184132A (ja) * | 2016-03-31 | 2017-10-05 | Saxa Inc | Image processing device and image processing method |
JP2017228906A (ja) * | 2016-06-21 | 2017-12-28 | Silex Technology, Inc. | Reproduction device, reproduction system, control method for reproduction device, and control method for reproduction system |
WO2018042977A1 (ja) * | 2016-08-29 | 2018-03-08 | Nexpoint Co., Ltd. | Surveillance camera system, and video viewing method and video combining method in a surveillance camera system |
JP2018037734A (ja) * | 2016-08-29 | 2018-03-08 | Nexpoint Co., Ltd. | Surveillance camera system, and video viewing method and video combining method in a surveillance camera system |
JPWO2019065305A1 (ja) * | 2017-09-29 | 2020-11-26 | Honda Motor Co., Ltd. | Information providing system, information providing method, and management device for information providing system |
WO2019065305A1 (ja) * | 2017-09-29 | 2019-04-04 | Honda Motor Co., Ltd. | Information providing system, information providing method, and management device for information providing system |
JP2019179965A (ja) * | 2018-03-30 | 2019-10-17 | NEC Corp | Video transmission processing device, method, program, and recording medium |
JP7167466B2 (ja) | 2018-03-30 | 2022-11-09 | NEC Corp | Video transmission processing device, method, program, and recording medium |
US11178370B2 (en) | 2018-07-03 | 2021-11-16 | Fujifilm Corporation | Image correction device, imaging device, image correction method, and image correction program |
JP7373294B2 (ja) | 2019-04-12 | 2023-11-02 | Sony Interactive Entertainment Inc. | Image processing device, image providing server, image display method, and image providing method |
JP2021150863A (ja) * | 2020-03-19 | 2021-09-27 | Fujifilm Corp | Display control device, operating method for display control device, and operating program for display control device |
JP7301772B2 (ja) | 2020-03-19 | 2023-07-03 | Fujifilm Corp | Display control device, operating method for display control device, and operating program for display control device |
JP7477352B2 (ja) | 2020-04-21 | 2024-05-01 | NTT Docomo, Inc. | Information processing device |
JP2021093228A (ja) * | 2021-03-22 | 2021-06-17 | Smart119 Co., Ltd. | Emergency dispatch support system |
Also Published As
Publication number | Publication date |
---|---|
US20140126881A1 (en) | 2014-05-08 |
JP6435585B2 (ja) | 2018-12-12 |
JPWO2013161319A1 (ja) | 2015-12-24 |
JP2017200200A (ja) | 2017-11-02 |
JP6160960B2 (ja) | 2017-07-12 |
US9172938B2 (en) | 2015-10-27 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP6435585B2 (ja) | Content reproduction method, content reproduction device, and content reproduction program | |
US7577636B2 (en) | Network-extensible reconfigurable media appliance | |
US7782363B2 (en) | Providing multiple video perspectives of activities through a data network to a remote multimedia server for selective display by remote viewing audiences | |
US20160337718A1 (en) | Automated video production from a plurality of electronic devices | |
US10887673B2 (en) | Method and system for associating recorded videos with highlight and event tags to facilitate replay services | |
CN102318341A (zh) | Combining time-stamped images for video playback of route performance | |
JP2020043584A (ja) | Processing of multiple media streams | |
JP2016538657A (ja) | Browsing videos by searching for and overlaying multiple user comments on content | |
EP3105933A1 (en) | Apparatus and method for processing media content | |
CN104010206A (zh) | Method and system for playing virtual-reality video based on geographic location | |
US9154545B2 (en) | Video information control apparatus and method | |
JP2005191892A (ja) | Information acquisition device and multimedia information creation system using the same | |
KR20140083569A (ko) | Server and method for selecting one or more photographs having the same location data based on the location data of content, and terminal | |
JP2006005788A (ja) | Image display method and image display system | |
KR20180053221A (ko) | Electronic device and control method therefor | |
JP2022006725A (ja) | Image management device, image management system, and image management method | |
JP2022006724A (ja) | Image management device, image management system, and image management method | |
JP2022006723A (ja) | Image management device, image management system, and image management method | |
JP2020188328A (ja) | Video system | |
TW201909117A (zh) | Processing method for multiple image sources |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
ENP | Entry into the national phase |
Ref document number: 2013548505 Country of ref document: JP Kind code of ref document: A |
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 13781068 Country of ref document: EP Kind code of ref document: A1 |
WWE | Wipo information: entry into national phase |
Ref document number: 14128674 Country of ref document: US |
NENP | Non-entry into the national phase |
Ref country code: DE |
122 | Ep: pct application non-entry in european phase |
Ref document number: 13781068 Country of ref document: EP Kind code of ref document: A1 |