WO2014132988A1 - Dispositif de traitement d'informations et procédé de traitement d'informations - Google Patents

Dispositif de traitement d'informations et procédé de traitement d'informations Download PDF

Info

Publication number
WO2014132988A1
Authority
WO
WIPO (PCT)
Prior art keywords
display range
moving image
information
camera work
search information
Prior art date
Application number
PCT/JP2014/054651
Other languages
English (en)
Japanese (ja)
Inventor
翼 梅津
建太郎 牛山
Original Assignee
ブラザー工業株式会社
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by ブラザー工業株式会社
Publication of WO2014132988A1 publication Critical patent/WO2014132988A1/fr

Links

Images

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/27Server based end-user applications
    • H04N21/278Content descriptor database or directory service for end-user access
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70Information retrieval; Database structures therefor; File system structures therefor of video data
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/472End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content

Definitions

  • This disclosure relates to the technical field of terminal devices that display moving images.
  • Patent Document 1 discloses a system in which a user distributes, by e-mail, editing data indicating a reproduction position of moving image data to other users in order to recommend a specific scene of the moving image data to them. This allows the user who created the editing data to quickly show the specific scene in the moving image data to other users.
  • A pseudo camera work operation may be performed by a user on a moving image such as a panoramic video.
  • The display range within the image frames constituting the moving image is designated by instructing, from the operation unit, the orientation of a virtual camera, its field of view, and the like.
  • Pseudo camera work performed by another user on the moving image being displayed on the terminal device can be recommended to the user of that terminal device.
  • Pseudo camera work is characterized by the time-series movement of the display range across the image frames constituting a moving image.
  • The present disclosure has been made in view of the above points, and provides an information processing apparatus and an information processing method capable of efficiently searching for camera work desired by a user.
  • The display range displayed by the display means can be changed, within the image frames constituting the moving image, according to the playback position of the moving image.
  • The information processing apparatus includes first determination means for searching the storage means and determining the area associated with second search information corresponding to the first search information received by the receiving means.
  • The second search information associated with an area indicates a subject appearing in a display range of the moving image that includes the area.
  • The receiving means receives a request including first search information indicating a subject; based on the second search information associated with each area, the first determination means determines the area included in a display range in which the subject indicated by the received first search information appears; and the second determination means determines display range information indicating the display range including the determined area as the display range information to be provided to the terminal device.
  • When the first search information is input while the moving image is being displayed on the terminal device, the receiving means receives a request that also includes reproduction position information indicating the playback position of the moving image.
  • The first determination means then determines, among the areas in the image frame at the reproduction position indicated by the received reproduction position information, the area associated with second search information corresponding to the received first search information, and the second determination means determines display range information indicating the display range including the determined area as the display range information to be provided to the terminal device.
  • A fourth aspect of the present invention is an information processing method performed by a computer, comprising: a receiving step of receiving, from a terminal device displaying a moving image whose display range within the constituting image frames can be changed according to the playback position, a request including first search information used to search for an area composed of one or more pixels in an image frame of the moving image; a first determination step of searching for and determining the area associated with second search information corresponding to the received first search information; and a second determination step of determining display range information indicating the display range including the area determined in the first determination step as the display range information to be provided to the terminal device.
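The receiving / first-determination / second-determination flow described above can be sketched as follows. This is a minimal illustration only, assuming a toy in-memory store; `scene_db`, `display_ranges`, and `find_display_range` are hypothetical names, and the region key layout `(playback position, x, y)` is an assumption, not the patented implementation.

```python
# Hypothetical sketch of the receive / first-determination / second-determination flow.
# A "region" is keyed by (playback_position_ms, x, y); the second search information
# is a set of subject keywords registered for that region.

scene_db = {
    (0, 320, 180): {"singer A"},          # region -> second search information
    (2000, 640, 360): {"singer B"},
}

# Display range information registered per region (hypothetical layout).
display_ranges = {
    (0, 320, 180): {"x": 256, "y": 128, "width": 640, "height": 360},
    (2000, 640, 360): {"x": 512, "y": 256, "width": 640, "height": 360},
}

def find_display_range(first_search_info):
    """Receiving step: take the first search information from the request.
    First determination step: find the region whose second search information matches.
    Second determination step: return the display range information that includes it."""
    for region, keywords in scene_db.items():
        if first_search_info in keywords:        # first determination
            return display_ranges.get(region)    # second determination
    return None

print(find_display_range("singer A"))
# -> {'x': 256, 'y': 128, 'width': 640, 'height': 360}
```

A request whose first search information matches no registered second search information simply yields no display range information.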
  • Among the drawings: a flowchart showing the main processing in the control unit 21 of the client 2 (FIG. 5A); a flowchart showing an example of the input process in step S4 shown in FIG. 5A (FIG. 5B); diagrams showing example screens for inputting scene information (FIGS. 5C and 5D); and a flowchart showing the processing in the control unit 11 of the distribution server 1.
  • A flowchart showing processing in the control unit 21 of the client 2, and diagrams showing example screens for inputting a search key.
  • A diagram showing an example of the main screen MV and the sub screens SV1 to SV5, and a flowchart showing the processing in the control unit 11 of the distribution server 1.
  • A diagram showing the display range when the partial area …
  • FIG. 1 is a diagram illustrating a schematic configuration example of a communication system S of the present embodiment.
  • The communication system S includes a distribution server 1 and a plurality of clients 2.
  • The distribution server 1 is an example of an information processing apparatus according to the present disclosure.
  • The client 2 is an example of a terminal device according to the present disclosure. The distribution server 1 and the clients 2 can communicate with each other via the network NW.
  • The network NW is configured by, for example, the Internet.
  • The distribution server 1 accepts, for example, uploads of content and of camera work data for the content from the client 2.
  • The distribution server 1 transmits content to the client 2 in response to a content request from the client 2.
  • The content includes moving image data.
  • The moving image data represents a moving image whose display range, displayed by the display means within the constituting image frames, can be changed according to the playback position of the moving image.
  • An example of such a moving image is a panoramic video.
  • A panoramic video is a video in which a subject is shot by, for example, a high-resolution camera equipped with a lens capable of capturing a wide range. Examples of such lenses include a wide-angle lens, a fisheye lens, and a 360-degree lens.
  • The playback position is the elapsed time from the start of playback of the moving image data.
  • The content may include audio data.
  • The content is transmitted, for example, by streaming distribution via the network NW, and the client 2 receives the streamed content.
  • The distribution server 1 transmits the camera work data of the content to the client 2 in response to a request for camera work data from the client 2, for example.
  • The camera work data is an example of display range information indicating, for each reproduction position, the display range displayed by the display means in the image frames constituting a moving image. This display range corresponds to the drawing area drawn on the screen of the display means within one image frame; in other words, it is a range cut out from the shooting range defined by the image frame.
  • Such a display range is designated by, for example, a pseudo camera work operation by the user (hereinafter referred to as "pseudo camera work").
  • Pseudo camera work refers to determining at least one of the viewpoint position, the line-of-sight direction, and the visual field area of a person viewing a moving image projected on a virtual screen.
  • Equivalently, pseudo camera work refers to determining the orientation of a virtual camera, the width of its field of view, and the like.
  • The virtual camera is a hypothetical camera that determines the drawing area within the image frames constituting a moving image.
  • The pseudo camera work can be reproduced from the camera work data (hereinafter referred to as "pseudo camera work data").
  • One piece of pseudo camera work data does not necessarily indicate the display range in all image frames over the reproduction time from the start to the end of playback of the moving image data; it may indicate the display range only for image frames within a partial time range of the reproduction time.
  • While receiving the content by streaming, the client 2 displays the moving image according to the display range indicated by the acquired pseudo camera work data.
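One piece of pseudo camera work data can be pictured as a mapping from playback positions to display ranges that covers only part of the reproduction time. The sketch below is a hypothetical illustration of such a lookup; `camera_work`, `display_range_at`, and the `(x, y, width, height)` tuple layout are invented for this example.

```python
# Hypothetical pseudo camera work data: a display range recorded per playback
# position (ms), covering only a partial time range of the reproduction time.
camera_work = {
    1000: (0, 0, 640, 360),    # playback position -> (x, y, width, height)
    1016: (8, 0, 640, 360),
    1033: (16, 0, 640, 360),
}

def display_range_at(work, position_ms):
    """Return the display range recorded at or immediately before position_ms,
    or None when the position falls outside the covered time range."""
    covered = [p for p in work if p <= position_ms]
    if not covered:
        return None              # this camera work does not cover the position
    return work[max(covered)]

print(display_range_at(camera_work, 1020))  # -> (8, 0, 640, 360)
print(display_range_at(camera_work, 0))     # -> None
```

A client playing back such data would fall back to manual operation (or another camera work) for positions the data does not cover.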
  • The distribution server 1 can be connected to the storage device 3.
  • The storage device 3 is configured by, for example, a hard disk drive (HDD).
  • The storage device 3 is provided in the distribution server 1, but may instead be provided in a server different from the distribution server 1.
  • The storage device 3 stores Web page data transmitted to the client 2 in response to a request from the client 2.
  • The storage device 3 further includes a moving image data storage area 31a, an audio data storage area 31b, a work file storage area 31c, and a scene information storage area 31d.
  • The moving image data storage area 31a stores a plurality of pieces of moving image data, which can be shared among the plurality of clients 2 that can access the distribution server 1.
  • The audio data storage area 31b stores a plurality of pieces of audio data, which can likewise be shared among the clients that can access the distribution server 1.
  • A work file is stored in the work file storage area 31c in association with each content.
  • The work file stores the content title, pseudo camera work data, and the like.
  • The pseudo camera work data is given scene information.
  • The scene information is information indicating the features of a scene of the moving image.
  • A scene of a moving image is a segment of the moving image delimited by some action or event.
  • Examples of such scenes include a scene in which a singer sings, a scene performed by an actor, and a scene showing the state of an event.
  • The scene information includes, for example, text information indicating a subject appearing in the scene, such as a singer or an actor, identification information for identifying the subject, and the like.
  • Each work file is given a work ID for identifying it, and may include a content ID for identifying the content.
  • The work files stored in the work file storage area 31c include, for example, work files uploaded from the clients 2.
  • One content may be associated with a plurality of work files; for example, pieces of pseudo camera work data indicating the display ranges designated by the pseudo camera work of a plurality of different users may be associated with the same content.
  • A scene information database is stored in association with each content.
  • The scene information database is a database in which scene information can be registered in association with each partial area and each reproduction position in the image frames constituting a moving image.
  • A partial area is an area composed of one or more pixels in an image frame, determined by the reproduction position of the moving image and the position on the image frame at that reproduction position.
  • The scene information registered in the scene information database is an example of second search information indicating a feature of the area.
  • The scene information database may also be configured so that information indicating features other than scenes of the moving image is registered in association with a partial area. Note that a partial area may be a single pixel in a single image frame, but in that case the load of managing scene information increases.
  • FIG. 2 is a conceptual diagram illustrating an example in which a three-dimensional moving image space is divided into a plurality of moving image blocks.
  • Each moving image block is assigned a unique block ID.
  • Each moving image block is associated with a coordinate position on the image frame F constituting the moving image and a reproduction position. This playback position is, for example, the playback position of the first image frame among a plurality of image frames included in the moving image block.
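The association of a moving image block with a coordinate position and a reproduction position could be realized, for example, by a simple grid scheme like the following. All dimensions here (a 1920x1080 frame split into 480x270 blocks lasting 2000 ms each) and the `block_id` function are hypothetical assumptions for illustration.

```python
def block_id(position_ms, x, y,
             frame_w=1920, frame_h=1080,
             block_w=480, block_h=270, block_ms=2000):
    """Map a playback position and a coordinate on the image frame to the ID
    of the moving image block containing them (hypothetical scheme: blocks
    are numbered row by row within each temporal slice of the video space)."""
    cols = frame_w // block_w          # blocks per row: 4
    rows = frame_h // block_h          # block rows per frame: 4
    col = x // block_w                 # horizontal block index
    row = y // block_h                 # vertical block index
    t = position_ms // block_ms        # temporal block index
    return t * (cols * rows) + row * cols + col

print(block_id(0, 0, 0))         # -> 0
print(block_id(2500, 960, 540))  # -> 26
```

With such a scheme, the block ID by itself recovers both the spatial position and the temporal slice of the three-dimensional moving image space.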
  • the image frame in this case is an image frame divided as shown in FIG.
  • In the scene information database, for example, scene information given to the pseudo camera work data uploaded from the client 2 is registered in association with the block ID.
  • The scene information in this case is scene information posted by the user of the client 2.
  • The same or similar scene information may be posted by a plurality of users for the same moving image block; therefore, as shown in FIG. 2, the number of postings of each piece of scene information is registered in the scene information database in association with the block ID.
  • Scene information and posting counts are not necessarily registered for every moving image block.
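Registration of posted scene information together with its posting count, as in FIG. 2, might be sketched as follows. `scene_info_db` and `post_scene_info` are hypothetical names, and the in-memory dictionary stands in for the actual database.

```python
from collections import defaultdict

# Hypothetical scene information database: for each block ID, the scene
# information posted by users together with its posting count.
scene_info_db = defaultdict(lambda: defaultdict(int))

def post_scene_info(block_id, scene_info):
    """Register scene information for a block; repeated postings of the
    same information simply increase its posting count."""
    scene_info_db[block_id][scene_info] += 1

post_scene_info(26, "singer A")
post_scene_info(26, "singer A")
post_scene_info(26, "guitar solo")

# Blocks with no postings simply have no entry in the database.
print(dict(scene_info_db[26]))  # -> {'singer A': 2, 'guitar solo': 1}
```

The posting count gives a natural relevance signal when the server later matches first search information against registered scene information.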
  • The scene information database may also be generated and updated on the system operator side, without depending on the scene information provided from the clients 2.
  • The distribution server 1 includes a control unit 11 and an interface unit 12, as shown in FIG.
  • The control unit 11 includes, as a computer, a CPU, a ROM, a RAM, and the like.
  • The control unit 11 is an example of the receiving means, the first determination means, and the second determination means of the present disclosure.
  • The control unit 11 performs transmission and reception control of content, transmission and reception control of pseudo camera work data, and the like.
  • When the control unit 11 receives a request for pseudo camera work data from the client 2, it determines the pseudo camera work data to be returned to the client 2 based on the request. The method of determining the pseudo camera work data will be described later.
  • The client 2 includes a control unit 21, a storage unit 22, a video RAM 23, a video control unit 24, an operation processing unit 25, an audio control unit 26, an interface unit 27, a bus 28, and the like. These components are connected to the bus 28.
  • A display unit 24a including a display is connected to the video control unit 24; the display is an example of the display means.
  • An operation unit 25a is connected to the operation processing unit 25. Examples of the operation unit 25a include a mouse, a keyboard, and a remote controller; a touch panel serving as both the display unit 24a and the operation unit 25a may also be used.
  • The control unit 21 receives operation instructions given by the user from the operation unit 25a via the operation processing unit 25. The user can perform the above-described pseudo camera work operations using the operation unit 25a.
  • A speaker 26a is connected to the audio control unit 26, and the interface unit 27 is connected to the network NW.
  • The control unit 21 includes, as a computer, a CPU, a ROM, a RAM, and the like, and has a timer function.
  • The storage unit 22 is configured by, for example, a hard disk drive (HDD) and stores an OS (Operating System), player software, and the like.
  • The player software is a program for playing back content. The player software may be downloaded from, for example, a predetermined server connected to the network NW, or may be recorded on a recording medium and read via a drive for that recording medium.
  • By executing the player software, the control unit 21 functions as a player that reproduces content: it sequentially acquires the content streamed from the distribution server 1 and reproduces it.
  • The RAM of the control unit 21 is provided with a buffer memory, in which, for example, the moving image data included in the content streamed from the distribution server 1 is temporarily stored. The buffer memory also temporarily stores, for example, the pseudo camera work data distributed from the distribution server 1.
  • The control unit 21 outputs the moving image data from the buffer memory to the video RAM 23, which is provided with a frame buffer. Into the frame buffer is written, for example, the image data of the portion of each image frame of the reproduced moving image that corresponds to the display range indicated by the pseudo camera work data.
  • The video control unit 24 draws the image data written in the frame buffer onto the corresponding screen for display.
  • When audio data is included in the content held in the buffer memory, the control unit 21 reproduces the audio data from the buffer memory and outputs it to the audio control unit 26, which generates an analog audio signal from the audio data and outputs it to the speaker 26a.
  • Through the user's pseudo camera work operation, the control unit 21 receives a designation of a partial display range within the image frames constituting the moving image being displayed on the screen.
  • The control unit 21 also receives instructions to change the display range within the image frames of the moving image being displayed, and changes the displayed range in response to such instructions.
  • By operating the pseudo camera work, the user can change the display range of the moving image being displayed by changing at least one of the viewpoint position, the line-of-sight direction, and the visual field area.
  • The viewpoint position is the position from which a person is watching the moving image, and the line-of-sight direction is the direction of that person's line of sight toward the moving image.
  • The visual field area is, for example, the area of the region within the person's visual field on a virtual screen arranged in the three-dimensional virtual space; alternatively, the visual field area may be the range of the person's visual field itself.
  • FIGS. 3A to 3C are diagrams showing examples of virtual screens and of the display range on each virtual screen.
  • In one example, a screen SC1 is defined as the virtual screen.
  • The screen SC1 is a rectangular flat screen, and the moving image is projected onto this rectangular plane.
  • The display range R1 on the screen SC1 is defined by, for example, an X coordinate, a Y coordinate, a width, and a height.
  • The upper-left vertex of the screen SC1 is the origin of the coordinate system of the screen SC1.
  • The X coordinate and the Y coordinate define the viewpoint position: the X coordinate is the horizontal coordinate of the upper-left vertex of the display range R1, and the Y coordinate is its vertical coordinate.
  • A point a predetermined distance away from the screen SC1 in the three-dimensional virtual space may be assumed as the viewpoint, and the line passing through the viewpoint and perpendicularly intersecting the screen SC1 is the line of sight.
  • The point where the line of sight intersects the screen SC1 is the center of the display range R1.
  • The width and the height define the visual field area: they are the horizontal and vertical lengths of the display range R1.
  • For the screen SC1, the line-of-sight direction is determined in advance.
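The display range R1 on the rectangular screen SC1 can be represented directly by its four parameters. The sketch below is a hypothetical illustration; the class name `DisplayRangeR1` and the `center` helper, which returns the point where the line of sight perpendicularly intersects SC1, are invented for this example.

```python
from dataclasses import dataclass

@dataclass
class DisplayRangeR1:
    """Display range on the rectangular screen SC1 (hypothetical layout).
    (x, y) is the upper-left vertex of the range in the screen's coordinate
    system, whose origin is the upper-left vertex of SC1."""
    x: float
    y: float
    width: float
    height: float

    def center(self):
        # The line of sight intersects SC1 perpendicularly at this point.
        return (self.x + self.width / 2, self.y + self.height / 2)

r1 = DisplayRangeR1(x=100, y=50, width=640, height=360)
print(r1.center())  # -> (420.0, 230.0)
```

Since the line-of-sight direction is fixed for SC1, these four numbers fully describe one pseudo camera work state on a rectangular screen.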
  • In another example, a screen SC2 is defined as the virtual screen.
  • The screen SC2 is a cylindrical screen, and the moving image is projected onto the side surface of the cylinder.
  • The side surface of the cylinder is an example of a virtual solid surface.
  • A cylindrical panoramic video is projected onto the screen SC2.
  • The panoramic video is, for example, an omnidirectional video, but may also be a partial-orientation video with a viewing angle narrower than 360 degrees.
  • The display range R2 on the screen SC2 is defined by, for example, an azimuth angle, a horizontal viewing angle, and a height. The azimuth angle determines the line-of-sight direction. For example, the midpoint of the central axis of the cylinder of the screen SC2 is taken as the viewpoint.
  • The viewpoint is the origin of the coordinate system of the three-dimensional virtual space, and the central axis of the screen SC2 is the Z axis.
  • The X axis passes through the origin and is perpendicular to the Y axis and the Z axis; the Y axis passes through the origin and is perpendicular to the X axis and the Z axis.
  • The azimuth angle determines the direction of the line of sight from the viewpoint. The line of sight is, for example, perpendicular to the Z axis, and the azimuth angle is, for example, the angle between the X axis and the line of sight.
  • The horizontal viewing angle and the height define the visual field area.
  • The horizontal viewing angle is the angle indicating the range of the horizontal visual field centered on the line-of-sight direction.
  • The height is the vertical length of the display range R2. Based on the azimuth angle, the horizontal viewing angle, and the height, a quadrangular pyramid indicating the visual field range in the three-dimensional virtual space is defined; this pyramid is the view volume.
  • A view volume is the range in the three-dimensional virtual space that is subject to projection transformation. Strictly, the actual view volume is a quadrangular frustum, but a quadrangular pyramid is used here for convenience of explanation.
  • The vertex of the view volume is the viewpoint, and the line of sight passes through the center of the bottom face of the view volume.
  • The angle formed by the side faces P21 and P22 parallel to the Z axis is the horizontal viewing angle.
  • The vertical length of the surface where the view volume intersects the screen SC2 is the height, and that intersection surface is the display range R2.
  • For the screen SC2, the viewpoint position is determined in advance.
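For the cylindrical screen SC2, the azimuth angle and the horizontal viewing angle determine which horizontal slice of the panoramic frame is visible, wrapping around the 360-degree seam. The following is a hypothetical sketch of that mapping; `visible_columns` and its linear angle-to-pixel mapping are assumptions for illustration, not the patented method.

```python
def visible_columns(azimuth_deg, horizontal_view_deg, frame_width):
    """For a 360-degree cylindrical panorama laid out as a frame of
    frame_width pixels, return the horizontal pixel range(s) visible for a
    line of sight at azimuth_deg with the given horizontal viewing angle.
    The range may wrap around the seam at 360 degrees, in which case two
    pixel spans are returned (hypothetical linear mapping)."""
    left = (azimuth_deg - horizontal_view_deg / 2) % 360
    right = (azimuth_deg + horizontal_view_deg / 2) % 360
    to_px = lambda angle: int(angle / 360 * frame_width)
    if left < right:
        return [(to_px(left), to_px(right))]
    # Wrapped across the seam: the visible slice splits into two spans.
    return [(to_px(left), frame_width), (0, to_px(right))]

print(visible_columns(90, 60, 3600))   # -> [(600, 1200)]
print(visible_columns(350, 40, 3600))  # -> [(3300, 3600), (0, 100)]
```

The vertical extent of R2 would be handled analogously using the height parameter.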
  • In yet another example, a screen SC3 is defined as the virtual screen.
  • The screen SC3 is a spherical screen, and the moving image is displayed on the spherical surface.
  • A spherical panoramic video is displayed on the screen SC3.
  • The display range R3 on the screen SC3 is defined by, for example, an azimuth angle, an elevation angle, a horizontal viewing angle, and a vertical viewing angle.
  • The azimuth angle and the elevation angle determine the line-of-sight direction.
  • The viewpoint is located within the range surrounded by the screen SC3, for example at the center of the sphere of the screen SC3.
  • The viewpoint is the origin of the coordinate system of the three-dimensional virtual space, and the vertical coordinate axis is the Z axis.
  • The X axis passes through the origin and is perpendicular to the Y axis and the Z axis; the Y axis passes through the origin and is perpendicular to the X axis and the Z axis.
  • The azimuth angle is, for example, the angle formed by the XZ plane and the line of sight, and the elevation angle is, for example, the angle formed by the XY plane and the line of sight.
  • The horizontal viewing angle and the vertical viewing angle define the visual field area.
  • The horizontal viewing angle is the angle indicating the range of the horizontal visual field centered on the line-of-sight direction, and the vertical viewing angle is the angle indicating the range of the vertical visual field centered on the line-of-sight direction.
  • A line on the XY plane that passes through the origin and perpendicularly intersects the line of sight is defined as the vertical rotation axis of the line of sight, and a line that passes through the origin and perpendicularly intersects both the line of sight and the vertical rotation axis is defined as the horizontal rotation axis of the line of sight.
  • Based on the azimuth angle, the elevation angle, the horizontal viewing angle, and the vertical viewing angle, a quadrangular pyramid indicating the visual field range in the three-dimensional virtual space is defined; this pyramid is the view volume.
  • The vertex of the view volume is the viewpoint, and the line of sight passes through the center of the bottom face of the view volume.
  • The angle formed by the side faces P31 and P32 parallel to the Z axis is the horizontal viewing angle, and the angle formed by the side faces P33 and P34 is the vertical viewing angle.
  • The surface where the view volume intersects the screen SC3 is the display range R3.
  • For the screen SC3, the viewpoint position is determined in advance.
  • The perspective transformation converts the three-dimensional coordinates of the display range on the virtual screen into two-dimensional coordinates based on the viewpoint position, the line-of-sight direction, and the visual field area. From the converted two-dimensional coordinates it can be specified, for example, which part of the image frames constituting the panoramic video falls within the display range.
  • The display range R3 changes according to the line-of-sight direction and the visual field area; that is, R3 is the range corresponding to the line-of-sight direction and the visual field area.
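For the spherical screen SC3, whether a given direction falls within the display range R3 can be approximated by comparing its angle to the line of sight against the viewing angles. The sketch below is a simplified, hypothetical illustration (a cone test rather than the exact frustum intersection); `line_of_sight` and `in_display_range` are invented names, and the angle conventions follow the description above only loosely.

```python
import math

def line_of_sight(azimuth_deg, elevation_deg):
    """Unit vector of the line of sight from the viewpoint at the sphere's
    center, given the azimuth (angle from the XZ plane) and the elevation
    (angle from the XY plane) -- a hypothetical convention for illustration."""
    az, el = math.radians(azimuth_deg), math.radians(elevation_deg)
    return (math.cos(el) * math.cos(az),
            math.cos(el) * math.sin(az),
            math.sin(el))

def in_display_range(direction, sight, horizontal_view_deg, vertical_view_deg):
    """Rough test of whether a direction on screen SC3 falls inside the view
    volume: compare the angle to the line of sight against half of the larger
    viewing angle (a simplification of the exact frustum test)."""
    dot = sum(a * b for a, b in zip(direction, sight))
    angle = math.degrees(math.acos(max(-1.0, min(1.0, dot))))
    return angle <= max(horizontal_view_deg, vertical_view_deg) / 2

sight = line_of_sight(0, 0)  # looking along the X axis
print(in_display_range(line_of_sight(10, 0), sight, 60, 40))  # -> True
print(in_display_range(line_of_sight(90, 0), sight, 60, 40))  # -> False
```

An exact implementation would instead test against the four side faces P31 to P34 of the view volume separately.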
  • The screen SC3 may be any solid that completely surrounds the viewpoint, and may be, for example, a cube-shaped screen.
  • The screens SC1 to SC3 may be selected according to the type of moving image data: for example, the screen SC1 for a video other than a panoramic video, the screen SC2 for a cylindrical panoramic video, and the screen SC3 for a spherical panoramic video.
  • FIG. 4A shows an example in which the virtual screen is a rectangular screen SC1.
  • FIG. 4B shows an example in which the virtual screen is a cylindrical screen SC2.
  • FIG. 4C shows an example in which the virtual screen is a spherical screen SC3.
  • For example, when the aspect ratio is fixed at 16:9, determining one of the width and the height determines the other, so the pseudo camera work data may include only one of them.
  • For example, pseudo camera work data indicates the display range in the image frame at each reproduction position, such as 0 ms, 16 ms, 33 ms, and 49 ms. Note that a 16 ms interval roughly corresponds to a display refresh rate of 60 Hz.
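With the aspect ratio fixed at 16:9, pseudo camera work data needs to carry only one of the width and the height; the other follows arithmetically. The helper below is a hypothetical sketch of that derivation (`complete_dimensions` is invented for this example).

```python
def complete_dimensions(width=None, height=None, aspect=(16, 9)):
    """With a fixed aspect ratio (16:9 by default), pseudo camera work data
    may carry only one of width and height; derive the missing one."""
    num, den = aspect
    if width is not None:
        return width, width * den / num
    if height is not None:
        return height * num / den, height
    raise ValueError("either width or height is required")

print(complete_dimensions(width=1280))  # -> (1280, 720.0)
print(complete_dimensions(height=360))  # -> (640.0, 360)
```

Storing only one dimension per reproduction position roughly halves the size of this part of the pseudo camera work data.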
  • FIG. 5A is a flowchart showing main processing in the control unit 21 of the client 2.
  • FIG. 5B is a flowchart showing an example of the input process in step S4 shown in FIG. 5A.
  • First, the client 2 transmits a page request to the distribution server 1, receives the Web page transmitted from the distribution server 1 in response, and displays it on the display of the display unit 24a.
  • On this Web page, content information is displayed in a selectable manner.
  • The content information displayed on the Web page covers some of the plurality of contents uploaded to the distribution server 1; it corresponds, for example, to content information recommended to the user, or to content information retrieved based on a keyword input by the user.
  • The content information includes, for example, information such as the content title.
  • When content is selected, the control unit 21 initializes the work file and starts playback of the selected content (step S1). The moving image reproduced from the moving image data included in the content streamed from the distribution server 1 is thereby displayed on the main screen of the display.
  • The work file here is a file for uploading pseudo camera work data and is stored in the storage unit 22 in advance. When the work file is initialized, the title of the content whose playback has started is set in the work file; a content ID may also be set.
  • At this point, the operation input of the pseudo camera work for the moving image displayed on the main screen is set to "manual".
  • That is, the display range of the moving image displayed on the main screen does not depend on, for example, pseudo camera work data acquired from the distribution server 1, and is changed by the user's own pseudo camera work operations.
  • Next, the control unit 21 stores, in the work file, pseudo camera work data indicating the display range of the moving image at the current reproduction position (step S2). The control unit 21 then determines whether scene information has been input by the user at the current playback position (step S3).
  • FIGS. 5C and 5D are diagrams showing examples of screens for inputting scene information.
  • a reproduced moving image is displayed, and buttons corresponding to a plurality of persons appearing in the scene of the moving image being displayed are displayed so as to be selectable.
  • This person is a singer, for example, and is an example of a subject.
  • These buttons are buttons for inputting scene information (hereinafter referred to as “scene information input buttons”).
  • Each scene information input button is displayed, for example, when the user gives an instruction to display the scene information input button via the operation unit 25a during content reproduction.
•   Each scene information input button is associated with an ID (identification information) for identifying a person. This association is performed, for example, when the content is generated on the distribution server 1 side. In other words, an ID for identifying the corresponding subject is associated with each scene information input button.
  • a reproduced moving image is displayed and a text input field for inputting text information such as a character string is displayed.
  • the text input field is displayed, for example, when the user gives an instruction to display the text input field via the operation unit 25a during content reproduction.
•   when the scene information input button is selected, the control unit 21 determines that there is an input of scene information (step S3: YES), and the process proceeds to the input process of step S4.
•   the selection of the scene information input button is performed, for example, by clicking the scene information input button with a mouse or tapping it with a finger or a pen.
•   likewise, when the text input field is designated, the control unit 21 determines that there is an input of scene information (step S3: YES), and the process proceeds to the input process of step S4.
•   the designation of the text input field is performed, for example, by clicking the text input field with the mouse or tapping it with a finger or a pen.
•   on the other hand, when it is determined that there is no input of scene information (step S3: NO), the process proceeds to step S6.
•   in step S4, the control unit 21 determines whether or not the input of the scene information is an “ID input” by the scene information input button (step S41). For example, when the scene information input button is selected in the display state of the screen shown in FIG. 5C, it is determined that an “ID input” has been made by the scene information input button (step S41: YES), and the process proceeds to step S42. On the other hand, for example, when the text input field is designated in the display state of the screen shown in FIG. 5D, it is determined that the input is not an “ID input” by the scene information input button (step S41: NO), and the process proceeds to step S43.
•   in step S42, the control unit 21 stores the ID corresponding to the selected scene information input button as scene information, and stores reproduction position information, which indicates the reproduction position of the moving image at the time the scene information input button was selected, in association with the scene information.
•   in step S43, the control unit 21 determines that the input is text, and stores playback position information indicating the playback position of the moving image at the time the user inputs the first character from, for example, a keyboard. Note that playback position information indicating the playback position of the moving image at any point from the input of the second character until the completion of the text input may be stored instead of the playback position at the time the first character is input.
  • the control unit 21 stores the text information input in the text input field by the user as scene information in association with the reproduction position information stored in step S43 (step S44).
•   in step S5, the control unit 21 stores the scene information and the reproduction position information stored in the input process of FIG. 5B in the work file in association with the pseudo camera work data stored in step S2. Thereby, the scene information and the reproduction position information are added to the pseudo camera work data.
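•   As a rough illustration, the work file built up in steps S1 to S5 can be modeled as a structure that accumulates, per playback position, the display range and any scene information entered by the user. The sketch below is an assumption for illustration only: the field names (`title`, `camera_work`, `scenes`) and the tuple layouts are invented, since the patent does not specify a concrete file format.

```python
# Hypothetical sketch of the work file built in steps S1-S5.
# Field names and value layouts are assumptions; the actual format is unspecified.

work_file = {
    "title": "",        # content title set at initialization (step S1)
    "camera_work": [],  # (playback_position, display_range) pairs (step S2)
    "scenes": [],       # (playback_position, scene_info) pairs (steps S4-S5)
}

def record_display_range(position_ms, display_range):
    """Step S2: store the display range at the current playback position."""
    work_file["camera_work"].append((position_ms, display_range))

def record_scene_info(position_ms, scene_info):
    """Steps S42-S44 / S5: associate scene information (an ID or free text)
    with the playback position at which it was input."""
    work_file["scenes"].append((position_ms, scene_info))

record_display_range(0, (0, 0, 640, 360))          # x, y, width, height
record_scene_info(2500, "singer_01")               # ID input via a button
record_scene_info(4000, "close-up of the stage")   # free-text input
print(len(work_file["camera_work"]), len(work_file["scenes"]))  # 1 2
```

•   On upload (steps S7 and S12), such a structure would be serialized and transmitted whole to the distribution server 1, then cleared on the client side.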
•   in step S6, the control unit 21 determines whether or not there is an instruction from the user to upload the pseudo camera work data.
•   if it is determined that there is an upload instruction (step S6: YES), the process proceeds to step S7.
•   on the other hand, if it is determined that there is no upload instruction (step S6: NO), the process proceeds to step S9.
•   in step S7, the control unit 21 transmits at least the work file storing the pseudo camera work data to the distribution server 1 via the network NW. Thereby, the portion of the pseudo camera work data accumulated from the start of playback of the moving image up to this point is uploaded.
•   in step S8, the control unit 21 clears the contents of the work file and returns to step S2. As a result, an empty work file remains in the storage unit 22.
•   in step S9, the control unit 21 determines whether or not the content has been played to its end position. If it is determined that the content has not been played to the end position (step S9: NO), the process returns to step S2 and the above processing is repeated. On the other hand, if it is determined that the content has been played to the end position (step S9: YES), the process proceeds to step S10.
•   in step S10, the control unit 21 displays an upload button on the screen together with a message asking the user whether to upload the pseudo camera work data.
•   in step S11, the control unit 21 determines whether or not there is an instruction from the user to upload the pseudo camera work data. For example, when the user selects the upload button, the control unit 21 determines that there is an upload instruction (step S11: YES), and proceeds to step S12. On the other hand, when it is determined that there is no upload instruction (step S11: NO), the processing shown in the flowchart ends.
•   in step S12, the work file storing at least the pseudo camera work data is transmitted to the distribution server 1 via the network NW, and the processing shown in the flowchart ends.
  • FIG. 6 is a flowchart showing processing in the control unit 11 of the distribution server 1.
  • the process shown in FIG. 6 is started when the distribution server 1 receives a work file from the client 2.
  • the control unit 11 stores the received work file in the work file storage area 31c (step S101).
•   the control unit 11 identifies the moving image data of the content in the moving image data storage area 31a based on the content title set in the received work file (step S102).
  • the control unit 11 specifies the pseudo camera work data to which the scene information and the reproduction position information are given from the pseudo camera work data stored in the received work file (step S103).
•   next, based on the coordinate positions and reproduction positions associated with the moving image blocks constituting the moving image data identified in step S102, on the display range indicated by the identified pseudo camera work data, and on the reproduction position information given to the pseudo camera work data, the control unit 11 identifies one or more moving image blocks (step S104). For example, among the plurality of moving image blocks constituting the moving image data, a moving image block is identified whose coordinate position is included in the display range indicated by the identified pseudo camera work data and which includes a part of the image frame at the reproduction position indicated by the reproduction position information given to that pseudo camera work data.
•   in step S105, the control unit 11 determines whether or not text information is included in the scene information given to the pseudo camera work data identified in step S103.
•   if text information is included (step S105: YES), the process proceeds to step S106.
•   if not (step S105: NO), the scene information includes an ID for identifying a person, and the process proceeds to step S108.
•   in step S106, the control unit 11 parses the text information included in the scene information and extracts word information. Thereby, for example, when the text information is a sentence, one or more words are extracted from the sentence.
  • the control unit 11 extracts a keyword from the word information extracted in step S106 as scene information to be registered using, for example, a keyword dictionary stored in advance (step S107).
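•   Steps S106 and S107 amount to tokenizing the text and keeping only the tokens found in the keyword dictionary. The toy sketch below uses whitespace tokenization and an invented dictionary purely for illustration; the actual parser and keyword dictionary on the distribution server 1 are unspecified, and real Japanese text would require a morphological analyzer rather than whitespace splitting.

```python
# Toy keyword extraction (steps S106-S107). The tokenizer and the dictionary
# contents are stand-ins for the unspecified parser and stored keyword dictionary.

KEYWORD_DICT = {"singer", "stage", "guitar"}  # assumed dictionary entries

def extract_keywords(text):
    words = text.lower().replace(",", " ").split()  # step S106: word extraction
    return [w for w in words if w in KEYWORD_DICT]  # step S107: dictionary filter

print(extract_keywords("The singer walks to the stage"))  # ['singer', 'stage']
```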
•   in step S108, the control unit 11 determines whether or not the scene information given to the pseudo camera work data, or scene information that is the same as or similar to the scene information extracted in step S107, is already registered in the scene information database in association with the moving image block identified in step S104.
  • this scene information database is a scene information database corresponding to the moving image data specified in step S102.
•   the scene information database is identified in the scene information storage area 31d before the processing of step S108. Whether or not two pieces of scene information are similar is determined using, for example, a previously stored synonym dictionary or thesaurus. For example, scene information that is synonymous or nearly synonymous with the scene information extracted in step S107 is determined to be similar scene information.
•   if it is determined that the scene information is not yet registered in the scene information database in association with the moving image block identified in step S104 (step S108: NO), the process proceeds to step S109. On the other hand, if it is determined that it is already registered in association with the moving image block identified in step S104 (step S108: YES), the process proceeds to step S110.
•   in step S109, the control unit 11 registers the scene information given to the pseudo camera work data, or the scene information extracted in step S107, in the scene information database in association with the moving image block identified in step S104.
•   then, the processing shown in FIG. 6 ends. As described above, the scene information input by the user is registered in the scene information database in association with the moving image block in which the scene corresponding to that scene information is displayed. Therefore, a scene information database reflecting the preferences of a plurality of users can be generated efficiently.
•   in step S110, the control unit 11 increments by 1 the number of postings of the scene information registered in association with the moving image block identified in step S104, and ends the processing shown in FIG. 6.
•   that is, “1” is added to the number of postings associated with the moving image block identified in step S104, and the updated number is registered in the scene information database.
•   in this way, the number of postings of the scene information input by users is registered in the scene information database in association with the moving image block in which the scene corresponding to that scene information is displayed. Therefore, it is possible to efficiently generate a scene information database from which it can be determined which moving image blocks display scenes with a high degree of attention, that is, scenes for which users have posted more scene information.
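•   The registration logic of steps S108 to S110 amounts to an upsert keyed by the pair (moving image block, scene information): insert with a posting count of 1 if absent, otherwise increment. A minimal sketch, assuming an in-memory dictionary in place of the actual scene information database (the key and value shapes are illustrative assumptions):

```python
# Minimal in-memory stand-in for the scene information database.
# Keys pair a moving image block ID with scene information; values hold the
# number of postings. This mirrors steps S108-S110, not the real storage.

scene_db = {}

def register_scene_info(block_id, scene_info):
    key = (block_id, scene_info)
    if key not in scene_db:      # step S108: not yet registered
        scene_db[key] = 1        # step S109: register with an initial count
    else:
        scene_db[key] += 1       # step S110: increment the posting count

register_scene_info("block_3_2", "singer_01")
register_scene_info("block_3_2", "singer_01")
register_scene_info("block_3_2", "stage")
print(scene_db[("block_3_2", "singer_01")])  # 2
```

•   A real implementation would additionally fold in the synonym/thesaurus matching of step S108 before choosing the key, so that synonymous scene information increments the same count.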
  • FIG. 7A is a flowchart showing processing in the control unit 21 of the client 2.
  • the processing shown in FIG. 7A is started, for example, when the user gives a search instruction for pseudo camera work data via the operation unit 25a during execution of the player software.
  • the control unit 21 determines whether or not there is a search key input by the user (step S21).
•   the search key is, for example, a key for searching for a scene in which a person the user likes appears.
  • the search key is an example of first search information used for searching a partial area of a moving image.
  • FIGS. 7B and 7C are diagrams showing examples of screens for inputting a search key.
•   on the screen shown in FIG. 7B, a reproduced moving image is displayed, and buttons corresponding to a plurality of persons appearing in the scene of the moving image being displayed are displayed so as to be selectable. These buttons are buttons for inputting search keys (hereinafter referred to as “search key input buttons”).
  • Each search key input button is displayed, for example, when the user gives an instruction to display the search key input button via the operation unit 25a during content reproduction.
•   Each search key input button is associated with an ID for identifying a person, like the scene information input buttons shown in FIG. 5C.
•   on the screen shown in FIG. 7C, a reproduced moving image is displayed, and a search keyword input field for inputting text information including a search keyword is displayed.
  • the search keyword input field is displayed, for example, when the user gives an instruction to display the search keyword input field via the operation unit 25a during content reproduction.
•   when the search key input button is selected, the control unit 21 determines that there is a search key input by the user (step S21: YES), and the process proceeds to step S22.
•   the selection method of the search key input button is the same as that of the scene information input button.
•   likewise, when a search keyword is entered in the search keyword input field, the control unit 21 determines that there is a search key input by the user (step S21: YES), and the process proceeds to step S22. The method for designating the search keyword input field is the same as that for the text input field.
•   on the other hand, if no search key has been entered, the control unit 21 determines that there is no search key input by the user (step S21: NO).
•   in step S22, the control unit 21 determines whether or not the input of the search key is an “ID input” by the search key input button. For example, when the search key input button is selected in the display state of the screen shown in FIG. 7B, it is determined that an “ID input” has been made by the search key input button (step S22: YES), and the process proceeds to step S23. On the other hand, for example, when the search keyword input field is designated in the display state of the screen shown in FIG. 7C, it is determined that the input is not an “ID input” by the search key input button (step S22: NO), and the process proceeds to step S24.
•   in step S23, the control unit 21 stores the ID corresponding to the selected search key input button as a search key.
•   in this case, reproduction position information indicating the reproduction position of the moving image at the time the search key input button was selected may be stored in association with the search key.
•   in step S24, the control unit 21 stores the text information input in the search keyword input field by the user as a search key.
•   in this case, playback position information indicating the playback position of the moving image at the time the user inputs the first character from a keyboard or the like may be stored in association with the search key.
•   note that playback position information indicating the playback position of the moving image at any point from the input of the second character until the completion of the text input may be stored instead of the playback position at the time the first character is input.
  • the control unit 21 transmits a request for pseudo camera work data to the distribution server 1 via the network NW (step S25).
  • the request for pseudo camera work data includes, for example, the title of the content selected by the user and the search key stored in step S23 or step S24.
  • the request for pseudo camera work data may include a content ID for identifying the content. Further, the request for pseudo camera work data may be configured to include reproduction position information associated with the search key.
•   next, the control unit 21 receives the work file transmitted from the distribution server 1 in response to the request (step S26).
  • the control unit 21 displays a moving image on a sub-screen different from the main screen according to the pseudo camera work data stored in the received work file (step S27).
  • a plurality of work files may be received from the distribution server 1.
  • a plurality of sub screens are displayed as thumbnail screens. With this thumbnail screen, a list of pseudo camera work data can be displayed.
  • FIG. 7D is a diagram showing an example of the main screen MV and the sub screens SV1 to SV5.
  • moving images are displayed on the sub screens SV1 to SV5 according to the respective pseudo camera work data received from the distribution server 1.
  • the display range indicated by the received pseudo camera work data includes scenes included in the moving image block searched based on the search key described above.
•   the playback positions of the moving images displayed on the sub screens SV1 to SV5 are the same. That is, the image frames displayed on the sub screens SV1 to SV5 are the same, but the display ranges within the image frames differ from one another. This means that, for example, the angle and field of view of the virtual camera differ.
  • the user can see the scene he / she wants to see during the content reproduction by the moving images displayed on the sub-screens SV1 to SV5.
•   when the user selects one of the sub screens SV1 to SV5, the moving image being displayed on the main screen MV is switched to the selected moving image.
•   in step S28, the control unit 21 determines whether or not there is an instruction from the user to end the player. If it is determined that there is an instruction to end the player (step S28: YES), the processing shown in FIG. 7A ends. On the other hand, if it is determined that there is no instruction to end the player (step S28: NO), the process returns to step S21 and the above processing is continued.
  • FIG. 8 is a flowchart showing processing in the control unit 11 of the distribution server 1.
  • the process illustrated in FIG. 8 is started when the distribution server 1 receives a request for pseudo camera work data from the client 2.
  • the control unit 11 acquires a search key from the received request (step S111).
  • the request may include text information composed of sentences.
  • the control unit 11 acquires a search keyword as a search key from the sentence by parsing the text information.
  • the request may include playback position information.
  • the control unit 11 acquires a search key and reproduction position information from the request.
  • the control unit 11 specifies the scene information database associated with the title or content ID of the content included in the request from the scene information storage area 31d (step S112).
•   next, based on the search key acquired in step S111 and the scene information registered in the scene information database identified in step S112, the control unit 11 searches the scene information database for a moving image block associated with scene information corresponding to the search key (step S113).
•   the scene information corresponding to the search key is, for example, scene information that matches the search key.
•   when a plurality of search keys are acquired, the scene information corresponding to the search keys is, for example, scene information that includes all of the search keys.
•   when playback position information has been acquired in step S111, the control unit 11 refers to the scene information database and searches, from among the moving image blocks in the image frame at the playback position indicated by the acquired playback position information, for a moving image block associated with scene information corresponding to the search key.
•   in step S114, the control unit 11 determines whether or not a moving image block was found by the search in step S113. If it is determined that no moving image block was found (step S114: NO), the process proceeds to step S115.
•   in step S115, the control unit 11 notifies the client 2 of information indicating that there is no scene matching the search, and ends the processing shown in FIG. 8.
•   on the other hand, if it is determined that a moving image block was found (step S114: YES), the process proceeds to step S116.
•   in step S116, the control unit 11 determines the moving image block found in step S113 to be a moving image block including the scene being searched for. For example, a moving image block including a scene in which a subject, such as a person, indicated by the search key appears is determined.
•   next, the control unit 11 acquires, from the work file storage area 31c, pseudo camera work data indicating a display range that includes the moving image block determined in step S116 (step S117). That is, pseudo camera work data whose display range passes through the moving image block determined in step S116 is acquired.
•   in step S118, the control unit 11 determines whether or not the number of pieces of pseudo camera work data acquired in step S117 is equal to or greater than a predetermined number.
•   the predetermined number is set to, for example, the number of sub screens in the client 2. If it is determined that the number of pieces of pseudo camera work data acquired in step S117 is less than the predetermined number (step S118: NO), the process proceeds to step S119. On the other hand, if it is determined that it is equal to or greater than the predetermined number (step S118: YES), the process proceeds to step S120.
•   in step S119, the control unit 11 determines the pseudo camera work data acquired in step S117 to be the pseudo camera work data to be provided to the user of the client 2.
•   in step S120, the control unit 11 ranks the pseudo camera work data acquired in step S117 based on the number of postings of the scene information associated with the moving image blocks determined in step S116. For example, the control unit 11 ranks the pseudo camera work data acquired in step S117 in descending order of the total number of postings associated with the moving image blocks included in the display range indicated by each piece of pseudo camera work data. Then, the control unit 11 determines the top predetermined number of pieces of pseudo camera work data in the ranking as the pseudo camera work data to be provided to the user of the client 2 (step S121).
  • the upper predetermined number may be set based on the number of sub-screens that can be displayed by the client 2 (five in the example of FIG. 7D).
  • the request described above includes information indicating the number of sub screens.
•   in this way, pseudo camera work data that passes through moving image blocks with a high degree of attention, for which more scene information has been posted, can be determined as the pseudo camera work data to be provided to the user of the client 2.
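•   Steps S118 to S121 reduce to sorting the candidate pseudo camera work data by the total posting count of the moving image blocks their display ranges pass through, then keeping the top N (e.g. the client's number of sub screens). The sketch below assumes, for illustration, that each candidate carries the set of block IDs its display range covers; all names and values are invented:

```python
# Hypothetical ranking of candidate pseudo camera work data (steps S118-S121).
# `postings` maps a moving image block ID to its posting count; each candidate
# lists the block IDs its display range passes through.

postings = {"b1": 12, "b2": 3, "b3": 7, "b4": 1}

candidates = [
    {"id": "cw_a", "blocks": ["b1", "b2"]},   # total postings: 15
    {"id": "cw_b", "blocks": ["b3"]},         # total postings: 7
    {"id": "cw_c", "blocks": ["b1", "b3"]},   # total postings: 19
]

def select_top(candidates, postings, n):
    if len(candidates) < n:                   # step S118: NO -> provide all (step S119)
        return candidates
    ranked = sorted(                          # step S120: rank by total postings
        candidates,
        key=lambda c: sum(postings.get(b, 0) for b in c["blocks"]),
        reverse=True,
    )
    return ranked[:n]                         # step S121: keep the top N

top = select_top(candidates, postings, 2)
print([c["id"] for c in top])  # ['cw_c', 'cw_a']
```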
•   the control unit 11 transmits the work file storing the pseudo camera work data determined in step S119 or step S121 to the client 2 (step S122), and ends the processing shown in FIG. 8.
•   in the above description, the case where the search key is input by the user while a moving image is displayed on the main screen has been shown as an example.
  • the input of the search key by the user and the transmission of the request from the client 2 to the distribution server 1 may be performed when a moving image is not displayed on the main screen.
  • a request including a search key input from the mobile terminal of the user of the client 2 may be transmitted to the distribution server 1.
•   in this case, the control unit 11 of the distribution server 1 does not transmit the work file storing the pseudo camera work data determined in step S119 or step S121 to the mobile terminal that transmitted the request.
•   instead, the control unit 11 of the distribution server 1 stores the work file in the work file storage area 31c in association with a user ID identifying the user of the client 2 and of the mobile terminal. Thereafter, when the client 2 accesses the distribution server 1 and, for example, the user logs in using the user ID, the distribution server 1 transmits the work file stored in association with that user ID to the client 2. Thereby, the client 2 displays a moving image on a sub screen different from the main screen in accordance with the pseudo camera work data stored in the received work file.
•   as described above, when the distribution server 1 receives a request for pseudo camera work data from the client 2, it determines, based on the scene information stored in advance in association with partial areas in the image frames constituting the moving image, a partial area associated with scene information corresponding to the search key included in the request, and determines pseudo camera work data indicating a display range including the determined partial area as the pseudo camera work data to be provided to the user of the client 2. Therefore, pseudo camera work data indicating a display range corresponding to the pseudo camera work desired by the user who input the search key can be efficiently retrieved and provided to the user. For example, pseudo camera work data indicating a display range including a scene in which a subject, such as a person, that the user wants to see appears can be provided to the user.
•   further, when the request includes playback position information, the distribution server 1 determines the partial area associated with the scene information corresponding to the search key included in the request from among the partial areas in the image frame at the playback position indicated by that playback position information. Therefore, pseudo camera work data indicating the display range at the timing desired by the user within the entire playback time of the moving image can be efficiently searched for and provided to the user.
•   note that, in step S117 shown in FIG. 8, the control unit 11 may be configured to generate and acquire pseudo camera work data indicating a display range including the moving image blocks determined in step S116, based on the scene information database identified in step S112. For example, the control unit 11 determines, for each playback position of the moving image blocks, a display range centered on the moving image block with the highest number of postings in the scene information database described above, or centered on the center of gravity of a plurality of top moving image blocks with high posting counts, and generates pseudo camera work data indicating the determined display range for each playback position.
  • FIG. 9A is a diagram showing a display range when a partial region in one image frame constituting a moving image is a pixel.
  • FIG. 9B is a diagram illustrating a display range when a partial area in one image frame constituting a moving image is a moving image block.
•   in the example of FIG. 9A, an area including 50% of the total number of postings of the entire image frame F, centered on the pixel P with the highest number of postings, is determined as the display range R21. This 50% is the display ratio. For example, if the total number of postings for the entire image frame F is “30”, the total number of postings in the display range R21 is “15”.
•   when there are a plurality of pixels with the highest number of postings, the center of gravity of these pixels is set as the center of the display range.
•   in the example of FIG. 9B, the display range R22 is an area including 50% of the total number of postings of the entire image frame F, centered on the center of the moving image block B1 with the highest number of postings.
•   when there are a plurality of moving image blocks with the highest number of postings, the center of gravity obtained from the centers of these moving image blocks is set as the center of the display range.
  • a part of the moving image blocks B2 to B9 adjacent to the moving image block B1 is included in the display range R22.
•   in this case, the numbers of postings of the adjacent moving image blocks B2 to B9 are apportioned so that the number of postings included in the display range R22 is 50% of the total number of postings of the entire image frame F.
  • the display ratio is not limited to 50% of the total number of postings of the entire image frame F.
  • the display ratio may be determined so that the number of postings in the display range R21 or R22 is 40% or more of the total number of postings in the entire image frame F.
•   note that the size of the display range is adjusted with the aspect ratio fixed at, for example, 16:9.
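•   The display-range determination of FIGS. 9A and 9B can be approximated by growing a region around the partial area with the highest posting count until it contains the target share (here 50%) of the frame's postings. The grid, the posting counts, and the grow-by-concentric-rings strategy below are illustrative assumptions; the sketch also omits the apportioning of partially included blocks and the fixed 16:9 aspect ratio described above.

```python
# Illustrative sketch: grow a region around the block with the most postings
# until it covers at least 50% of all postings in the frame (display ratio).

GRID = [           # posting counts of the partial areas in one image frame F
    [0, 1, 0, 0],
    [1, 9, 4, 0],
    [0, 3, 2, 0],
    [0, 0, 0, 0],
]

def display_range(grid, ratio=0.5):
    total = sum(sum(row) for row in grid)
    rows, cols = len(grid), len(grid[0])
    # center on the block with the highest posting count (cf. FIG. 9B)
    cy, cx = max(
        ((r, c) for r in range(rows) for c in range(cols)),
        key=lambda rc: grid[rc[0]][rc[1]],
    )
    for radius in range(max(rows, cols)):
        r0, r1 = max(0, cy - radius), min(rows - 1, cy + radius)
        c0, c1 = max(0, cx - radius), min(cols - 1, cx + radius)
        covered = sum(
            grid[r][c] for r in range(r0, r1 + 1) for c in range(c0, c1 + 1)
        )
        if covered >= ratio * total:
            return (r0, c0, r1, c1)  # (top, left, bottom, right) block indices
    return (0, 0, rows - 1, cols - 1)

print(display_range(GRID))  # (0, 0, 2, 2)
```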
•   FIG. 9C is a conceptual diagram showing an example in which the display ranges R31 to R33, determined every 5 seconds, are interpolated so as to change continuously.
  • the display range R31 in the unit playback time range of “0 to 5 seconds” is determined as the display range at the playback position of 2.5 seconds in the middle of the unit playback time range.
  • the display range R32 in the unit reproduction time range of 5 seconds to 10 seconds is determined as the display range at the reproduction position of 7.5 seconds in the middle of the unit reproduction time range.
  • the display range R33 in the unit playback time range of “10 seconds to 15 seconds” is determined as the display range at the playback position of 12.5 seconds in the middle of this unit playback time range.
•   between these playback positions, the display range in the image frame F is interpolated so as to change continuously. Such interpolation does not have to be linear as shown in FIG. 9C; it is desirable to use a smooth curve.
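•   The interpolation of FIG. 9C can be sketched as follows, reducing each display range to a center point and moving that center continuously between the anchors determined at 2.5 s, 7.5 s, and 12.5 s. The anchor coordinates are invented for illustration, and linear interpolation is used for brevity even though, as noted above, a smooth curve is preferable.

```python
# Linear interpolation of display-range centers between anchor playback
# positions (cf. FIG. 9C). Anchor times and coordinates are illustrative.

anchors = [
    (2.5, (100, 50)),    # center of R31 at 2.5 s
    (7.5, (300, 80)),    # center of R32 at 7.5 s
    (12.5, (200, 60)),   # center of R33 at 12.5 s
]

def interpolate_center(t):
    if t <= anchors[0][0]:
        return anchors[0][1]
    if t >= anchors[-1][0]:
        return anchors[-1][1]
    for (t0, c0), (t1, c1) in zip(anchors, anchors[1:]):
        if t0 <= t <= t1:
            f = (t - t0) / (t1 - t0)  # fraction of the way from t0 to t1
            return tuple(a + f * (b - a) for a, b in zip(c0, c1))

print(interpolate_center(5.0))  # midway between R31 and R32: (200.0, 65.0)
```

•   Replacing the per-segment linear blend with, for example, a spline through the same anchors would yield the smooth-curve interpolation the text recommends.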
•   FIG. 10A is a flowchart showing the automatic generation process of pseudo camera work data in the control unit 11 of the distribution server 1. This flowchart corresponds to the example shown in FIG. 9.
•   the automatic generation process of pseudo camera work data shown in FIG. 10A is executed, for example, every time the scene information database is updated, or every predetermined number of updates (for example, every 10 updates). Alternatively, the automatic generation process may be configured to be performed periodically, or it may be performed at an arbitrary timing at the discretion of an operator or the like.
•   when the automatic generation process of pseudo camera work data is started, new pseudo camera work data is generated. At this stage, the generated pseudo camera work data is empty.
•   first, the control unit 11 registers the display range at the reproduction position “0”, in association with the reproduction position “0”, as the first element of the new pseudo camera work data (step S201).
  • the display range at the reproduction position “0” is determined, for example, for the entire image frame F at the reproduction position “0”.
  • control unit 11 sets “0” to the variable i (step S202).
  • control unit 11 determines a set of image frames F at each reproduction position included in the above-described unit reproduction time range “T ⁇ i to T ⁇ (i + 1)” as a processing target (step S203).
  • T is the time length of one unit playback time range.
•   the control unit 11 refers to the scene information database and determines, within the processing target determined in step S203, the center of the partial area with the highest number of postings as the center of the display range (step S204).
•   next, the control unit 11 refers to the scene information database and determines the display range in the image frame F so that the display ratio is, for example, 0.5 (step S205). This display ratio is calculated, for example, by dividing the number of postings included in the display range by the total number of postings of the processing target determined in step S203.
•   next, the control unit 11 determines the reproduction position of the display range determined in step S205 to be “T × i + T / 2” (step S206). For example, in the unit playback time range of 0 to 5 seconds, the playback position “T × i + T / 2” is determined to be 2.5 seconds.
•   next, the display range in the intervening image frames is determined by interpolation (step S207).
•   in the interpolation, for example, based on the display range at the reproduction position “0” and the display range at the reproduction position “2.5 seconds”, the display range in the image frames F located between the reproduction position “0” and the reproduction position “2.5 seconds” is calculated so as to change continuously.
•   the control unit 11 registers the display range determined in step S205 and the display range determined in step S207 in the new pseudo camera work data in association with their respective reproduction positions (step S208).
  • control unit 11 increments the variable i by 1 (step S209).
  • control unit 11 determines whether “T ⁇ i” is greater than the entire playback time of the moving image (step S210). If it is determined that “T ⁇ i” is not greater than the entire playback time of the moving image (step S210: NO), the process returns to step S203. As a result, the same processing as described above is executed in the next unit reproduction time range. If it is determined that “T ⁇ i” is greater than the entire playback time of the moving image (step S210: YES), the automatic generation process of pseudo camera work data is terminated.
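•   The loop of FIG. 10A can be summarized as: for each unit playback time range of length T, choose a display range from the posting counts, assign it to the midpoint T × i + T / 2, and interpolate back to the previous anchor. The compact skeleton below uses a placeholder for the posting-count-based choice of steps S204 to S205 and omits the step S207 interpolation; T, the total playback time, and the returned labels are illustrative values.

```python
# Skeleton of the automatic generation loop of FIG. 10A (steps S201-S210).

T = 5.0        # length of one unit playback time range, in seconds
TOTAL = 15.0   # entire playback time of the moving image

def choose_display_range(i):
    # Placeholder for steps S204-S205: in the patent, the display range is
    # derived from the posting counts in the scene information database.
    return f"R{i + 1}"

def generate_camera_work():
    work = [(0.0, "R_initial")]        # step S201: first element at position 0
    i = 0                              # step S202
    while T * i < TOTAL:               # loop until T*i reaches the total time (step S210)
        rng = choose_display_range(i)  # steps S203-S205
        pos = T * i + T / 2            # step S206: midpoint of the unit range
        # step S207 (interpolation between anchors) is omitted in this sketch
        work.append((pos, rng))        # step S208
        i += 1                         # step S209
    return work

print(generate_camera_work())
```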
  • FIG. 11 is a conceptual diagram showing another example of automatic generation processing of pseudo camera work data.
  • In FIG. 11, each partial area in the image frame F at a certain reproduction position is called a moving image block, and the numerical values shown in FIG. 11 are the numbers of postings of the respective moving image blocks.
  • First, the moving image block B1 having the largest number of postings is selected.
  • The selection of moving image blocks is then repeated until the display ratio exceeds, for example, 50%.
  • Among the moving image blocks adjacent to the selected moving image block B1, the block B3 having the largest number of postings is selected.
  • Next, the block B5 having the next largest number of postings after B3 is selected from the moving image blocks adjacent to the selected moving image block B1.
  • Then the moving image block B8 having the next largest number of postings after B3 and B5 is selected.
  • The minimum area including all the selected moving image blocks B1, B3, B5, and B8 is determined as the display range R41.
  • When the aspect ratio is fixed, the determined display range R41 is adjusted to a display range conforming to that aspect ratio.
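The FIG. 11 selection procedure can be sketched as follows. The data structures are hypothetical (a dictionary `posts` mapping a (row, column) moving image block to its number of postings), and one detail is a judgment call rather than taken from the text: candidates are drawn from the blocks adjacent to any already-selected block, which matches the example where B8 adjoins B3 and B5 rather than B1.

```python
# Hypothetical sketch of the FIG. 11 block selection (cf. steps S214-S217).
# `posts` maps a (row, col) moving image block to its number of postings.

def neighbors(block, blocks):
    """8-connected neighbors of `block` that exist in `blocks`."""
    r, c = block
    return [(r + dr, c + dc)
            for dr in (-1, 0, 1) for dc in (-1, 0, 1)
            if (dr, dc) != (0, 0) and (r + dr, c + dc) in blocks]

def select_display_range(posts, ratio=0.5):
    total = sum(posts.values())
    # Start from the block with the largest number of postings (B1 in FIG. 11).
    selected = {max(posts, key=posts.get)}
    # Repeat selection until the chosen blocks cover at least `ratio`
    # of all postings (the display ratio check of step S216).
    while sum(posts[b] for b in selected) / total < ratio:
        candidates = {n for b in selected for n in neighbors(b, posts)} - selected
        selected.add(max(candidates, key=posts.get))
    # Minimum area containing every selected block (cf. display range R41).
    rows = [r for r, _ in selected]
    cols = [c for _, c in selected]
    return (min(rows), min(cols), max(rows), max(cols))

# A 3x3 grid of posting counts in which the center block dominates.
counts = [1, 2, 1, 3, 10, 4, 1, 2, 1]
posts = {(r, c): counts[r * 3 + c] for r in range(3) for c in range(3)}
display_range = select_display_range(posts)
```

When the aspect ratio of the display is fixed, the rectangle returned here would additionally be adjusted to a range conforming to that aspect ratio, as described above.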
  • FIG. 10B is a flowchart showing another example of the automatic generation process of pseudo camera work data. This flowchart reflects the example shown in FIG. 11.
  • The start condition of the automatic generation process of pseudo camera work data shown in FIG. 10B is the same as that of the process shown in FIG. 10A.
  • The processing in steps S211 to S213 shown in FIG. 10B is the same as the processing in steps S201 to S203 shown in FIG. 10A.
  • In step S214 shown in FIG. 10B, the control unit 11 refers to the scene information database and selects the moving image block having the largest number of postings among the processing targets determined in step S213.
  • Next, the control unit 11 refers to the scene information database and, in the image frame F in which the moving image block selected in step S214 is arranged, selects the moving image block having the largest number of postings among the blocks adjacent to the block selected in step S214 (step S215). Note that a moving image block once selected in step S215 is excluded from subsequent selection.
  • Next, the control unit 11 refers to the scene information database and determines whether or not the display ratio is, for example, 0.5 or more (step S216).
  • This display ratio is the same as in the process of step S205 shown in FIG. 10A.
  • If the display ratio is less than 0.5 (step S216: NO), the process returns to step S215, and the moving image block having the next largest number of postings is selected from the moving image blocks adjacent to the block selected in step S214.
  • If the display ratio is 0.5 or more (step S216: YES), the process proceeds to step S217.
  • In step S217, the control unit 11 determines the minimum area including all the selected moving image blocks as the display range.
  • The processes in steps S218 to S222 shown in FIG. 10B are the same as the processes in steps S206 to S210 shown in FIG. 10A.
  • The pseudo camera work data generated by this automatic generation process is transmitted to the client 2 in step S122.
  • As described above, by using the scene information database, pseudo camera work data indicating a display range that includes frequently posted, high-attention scenes can be generated automatically, and the generated pseudo camera work data can be determined as appropriate pseudo camera work data to recommend to the user.
  • In the embodiment above, the client 2 receives content and the content's pseudo camera work data from the distribution server 1.
  • However, the present disclosure can also be applied to a case where the client 2 receives content and its pseudo camera work data from another client 2 in a hybrid-type or peer-type peer-to-peer network.
  • In this case, the client 2 functions as the information processing apparatus of the present disclosure.
  • The client 2 may also be connected to the storage device 3.
  • In this case, the client 2 reproduces the content acquired from the storage device 3 and displays the moving image according to the pseudo camera work data acquired from the storage device 3.

Landscapes

  • Engineering & Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

Provided are an information processing device and an information processing method with which camera work desired by a user can be retrieved efficiently. When a request for pseudo camera work data is received from a terminal device, the information processing device determines, on the basis of scene information stored in advance in association with partial areas in the image frames constituting a moving image, the partial area associated with the scene information corresponding to the search key included in the request, and determines pseudo camera work data indicating a display range included in the determined partial area as the pseudo camera work data to be provided to the user of the terminal device.
PCT/JP2014/054651 2013-02-27 2014-02-26 Information processing device and information processing method WO2014132988A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2013037497A JP5870944B2 (ja) 2013-02-27 Information processing device and information processing method
JP2013-037497 2013-02-27

Publications (1)

Publication Number Publication Date
WO2014132988A1 true WO2014132988A1 (fr) 2014-09-04

Family

ID=51428251

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2014/054651 WO2014132988A1 (fr) 2013-02-27 2014-02-26 Information processing device and information processing method

Country Status (2)

Country Link
JP (1) JP5870944B2 (fr)
WO (1) WO2014132988A1 (fr)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5967126B2 (ja) * 2014-03-28 2016-08-10 Brother Industries, Ltd. Terminal device and program
JP6388532B2 (ja) * 2014-11-28 2018-09-12 Fujitsu Limited Image providing system and image providing method
JP2018182428A (ja) * 2017-04-06 2018-11-15 Futurism Works Co., Ltd. Video distribution device, video distribution system, and video distribution method
JP6980496B2 (ja) 2017-11-21 2021-12-15 Canon Inc. Information processing device, information processing method, and program

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH0855131A (ja) * 1994-08-12 1996-02-27 Nippon Telegr & Teleph Corp <Ntt> Method and device for identifying an object in a moving image
JP2007013479A (ja) * 2005-06-29 2007-01-18 Matsushita Electric Ind Co Ltd Camera work information assignment and evaluation device
JP2008181515A (ja) * 1999-07-09 2008-08-07 Toshiba Corp Object area information description method, video information processing method, and information processing device


Also Published As

Publication number Publication date
JP5870944B2 (ja) 2016-03-01
JP2014164685A (ja) 2014-09-08

Similar Documents

Publication Publication Date Title
JP6558587B2 (ja) Information processing device, display device, information processing method, program, and information processing system
CN105745938B (zh) Multi-view audio and video interactive playback
US20180160194A1 (en) Methods, systems, and media for enhancing two-dimensional video content items with spherical video content
KR20160112898A (ko) Method and apparatus for providing augmented reality-based dynamic services
JP6787394B2 (ja) Information processing device, information processing method, and program
TWI617930B (zh) Spatial object search ranking method, system, and computer-readable storage device
US10970932B2 (en) Provision of virtual reality content
JP5870944B2 (ja) Information processing device and information processing method
WO2016098467A1 (fr) Information processing system, server, program, and information processing method
US10740618B1 (en) Tracking objects in live 360 video
US10061492B2 (en) Path-linked viewpoints from point of interest
US11474661B2 (en) Methods, systems, and media for presenting media content previews
JP6684306B2 (ja) Terminal device, moving image distribution device, program
WO2020079996A1 (fr) Information processing device, method, and program
JP6149967B1 (ja) Moving image distribution server, moving image output device, moving image distribution system, and moving image distribution method
JP2017108356A (ja) Image management system, image management method, program
JP6451013B2 (ja) Terminal device, moving image display method, and program
JP6336309B2 (ja) Terminal device, moving image distribution device, program
JP5791744B1 (ja) Terminal device, moving image display method, and program
JP5942932B2 (ja) Terminal device and program
JP6390932B2 (ja) Terminal device, moving image display method, and program
KR102372181B1 (ko) 전자 장치 및 그의 제어 방법
JPH11174950A (ja) Information processing apparatus and method, and computer-readable memory
JP6867541B1 (ja) Image display device and program
JP5791745B1 (ja) Moving image distribution device, moving image distribution method, and program

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 14757559

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 14757559

Country of ref document: EP

Kind code of ref document: A1