WO2016181780A1 - Content provision system, content provision device, and content provision method - Google Patents

Content provision system, content provision device, and content provision method

Info

Publication number
WO2016181780A1
Authority
WO
WIPO (PCT)
Prior art keywords
content
unit
information
image
image content
Prior art date
Application number
PCT/JP2016/062523
Other languages
French (fr)
Japanese (ja)
Inventor
正行 中里
平 張
友治 佐藤
Original Assignee
凸版印刷株式会社 (Toppan Printing Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 凸版印刷株式会社 (Toppan Printing Co., Ltd.)
Publication of WO2016181780A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics

Definitions

  • the present invention relates to a content providing system, a content providing apparatus, and a content providing method.
  • a technique for displaying content in 3D (three dimensions) using the parallax between a right-eye image and a left-eye image is widely known.
  • there is also a known technique for reading a two-dimensional barcode attached to a game card or magazine with a terminal such as a smartphone, and downloading and viewing content associated with the two-dimensional barcode (for example, see Patent Document 1).
  • the present invention has been made in view of the above circumstances, and provides a content providing system, a content providing apparatus, and a content providing method capable of providing three-dimensional image content obtained by synthesizing an image of an object selected by a user.
  • a first aspect of the present invention is a content providing system including a terminal device and a content providing device that provides three-dimensional image content to the terminal device. The terminal device includes: a transmission unit that transmits, to the content providing device, object identification information that identifies an object and arrangement position information from which a three-dimensional position where an image of the object is arranged can be acquired; and a display control unit that displays the three-dimensional image content received from the content providing device in correspondence with the transmitted object identification information and arrangement position information. The content providing device includes: a storage unit that stores the object identification information and object content that is the three-dimensional image content of the object specified by the object identification information; an object information receiving unit that receives the object identification information and the arrangement position information from the terminal device; a synthesizing unit that generates three-dimensional image content for distribution by synthesizing the object content corresponding to the received object identification information with background content, which is background three-dimensional image content, so that the object content is arranged at the three-dimensional position obtained from the arrangement position information; and a content transmitting unit that transmits the generated three-dimensional image content for distribution to the terminal device.
  • a second aspect of the present invention is a content providing apparatus including: a storage unit that stores object identification information and object content that is the three-dimensional image content of the object specified by the object identification information; an object information receiving unit that receives, from a terminal device, the object identification information and arrangement position information from which a three-dimensional position where an image of the object is arranged can be acquired; a synthesizing unit that generates three-dimensional image content for distribution by synthesizing the object content corresponding to the received object identification information with background content, which is background three-dimensional image content, so that the object content is arranged at the three-dimensional position obtained from the received arrangement position information; and a content transmitting unit that transmits the three-dimensional image content for distribution generated by the synthesizing unit to the terminal device.
  • a third aspect of the present invention is the above content providing apparatus further including a correction instruction receiving unit that receives, from the terminal device, a correction instruction instructing correction of the arrangement position or orientation of the object. The synthesizing unit may correct the position or orientation at which the object content is arranged in the three-dimensional image content for distribution according to the received correction instruction.
  • in a fourth aspect, the storage unit further stores attribute information indicating an attribute of the object in association with the object identification information, and the synthesizing unit corrects the three-dimensional position obtained from the arrangement position information according to the attribute information corresponding to the received object identification information, and may generate the three-dimensional image content for distribution synthesized so that the object content is arranged at the corrected position.
  • in a fifth aspect, the synthesizing unit may correct the orientation of the object content arranged in the three-dimensional image content for distribution according to the attribute information corresponding to the received object identification information.
  • a sixth aspect of the present invention is the content providing apparatus according to the fourth or fifth aspect, wherein the synthesizing unit may process an attribute relating to the display of the three-dimensional image content for distribution according to the attribute information corresponding to the received object identification information.
  • in a seventh aspect, a viewing direction information receiving unit that receives, from the terminal device, viewing direction information indicating the orientation, or the orientation and position, of the user is further provided, and the content transmitting unit may extract, from the three-dimensional image content for distribution, the three-dimensional image content of the region corresponding to the orientation, or the orientation and position, of the user indicated by the viewing direction information, and transmit it to the terminal device.
  • in an eighth aspect, when the synthesizing unit receives a combination of predetermined object identification information, it may process the three-dimensional image content for distribution according to the combination.
  • a further aspect of the present invention is a content providing method executed by the content providing apparatus, including: an object information receiving step in which the object information receiving unit receives, from the terminal device, object identification information specifying an object and arrangement position information from which a three-dimensional position where an image of the object is arranged can be acquired; a synthesizing step in which the synthesizing unit generates three-dimensional image content for distribution by synthesizing the object content, which is the three-dimensional image content of the object identified by the received object identification information, with background content, which is background three-dimensional image content, so that the object content is arranged at the three-dimensional position obtained from the received arrangement position information; and a content transmitting step in which the content transmitting unit transmits the generated three-dimensional image content for distribution to the terminal device.
  • FIG. 1 is a configuration diagram of a content providing system according to an embodiment of the present invention. FIG. 2 is a functional block diagram of the content providing apparatus according to the embodiment. FIG. 3 is an external view of an HMD (head mounted display) according to the embodiment. FIG. 4 is a functional block diagram of the terminal device according to the embodiment. FIG. 5 is a diagram showing the object content management information according to the embodiment. FIG. 6 is a flowchart of the content request process in the terminal device according to the embodiment. FIG. 7 is a flowchart of the content provision process in the content providing apparatus according to the embodiment. FIG. 8 is a diagram showing a situation in which a user uses the HMD according to the embodiment.
  • FIG. 1 is a configuration diagram of a content providing system according to an embodiment of the present invention.
  • the content providing system includes a content providing apparatus 1, a head mounted display (hereinafter referred to as “HMD”) 2, and a medium 5.
  • in the figure, only one HMD 2 and one medium 5 are shown, but in practice there are a plurality of each. Further, a plurality of content providing apparatuses 1 may be provided.
  • the content providing apparatus 1 distributes 3D image content.
  • the three-dimensional image content is content data that displays an image such as a moving image or a still image in three dimensions.
  • the right eye image is displayed on the right half of the display screen and the left eye image is displayed on the left half.
  • HMD2 is an example of a stereoscopic device used for viewing 3D image content.
  • the head mounted display 2 uses the terminal device 3 as a display device.
  • the stereoscopic device may be glasses used for viewing 3D image content displayed on the terminal device 3.
  • the terminal device 3 is a computer terminal having a communication function such as a smartphone or a tablet terminal, and displays the 3D image content distributed from the content providing device 1 on a display.
  • the content providing device 1 and the terminal device 3 are connected via a network 9 such as the Internet.
  • an object ID (object identification information) for identifying an object is recorded on the medium 5.
  • the background content is a 3D image content of a moving image or a still image as a background.
  • for example, the background is a room, and the objects are a chair, table, lighting, curtain, painting, or the like installed in the room.
  • alternatively, the background is a place where a game or story unfolds, and the objects are characters appearing in the game or story.
  • an object may be not only a target whose image is actually displayed, but also a factor that changes an attribute related to image display. For example, a sun object brightens the image.
  • the medium 5 is, for example, a card, tag, sticker, marker, or the like on which a character string representing an object ID or an image (for example, a two-dimensional barcode) is printed.
  • when a card or the like is used as the medium 5, a photograph or picture of the object, the object name, a description of the object, or the like is printed or written on it so that the user can easily recognize which object ID can be obtained from the medium 5.
  • a card, tag, sticker, or marker serving as the medium 5 may be affixed to the actual object (for example, furniture) specified by the object ID obtained from the medium 5.
  • the medium 5 may be an RFID (Radio Frequency Identification) tag storing an object ID.
  • the RFID storing the object ID is attached to, for example, a card or a real object.
  • FIG. 2 is a functional block diagram showing the configuration of the content providing apparatus 1, and only functional blocks related to the present embodiment are extracted and shown.
  • the content providing apparatus 1 is realized by one or a plurality of computer servers, and includes a storage unit 11, a communication unit 12, and a processing unit 13.
  • the storage unit 11 stores various pieces of information such as background content or object content management information.
  • the object content management information is data in which an object ID, object content, and object attribute information are associated with each other.
  • the object content is a three-dimensional image content of a moving image or a still image of the object.
  • the attribute information indicates an attribute related to the arrangement position of the object, or an attribute related to image display on which processing is to be performed.
  • the communication unit 12 transmits and receives data to and from other devices via the network 9.
  • the processing unit 13 includes a viewing direction information receiving unit 131, an object information receiving unit 132, a synthesizing unit 133, a content transmitting unit 134, a correction instruction receiving unit 135, and a deletion instruction receiving unit 136.
  • the viewing direction information reception unit 131 receives viewing direction information indicating the viewing direction of the user, that is, the direction in which the user is facing, from the terminal device 3.
  • the viewing direction information may include information on the current position of the user.
  • the current position may be a geographical absolute position or a relative position based on a predetermined position in the background content image.
  • the object information receiving unit 132 receives, from the terminal device 3, an object ID and arrangement position information from which the three-dimensional position where the image of the object is arranged can be acquired. In the arrangement position information, a geographical absolute position may be set, or a relative position or direction based on a predetermined position in the background content image may be set.
  • the synthesizing unit 133 synthesizes the object content corresponding to the received object ID with the background content so that the object content is arranged at the three-dimensional position acquired from the received arrangement position information, and thereby generates three-dimensional image content for distribution. Note that when the arrangement position information indicates a geographical absolute position, the synthesizing unit 133 converts the absolute position into a position in the image of the background content.
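  • as a concrete illustration of the synthesizing unit 133, the following is a minimal sketch in Python under simplified assumptions; the names Vec3, PlacedObject, DistributionContent, and geo_to_scene are illustrative and not taken from the patent.
```python
from dataclasses import dataclass, field

@dataclass
class Vec3:
    x: float
    y: float
    z: float

@dataclass
class PlacedObject:
    object_id: str        # kept so later correction/deletion can address it
    content: bytes        # the object's 3D image content
    position: Vec3
    yaw_deg: float = 0.0  # orientation around the vertical axis

@dataclass
class DistributionContent:
    background: bytes                       # background 3D image content
    objects: list = field(default_factory=list)

def geo_to_scene(lat: float, lon: float, origin: tuple) -> Vec3:
    """Convert a geographical absolute position into a position in the
    background content's coordinate system (flat-earth approximation,
    ignoring the latitude dependence of longitude degrees)."""
    meters_per_deg = 111_320.0
    return Vec3(x=(lon - origin[1]) * meters_per_deg,
                y=0.0,
                z=(lat - origin[0]) * meters_per_deg)

def synthesize(dist: DistributionContent, object_id: str, content: bytes,
               position: Vec3) -> DistributionContent:
    """Arrange the object content at the given 3D position in the
    distribution content (the superimposition of step S230)."""
    dist.objects.append(PlacedObject(object_id, content, position))
    return dist
```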
  • the content transmission unit 134 transmits the 3D image content extracted based on the viewing direction information from the 3D image content for distribution generated by the combining unit 133 to the terminal device 3.
  • the correction instruction receiving unit 135 receives a correction instruction for instructing correction of the arrangement position or orientation of the object content superimposed on the distribution 3D image content.
  • the synthesizing unit 133 corrects the position or orientation in which the object content is arranged in the three-dimensional image content for distribution in accordance with the received correction instruction.
  • the deletion instruction receiving unit 136 receives a deletion instruction that instructs to delete the object content superimposed on the distribution 3D image content.
  • the composition unit 133 deletes the object content instructed to be deleted by the deletion instruction from the distribution 3D image content.
  • FIG. 3 is an external view of the HMD 2.
  • the figure shows an example of the HMD 2 when a smartphone is used as the terminal device 3.
  • the right-eye lens 21 included in the HMD 2 is a lens for viewing the right-eye image displayed on the right side of the display by the terminal device 3.
  • the left-eye lens 22 provided in the HMD 2 is a lens for viewing the left-eye image displayed on the left side of the display by the terminal device 3.
  • a partition 23 is attached to the HMD 2 so that the left-eye image cannot be seen through the right-eye lens 21 and the right-eye image cannot be seen through the left-eye lens 22.
  • the user sets the terminal device 3 so that the edge of the partition 23 opposite the lenses lies near the boundary between the right-eye image and the left-eye image displayed on the screen of the terminal device 3.
  • the user views the 3D image content displayed by the terminal device 3 through the right-eye lens 21 and the left-eye lens 22.
  • FIG. 4 is a functional block diagram showing a configuration of the terminal device 3, and only functional blocks related to the present embodiment are extracted and shown.
  • the terminal device 3 is, for example, a smartphone, but may be a tablet terminal, a mobile phone terminal, a portable personal computer, or the like.
  • the terminal device 3 includes an input unit 31, a processing unit 32, a detection unit 33, an imaging unit 34, a communication unit 35, and a display unit 36.
  • when the medium 5 is an RFID, the terminal device 3 includes a tag reader.
  • the input unit 31 receives information input by a user operation.
  • the processing unit 32 includes a viewing direction information acquisition unit 321, an object identification acquisition unit 322, an arrangement position information acquisition unit 323, a transmission unit 324, a content reception unit 325, and a display control unit 326.
  • the viewing direction information acquisition unit 321 acquires viewing direction information.
  • the object identification acquisition unit 322 acquires the object ID recorded on the medium 5.
  • the arrangement position information acquisition unit 323 acquires arrangement position information representing a three-dimensional position where an object image is arranged.
  • the transmission unit 324 transmits various types of information such as viewing direction information, object ID, and arrangement position information to the content providing apparatus 1.
  • the transmission unit 324 transmits a correction instruction and a deletion instruction to the content providing apparatus 1 based on the user operation input by the input unit 31.
  • the content receiving unit 325 receives 3D image content from the content providing apparatus 1.
  • the display control unit 326 displays various data such as 3D image content on the display unit 36.
  • the detection unit 33 is a sensor that detects a direction.
  • the detection unit 33 may include a GPS (Global Positioning System) that obtains information on the current position.
  • the imaging unit 34 is a camera.
  • the communication unit 35 transmits and receives information via the network 9.
  • the display unit 36 is a display and displays data. When the terminal device 3 is a smartphone, the display unit 36 is a touch panel, and the input unit 31 is a sensor arranged on the touch panel.
  • FIG. 5 is a diagram showing a data configuration example of the object content management information.
  • the object content management information shown in the figure is information in which an object ID, an object name, object content, and attribute information are associated with each other. Only one of the object content and the attribute information may be set corresponding to one object ID.
  • the attribute information includes information on an arrangement position attribute and an image processing attribute.
  • the placement position attribute indicates an attribute related to the position where the object is placed.
  • the image processing attribute indicates how an attribute relating to image display is affected (what processing is performed). Only one of the arrangement position attribute and the image processing attribute may be set in the attribute information.
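  • the following is a hedged sketch of how one record of the object content management information of FIG. 5 might be represented, assuming a simple in-memory table; the field names and sample values are illustrative.
```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AttributeInfo:
    placement: Optional[str] = None          # e.g. "floor", "wall", "ceiling", "window"
    image_processing: Optional[str] = None   # e.g. "one step brighter"

@dataclass
class ObjectRecord:
    object_id: str
    name: str
    content: Optional[bytes]             # object content; may be absent
    attributes: Optional[AttributeInfo]  # attribute information; may be absent

# Sample table; the IDs and names follow the examples in FIG. 9, the
# content bytes are placeholders.
OBJECT_TABLE = {
    "00001": ObjectRecord("00001", "chair", b"<3d-content>",
                          AttributeInfo(placement="floor")),
    "00004": ObjectRecord("00004", "lighting 1", b"<3d-content>",
                          AttributeInfo(placement="ceiling",
                                        image_processing="one step brighter")),
}
```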
  • FIG. 6 is a flowchart of content request processing in the terminal device 3.
  • the viewing direction information acquisition unit 321 acquires viewing direction information (step S105).
  • the viewing direction information acquisition unit 321 acquires information on the direction and the current position from the detection unit 33 and sets it as viewing direction information.
  • the viewing direction information acquisition unit 321 may also obtain the current position without using the GPS that constitutes the detection unit 33. For example, it may receive position information by short-range communication (WiFi (registered trademark), Bluetooth (registered trademark), visible light communication, infrared communication, NFC (Near Field Communication), etc.) from a communication device provided on a wall or floor of the building where the user is currently located, or on a signboard or the like installed in the place where the user is located, and use it as the current position information. Alternatively, the imaging unit 34 may capture characters or images (two-dimensional barcodes, etc.) representing position information drawn on a wall, floor, signboard, or the like, and the viewing direction information acquisition unit 321 may acquire the position information from the captured image data and use it as the current position information.
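  • a sketch of how the terminal might assemble the viewing direction information from these sources is shown below; the sensor-reading functions are placeholders for the detection unit 33 and the short-range communication described above, not a real device API.
```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class ViewingDirectionInfo:
    yaw_deg: float                           # direction the user is facing
    position: Optional[Tuple[float, float]]  # (lat, lon), or None if unknown

def read_compass_yaw() -> float:
    """Placeholder for the orientation sensor of the detection unit 33."""
    return 90.0

def read_gps() -> Optional[Tuple[float, float]]:
    """Placeholder for GPS; may return None indoors."""
    return None

def read_short_range_position() -> Optional[Tuple[float, float]]:
    """Placeholder for a position announced by a nearby communication
    device over WiFi, Bluetooth, visible light, infrared, or NFC."""
    return (35.6895, 139.6917)

def acquire_viewing_direction() -> ViewingDirectionInfo:
    # Prefer GPS; fall back to a position received by short-range communication.
    position = read_gps() or read_short_range_position()
    return ViewingDirectionInfo(yaw_deg=read_compass_yaw(), position=position)
```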
  • the transmission unit 324 transmits the viewing direction information acquired by the viewing direction information acquisition unit 321 to the content providing apparatus 1 (step S110).
  • the content receiving unit 325 receives the 3D image content from the content providing apparatus 1 (step S115).
  • the display control unit 326 displays the received 3D image content on the display unit 36 (step S120).
  • when no imaging instruction has been input (step S125: NO), the processing unit 32 performs the process of step S145 described later.
  • when an imaging instruction has been input (step S125: YES), the object identification acquisition unit 322 acquires the object ID recorded on the medium 5 (step S130).
  • the imaging unit 34 captures characters and images representing the object ID printed on the medium 5.
  • the object identification acquisition unit 322 acquires the object ID from the captured image data.
  • when the medium 5 is an RFID, the object identification acquisition unit 322 acquires the object ID read from the medium 5 by the tag reader.
  • when the medium 5 is a communication device, the object identification acquisition unit 322 receives the object ID transmitted from the medium 5 by short-range communication.
  • the arrangement position information acquisition unit 323 acquires arrangement position information (step S135).
  • the arrangement position information acquisition unit 323 acquires information on the current position from the GPS constituting the detection unit 33 and uses it as arrangement position information.
  • the arrangement position information acquisition unit 323 may use the user orientation information acquired from the detection unit 33 as the arrangement position information.
  • when the arrangement position information indicates the direction of the user, the position advanced by a predetermined distance in that direction from a predetermined three-dimensional position in the image of the background content (or the three-dimensional image content for distribution) is used as the arrangement position.
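  • the position calculation described above might look like the following sketch, which reuses the illustrative Vec3 type from the earlier sketch; the yaw convention and the distance value are assumptions.
```python
import math

def position_from_orientation(base: "Vec3", yaw_deg: float,
                              distance: float) -> "Vec3":
    """Advance `distance` in the horizontal direction given by `yaw_deg`
    (0 degrees = +z axis) from the predetermined position `base`."""
    rad = math.radians(yaw_deg)
    return Vec3(x=base.x + distance * math.sin(rad),
                y=base.y,
                z=base.z + distance * math.cos(rad))

# e.g. a point 3 m in front of the scene's center position:
# placement = position_from_orientation(Vec3(0.0, 0.0, 0.0), 90.0, 3.0)
```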
  • the arrangement position information acquisition unit 323 may also receive position information by short-range communication from a communication device provided on a wall or floor of the building where the user is currently present, or on a signboard or the like installed in the place where the user is present, and use it as the arrangement position information.
  • alternatively, the imaging unit 34 may capture characters or images indicating position information drawn on a poster or marker attached to a wall or floor of the building where the user is present, or on a signboard or the like installed in the place where the user is present.
  • in this case, the arrangement position information acquisition unit 323 acquires the position information from the captured image data and uses it as the arrangement position information.
  • the transmission unit 324 transmits the object ID acquired by the object identification acquisition unit 322 and the arrangement position information acquired by the arrangement position information acquisition unit 323 to the content providing apparatus 1 (step S140).
  • the processing unit 32 of the terminal device 3 repeats the processing from step S115.
  • the transmission unit 324 then determines whether an instruction to correct the position or orientation of an object has been input (step S145).
  • when no correction instruction has been input (step S145: NO), the transmission unit 324 performs the process of step S155 described later.
  • when a correction instruction has been input (step S145: YES), the transmission unit 324 transmits the input correction instruction to the content providing apparatus 1 (step S150). For example, when the user wants to move the position of an object image, the user touches the object image displayed on the touch panel with a finger and moves it in the desired direction (drag operation). The transmission unit 324 transmits to the content providing apparatus 1 a correction instruction in which the object ID of the object whose image is displayed at the touched position on the screen, the movement direction, and the movement amount are set. The movement amount is information according to the distance that the finger is moved while touching the touch panel. Alternatively, while wearing the HMD 2, the user shakes his or her head in the direction in which the object is to be moved.
  • the sensor included in the detection unit 33 detects the direction in which the head is shaken, and the transmission unit 324 transmits a correction instruction in which the detected direction is set as the movement direction to the content providing apparatus 1.
  • the transmission unit 324 may set, as the correction instruction, a movement amount corresponding to the speed at which the head is shaken or the distance at which the head is shaken.
  • alternatively, the terminal device 3 may be provided with a sensor that detects the user's line of sight, and the direction in which the user moves the line of sight may be detected.
  • the transmission unit 324 transmits to the content providing apparatus 1 the object ID of the object whose image is displayed at the position where the user's line of sight rested, and a correction instruction in which the detected direction is set as the movement direction.
  • the transmission unit 324 may set a movement amount according to the distance that the line of sight has moved to the correction instruction.
  • when the user wants to rotate the orientation of an object image, the user performs an operation such as tapping the object image displayed on the touch panel.
  • the transmission unit 324 transmits the object ID of the object whose image is displayed at the tapped position on the screen and the correction instruction that sets the rotation to the content providing apparatus 1.
  • the processing unit 32 of the terminal device 3 repeats the processing from step S115.
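  • the correction instructions described in the preceding paragraphs might be assembled from gestures as in the following sketch; the message fields and gesture handling are illustrative assumptions, not the patent's wire format.
```python
from dataclasses import dataclass

@dataclass
class CorrectionInstruction:
    object_id: str
    kind: str                     # "move" or "rotate"
    move_dir: tuple = (0.0, 0.0)  # unit vector on the screen plane
    move_amount: float = 0.0      # proportional to the drag distance

def from_drag(object_id: str, start: tuple, end: tuple) -> CorrectionInstruction:
    """Build a move instruction from a drag gesture (step S150)."""
    dx, dy = end[0] - start[0], end[1] - start[1]
    dist = (dx ** 2 + dy ** 2) ** 0.5
    direction = (dx / dist, dy / dist) if dist else (0.0, 0.0)
    return CorrectionInstruction(object_id, "move", direction, dist)

def from_tap(object_id: str) -> CorrectionInstruction:
    """A tap requests rotation by a predetermined angle on the server side."""
    return CorrectionInstruction(object_id, "rotate")
```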
  • after determining NO in step S145, the transmission unit 324 determines whether an object deletion instruction has been input (step S155).
  • when the transmission unit 324 determines that an object deletion instruction has been input (step S155: YES), the transmission unit 324 transmits the input deletion instruction to the content providing apparatus 1 (step S160).
  • for example, the user touches the image of the object to be deleted displayed on the touch panel with a finger and flicks it toward the outside of the screen, or double-taps the image of the object to be deleted.
  • the transmission unit 324 transmits to the content providing apparatus 1 a deletion instruction in which the object ID of the object for which such a deletion operation has been performed is set.
  • the processing unit 32 of the terminal device 3 repeats the processing from step S115.
  • when no deletion instruction has been input (step S155: NO), the processing unit 32 determines whether an instruction to end the processing has been input via the input unit 31 (step S165). If the processing unit 32 determines that the end of processing has not been input (step S165: NO), it repeats the processing from step S105. When the processing unit 32 determines that the end of processing has been input (step S165: YES), it ends the processing.
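  • the flow of FIG. 6 can be summarized by the following sketch, assuming server and terminal objects that expose the operations described above; the step numbers refer to the flowchart.
```python
def content_request_loop(server, terminal):
    """Condensed form of FIG. 6; `server` and `terminal` are assumed to
    expose the operations described in the text."""
    while True:
        info = terminal.acquire_viewing_direction()        # S105
        server.send_viewing_direction(info)                # S110
        terminal.display(server.receive_content())         # S115-S120
        if terminal.imaging_instruction_input():           # S125
            object_id = terminal.read_object_id()          # S130
            placement = terminal.acquire_placement_info()  # S135
            server.send_object(object_id, placement)       # S140
        elif terminal.correction_input():                  # S145
            server.send_correction(terminal.correction())  # S150
        elif terminal.deletion_input():                    # S155
            server.send_deletion(terminal.deletion())      # S160
        elif terminal.end_input():                         # S165
            break
```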
  • FIG. 7 is a flowchart of content providing processing in the content providing apparatus 1.
  • the composition unit 133 of the content providing apparatus 1 determines whether the viewing direction information receiving unit 131 has received viewing direction information from the terminal device 3 (step S205).
  • when the synthesizing unit 133 determines that the viewing direction information receiving unit 131 has not received the viewing direction information (step S205: NO), the synthesizing unit 133 performs the process of step S220 described later.
  • when the viewing direction information has been received (step S205: YES), the synthesizing unit 133 determines whether the three-dimensional image content for distribution has already been generated (step S210). When it is determined that it has been generated (step S210: YES), the synthesizing unit 133 performs the process of step S220 described later.
  • when it is determined that the three-dimensional image content for distribution has not yet been generated (step S210: NO), the synthesizing unit 133 reads the background content from the storage unit 11 and sets it as the three-dimensional image content for distribution (step S215). Note that information specifying the background content may be further received from the terminal device 3.
  • in that case, the synthesizing unit 133 reads the background content specified by the received information from the storage unit 11 and sets it as the three-dimensional image content for distribution.
  • next, the synthesizing unit 133 determines whether the object information receiving unit 132 has received an object ID and arrangement position information (step S220). When the synthesizing unit 133 determines that the object information receiving unit 132 has not received them (step S220: NO), it performs the process of step S245 described later.
  • when the synthesizing unit 133 determines that the object information receiving unit 132 has received the object ID and the arrangement position information (step S220: YES), it reads the object content and attribute information corresponding to the received object ID from the object content management information stored in the storage unit 11 (step S225).
  • the synthesizing unit 133 obtains a three-dimensional arrangement position in the distribution three-dimensional image content from the received arrangement position information.
  • when the arrangement position information indicates a geographical absolute position, the synthesizing unit 133 converts the position information into a position in the background image.
  • the synthesizing unit 133 superimposes (synthesizes) the read object content on the current three-dimensional image content for distribution so that it is arranged at that arrangement position, and generates the updated three-dimensional image content for distribution (step S230).
  • the object ID of the object content is added to the object content superimposed on the 3D image content for distribution.
  • the composition unit 133 corrects the three-dimensional arrangement position of the object content arranged in step S230 according to the arrangement position attribute indicated by the read attribute information (step S235).
  • the composition unit 133 may correct the placement position indicated by the received placement position information according to the placement position attribute before composition.
  • the synthesizing unit 133 then superimposes (synthesizes) the object content on the current three-dimensional image content for distribution so that the object content is arranged at the corrected arrangement position, and generates the updated three-dimensional image content for distribution.
  • the synthesis unit 133 may further correct the orientation of the object content according to the arrangement position attribute.
  • the synthesizing unit 133 performs processing according to the image processing attribute indicated by the read attribute information on the attribute relating to the display of the generated distribution 3D image content (step S240).
  • when no object content is read in step S225, the synthesizing unit 133 does not perform the processes of steps S230 and S235. Further, when the arrangement position attribute is not set in the attribute information, the synthesizing unit 133 does not perform the process of step S235, and when the image processing attribute is not set in the attribute information, it does not perform the process of step S240.
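  • steps S225 to S240 might be combined as in the following sketch, reusing the illustrative types from the earlier sketches; snap_to_placement and apply_image_processing are assumed helpers, sketched in simplified form further below.
```python
def handle_object(dist, record, requested_pos):
    """Steps S225-S240 for one received object ID: superimpose the object
    content, correct its position by the placement attribute, and apply
    any image processing attribute."""
    if record.content is not None:
        placed = PlacedObject(record.object_id, record.content, requested_pos)
        dist.objects.append(placed)                                       # S230
        if record.attributes and record.attributes.placement:
            snap_to_placement(placed, record.attributes.placement, dist)  # S235
    if record.attributes and record.attributes.image_processing:
        apply_image_processing(dist, record.attributes.image_processing)  # S240
    return dist
```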
  • the composition unit 133 determines whether or not the correction instruction receiving unit 135 has received a correction instruction from the terminal device 3 after determining NO in step S220 or after the process of step S240 (step S245). When it is determined that the correction instruction receiving unit 135 has not received the correction instruction (step S245: NO), the synthesizing unit 133 performs a process of step S255 described later.
  • when it is determined that the correction instruction receiving unit 135 has received a correction instruction (step S245: YES), the synthesizing unit 133 corrects the arrangement position or orientation of the object content in the three-dimensional image content for distribution according to the correction instruction (step S250). Specifically, the synthesizing unit 133 corrects the current arrangement position of the object content in the three-dimensional image content for distribution according to the movement direction and the movement amount indicated by the correction instruction.
  • when an object ID is set in the correction instruction, the synthesizing unit 133 moves the arrangement position only for the object content specified by that object ID.
  • when no movement amount is set in the correction instruction, the synthesizing unit 133 moves the arrangement position by a fixed movement amount.
  • when the correction instruction indicates rotation, the synthesizing unit 133 rotates the orientation of the object content specified by the object ID set in the correction instruction, among the object contents superimposed on the three-dimensional image content for distribution, by a predetermined angle in a predetermined direction.
  • the synthesizing unit 133 determines whether the deletion instruction receiving unit 136 has received a deletion instruction from the terminal device 3 after determining NO in step S245 or after the process of step S250 (step S255). When it is determined that the deletion instruction receiving unit 136 has not received a deletion instruction (step S255: NO), the synthesizing unit 133 performs the process of step S265 described later. On the other hand, when determining that the deletion instruction receiving unit 136 has received a deletion instruction (step S255: YES), the synthesizing unit 133 deletes the object content specified by the object ID set in the deletion instruction from the object contents superimposed on the three-dimensional image content for distribution (step S260).
  • next, the content transmission unit 134 performs the process of step S265. That is, the content transmission unit 134 extracts, from the three-dimensional image content for distribution, the three-dimensional image content of the region (part) corresponding to the orientation and position of the user indicated by the viewing direction information received from the terminal device 3 (step S265). When the user position is not set in the viewing direction information, a predetermined position such as the center position of the background content is used. The content transmission unit 134 then transmits the extracted three-dimensional image content to the terminal device 3 (step S270).
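  • a minimal sketch of the extraction in step S265 follows; a real implementation would render a view frustum from the distribution content, so the horizontal field-of-view filtering of placed objects below is a stand-in assumption.
```python
import math

def extract_region(dist, user_pos, user_yaw_deg, fov_deg=90.0):
    """Return the placed objects that fall within the user's horizontal
    field of view (step S265)."""
    visible = []
    for obj in dist.objects:
        dx = obj.position.x - user_pos.x
        dz = obj.position.z - user_pos.z
        bearing = math.degrees(math.atan2(dx, dz))
        # Smallest signed angle between the bearing and the user's yaw.
        delta = (bearing - user_yaw_deg + 180.0) % 360.0 - 180.0
        if abs(delta) <= fov_deg / 2.0:
            visible.append(obj)
    return visible
```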
  • note that the imaging unit 34 of the terminal device 3 may continuously capture moving image data, and the object identification acquisition unit 322 may detect a character or image representing an object ID from the moving image data being captured and acquire the object ID from the detected information, even if the user does not input an object ID acquisition instruction through the input unit 31.
  • in this case, the arrangement position information acquisition unit 323 also acquires the arrangement position information without the user inputting an instruction through the input unit 31, and the transmission unit 324 transmits the object ID acquired by the object identification acquisition unit 322 and the arrangement position information acquired by the arrangement position information acquisition unit 323 to the content providing apparatus 1.
  • that is, while the imaging unit 34 is continuously capturing images, the terminal device 3 repeats, without any instruction input by the user, the process of detecting an object ID from the moving image data and transmitting the detected object ID and the arrangement position information at the time the object ID was detected to the content providing apparatus 1. Thereby, without performing an operation each time, the user can view three-dimensional image content to which objects are successively added simply by holding the medium 5 over the imaging unit 34 of the terminal device 3 or pointing the imaging unit 34 toward the medium 5.
  • FIG. 8 is a diagram showing a situation in which the user uses the HMD 2.
  • in this example, the user is in a room R having the same floor plan as the room in which the furniture is to be placed.
  • a marker M printed with position information is attached to the floor or wall.
  • FIG. 9 is a diagram illustrating an example of setting object content management information.
  • information on furniture and interior items such as chairs, curtains, and clocks, or on objects such as the sun that affect the state of the room, is set in the object content management information.
  • for example, the placement position attribute is “floor” for an object placed on the floor, “window” for an object attached to a window, “wall” for an object hung on a wall, and “ceiling” for an object attached to the ceiling.
  • how to process an image is set in the image processing attribute of an object such as lighting or the sun that affects the brightness of the image.
  • the orientation of the object may be derived from the arrangement position attribute.
  • in the case of the arrangement position attribute “floor”, the object content is arranged so that its orientation matches the room; in the case of “window”, the object content is arranged parallel to the window; and in the case of “wall”, the object content is arranged parallel to the wall. Note that the orientation of the object may be explicitly set in the arrangement position attribute.
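  • the placement-attribute correction described above (step S235) might look like the following sketch; the scene geometry queries (floor and ceiling heights, nearest wall or window plane) are illustrative assumptions about the scene representation.
```python
def snap_to_placement(placed, attribute, scene):
    """Correct the arrangement position (and orientation) of a placed
    object according to its placement attribute (step S235)."""
    if attribute == "floor":
        placed.position.y = scene.floor_y    # lower end on the floor
        placed.yaw_deg = scene.room_yaw_deg  # same orientation as the room
    elif attribute == "ceiling":
        placed.position.y = scene.ceiling_y - object_height(placed)
    elif attribute in ("wall", "window"):
        plane = scene.nearest_plane(placed.position, kind=attribute)
        placed.position = plane.closest_point(placed.position)
        placed.yaw_deg = plane.yaw_deg       # parallel to the plane

def object_height(placed) -> float:
    """Illustrative stand-in for the object's bounding-box height."""
    return 1.0
```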
  • FIG. 10 is a diagram illustrating a generation example of the distribution 3D image content.
  • the viewing direction information acquisition unit 321 of the terminal device 3 used as the display device of the HMD 2 transmits the user orientation and current position information acquired by the detection unit 33 to the content providing device 1 as viewing direction information (steps S105 and S110).
  • when the viewing direction information receiving unit 131 of the content providing apparatus 1 receives the viewing direction information from the terminal device 3 (step S205: YES), the synthesizing unit 133 reads from the storage unit 11 the background content B1 of the room R corresponding to the current position indicated by the viewing direction information.
  • the composition unit 133 sets the background content B1 as the distribution three-dimensional image content G11 as shown in FIG. 10A (step S210: NO, step S215).
  • the background content B1 is a three-dimensional image content generated by photographing the room R in advance. By designating the position and orientation from the background content B1, it is possible to acquire the 3D image content when viewed from the designated position in the designated direction.
  • the content transmission unit 134 extracts the three-dimensional image content in the area corresponding to the position and orientation of the user indicated by the received viewing direction information from the three-dimensional image content for distribution G11 (step S265), and transmits it to the terminal device 3 (step S270).
  • the display control unit 326 of the terminal device 3 displays the 3D image content received by the content receiving unit 325 from the content providing device 1 on the display unit 36 (steps S115 to S120). Thereby, the user can view the 3D image content when the room R is viewed from the current position.
  • the viewing direction information acquisition unit 321 of the terminal device 3 transmits the user orientation and current position information newly acquired by the detection unit 33 to the content providing device 1 as viewing direction information.
  • the viewing direction information receiving unit 131 of the content providing apparatus 1 receives new viewing direction information from the terminal device 3 (step S205).
  • the content transmission unit 134 extracts the 3D image content of the area corresponding to the position and orientation of the user indicated by the newly received viewing direction information from the already generated 3D image content for distribution G11 (step S265). Then, the data is transmitted to the terminal device 3 (step S270).
  • the display control unit 326 of the terminal device 3 displays the 3D image content received by the content receiving unit 325 from the content providing device 1 on the display unit 36 (steps S115 to S120). Thereby, the user can view the 3D image content when the room R is viewed from the moved position.
  • when the user moves to a place in the room R where furniture is to be installed, the user selects a card (medium 5) on which a photograph of the furniture to be installed and the object ID are printed, and inputs an imaging instruction via the input unit 31 of the terminal device 3 (step S125: YES).
  • the object identification acquisition unit 322 acquires the object ID from the image data captured by the imaging unit 34 (step S130). Further, the user images the marker M of the place where the furniture is to be installed by the imaging unit 34 of the terminal device 3.
  • the arrangement position information acquisition unit 323 acquires the position information from the data of the image captured by the imaging unit 34 and sets it as the arrangement position information (step S135).
  • the transmission unit 324 transmits the acquired object ID and arrangement position information to the content providing apparatus 1 (step S140).
  • the synthesizing unit 133 reads the object content and attribute information corresponding to the object ID (step S225).
  • for example, when the received object ID is “00001”, the object content C11 of the object name “chair” and the attribute information indicating the arrangement position attribute “floor” are read from the object content management information illustrated in FIG. 9.
  • the composition unit 133 superimposes (synthesizes) the object content C11 on the current delivery 3D image content G11 to generate the delivery 3D image content G12 shown in FIG. 10B (step S230).
  • the object content C11 is superimposed on the image of the distribution three-dimensional image content G11 so as to be arranged at a three-dimensional arrangement position indicated by the arrangement position information.
  • the synthesizing unit 133 corrects the arrangement position of the object content C11 in the generated distribution 3D image content G12 so as to coincide with the arrangement position attribute “floor” set in the attribute information.
  • the object content C11 may initially be arranged such that its lower end is above the floor height of the three-dimensional image content for distribution G12. Therefore, the synthesizing unit 133 corrects the arrangement position of the object content C11 in the three-dimensional image content for distribution G12 so that the lower end of the object content C11 coincides with the height of the floor (step S235). Furthermore, as illustrated in (c) of FIG. 10, the synthesizing unit 133 may rotate the object content C11 so that the orientation of the room in the background content B1 (or the three-dimensional image content for distribution G12) and the orientation of the object content C11 match.
  • the composition unit 133 similarly corrects the arrangement position including the direction even when an object content such as a carpet with an arrangement position attribute “floor” is superimposed. Further, for example, when the object content such as a clock where the placement position attribute is “wall” is superimposed, the composition unit 133 corrects the placement position of the object content to the position of the wall in the 3D image content for distribution, Rotate the direction to be parallel.
  • the synthesizing unit 133 may generate the three-dimensional image content for distribution after correcting the arrangement position of the object content C11, or may generate three-dimensional image content for distribution containing a moving image that moves the object content C11 from the initial arrangement position to the corrected arrangement position.
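  • the optional transition could be produced by interpolating the arrangement position, as in this sketch using the illustrative Vec3 type from earlier; the frame count and linear easing are assumptions.
```python
def transition_positions(start, end, frames=30):
    """Yield intermediate positions from `start` to `end` with linear
    easing, for a short moving image of the object settling into place."""
    for i in range(1, frames + 1):
        t = i / frames
        yield Vec3(x=start.x + (end.x - start.x) * t,
                   y=start.y + (end.y - start.y) * t,
                   z=start.z + (end.z - start.z) * t)
```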
  • the content transmission unit 134 extracts the three-dimensional image content of the area corresponding to the position and orientation of the user indicated by the latest viewing direction information from the three-dimensional image content for distribution G12 in which the arrangement position of the object content C11 has been corrected (step S265), and transmits it to the terminal device 3 (step S270).
  • when the user wants to move the position of the chair, the user moves the chair image displayed on the terminal device 3 in the desired direction while touching it with a finger (step S145: YES).
  • the transmission unit 324 of the terminal device 3 transmits the correction instruction in which the chair object ID, the movement direction, and the movement amount are set to the content providing apparatus 1 (step S150).
  • the synthesizing unit 133 of the content providing apparatus 1 acquires the object ID, the moving direction, and the moving amount from the correction instruction received by the correction instruction receiving unit 135.
  • the composition unit 133 corrects the arrangement position of the chair object content C11 in the distribution 3D image content G12 according to the correction instruction (step S245: YES, step S250).
  • the content transmission unit 134 extracts the three-dimensional image content of the area corresponding to the position and orientation of the user indicated by the latest viewing direction information from the three-dimensional image content for distribution G12 in which the arrangement position of the object content C11 has been corrected (step S265), and transmits it to the terminal device 3 (step S270). Thereby, the user can view the three-dimensional image content in which the position of the chair installed in the room R has been moved. Since the position can be corrected in this way, the markers M need not be installed at narrow intervals in the room R.
  • the user moves to a place where he wants to install different furniture in the room R.
  • the terminal device 3 transmits to the content providing apparatus 1 information on the user direction and the current position newly acquired by the movement.
  • the content providing apparatus 1 extracts the 3D image content of the area corresponding to the position and orientation of the user indicated by the newly received viewing direction information from the 3D image content G12 for distribution, and transmits it to the terminal device 3.
  • the display control unit 326 of the terminal device 3 displays the 3D image content received by the content receiving unit 325 from the content providing device 1 on the display unit 36. Thereby, the user can view the 3D image content when the room R in which the chair is installed is viewed from the moved position.
  • the user takes an image of a new furniture card to be installed by the terminal device 3 and acquires an object ID (step S125: YES, step S130). Further, the user images the marker M of the place where new furniture is desired to be installed by the terminal device 3, and obtains arrangement position information (step S135).
  • the transmission unit 324 transmits the acquired object ID and arrangement position information to the content providing apparatus 1 (step S140).
  • the content providing device 1 reads the object content and attribute information corresponding to the object ID received from the terminal device 3 (step S220: YES, step S225).
  • for example, when the received object ID is “00004”, the object content C14 of the object name “lighting 1” and the attribute information indicating the arrangement position attribute “ceiling” and the image processing attribute “one step brighter” are read from the object content management information shown in FIG. 9.
  • the synthesizing unit 133 superimposes (synthesizes) the object content C14 on the distribution three-dimensional image content G12 to generate a distribution three-dimensional image content (step S230).
  • the object content C14 is superimposed on the image of the distribution three-dimensional image content G12 so as to be arranged at a three-dimensional arrangement position indicated by the arrangement position information.
  • the three-dimensional image content for distribution generated here is a three-dimensional image content in which the background content B1, the object content C11, and the object content C14 are superimposed.
  • the composition unit 133 corrects the arrangement position of the object content C14 in the generated distribution three-dimensional image content so as to coincide with the arrangement position attribute “ceiling” set in the attribute information.
  • the upper end of the object content C14 may initially be arranged at a height below or above the ceiling in the three-dimensional image content for distribution. Accordingly, the synthesizing unit 133 corrects the arrangement position of the object content C14 so that the upper end of the object content C14 coincides with the height of the ceiling (step S235). Further, the synthesizing unit 133 may rotate the object content C14 so that the orientation of the room in the background content B1 (or the three-dimensional image content for distribution) and the orientation of the object content C14 match.
  • the synthesizing unit 133 performs the processing “one step brighter” indicated by the image processing attribute on the three-dimensional image content for distribution in which the position of the object content C14 has been corrected (step S240).
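  • one way the assumed apply_image_processing helper from the earlier sketch might realize “one step brighter” on a per-frame basis is sketched below; treating a frame as nested RGB lists and the 15% step size are assumptions about the content format.
```python
def adjust_brightness(frame_rgb, attribute, step=0.15):
    """Brighten or darken one frame, given as nested [row][pixel][r, g, b]
    lists with channel values in 0..255."""
    if attribute == "one step brighter":
        factor = 1.0 + step
    elif attribute == "one step darker":
        factor = 1.0 - step
    else:
        return frame_rgb
    return [[[min(255, int(c * factor)) for c in px] for px in row]
            for row in frame_rgb]
```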
  • the content transmission unit 134 extracts the three-dimensional image content in the region corresponding to the position and orientation of the user indicated by the latest viewing direction information from the processed three-dimensional image content for distribution (step S265), and transmits it to the terminal device 3 (step S270).
  • alternatively, the terminal device 3 may read and store object IDs and their object names in advance.
  • the user selects an object name of furniture to be installed in the room R from the object names displayed on the display unit 36 by the terminal device 3 using the input unit 31.
  • the transmission unit 324 of the terminal device 3 transmits the object ID corresponding to the selected object name in step S140.
  • FIG. 11 is a diagram showing an example of the 3D image content for distribution when a plurality of furniture objects are installed.
  • a distribution 3D image content in which object contents C21 to C28 are superimposed on a background content B1 that is a 3D image content of a room R is generated.
  • the user shown in the figure indicates the user's virtual position in the three-dimensional image content for distribution.
  • by viewing with the HMD 2 the three-dimensional image content extracted from the three-dimensional image content for distribution, the user can experience an immersive feeling as if he or she were actually in the room R in which the furniture of the object contents C21 to C28 is installed.
  • FIG. 12 is a diagram showing an example of setting object content management information.
  • information on characters that can appear on the game screen or objects that affect the display of the screen is set in the object content management information.
  • for an object placed on or moving on the ground, the placement position attribute is “ground”; for an object that moves or floats in the sky, the placement position attribute is “sky”.
  • when the arrangement position attribute is “sky”, the height from the ground may also be set.
  • for an object, such as the sun or a cumulonimbus cloud, that affects the brightness of the image, how an attribute related to image display is to be processed is set in the image processing attribute.
  • FIG. 13 is a diagram illustrating an example of 3D image content displayed by the terminal device 3.
  • the viewing direction information acquisition unit 321 of the terminal device 3 used as the display device of the HMD 2 transmits the user orientation information acquired by the detection unit 33 to the content providing device 1 as viewing direction information (steps S105 and S110).
  • when the viewing direction information receiving unit 131 of the content providing device 1 receives the viewing direction information from the terminal device 3 (step S205: YES), the synthesizing unit 133 reads the background content B2 from the storage unit 11.
  • the composition unit 133 sets the background content B2 as the three-dimensional image content for distribution G31 (step S210: NO, step S215).
  • the content transmission unit 134 extracts the three-dimensional image content in the area corresponding to the user orientation indicated by the received viewing direction information from the three-dimensional image content for distribution G31 (step S265), and transmits it to the terminal device 3 (step S270).
  • the user's position is a fixed position such as the center position of the space in the background content (or distribution 3D image content).
  • the display control unit 326 of the terminal device 3 displays the three-dimensional image content received by the content receiving unit 325 from the content providing device 1 on the display unit 36 (steps S115 to S120).
  • the user uses the image capturing unit 34 of the terminal device 3 to capture the card (medium 5), and acquires the object ID “10001” from the captured image data (step S125: YES, step S130).
  • the arrangement position information acquisition unit 323 of the terminal device 3 uses the user orientation information acquired by the detection unit 33 as arrangement position information (step S135).
  • the transmission unit 324 transmits the acquired object ID and arrangement position information to the content providing apparatus 1 (step S140).
  • the composition unit 133 of the content providing apparatus 1 reads out the object content C31 and attribute information corresponding to the received object ID “10001” (step S220: YES, step S225).
  • the synthesizing unit 133 acquires user orientation information from the received arrangement position information, and the direction indicated by the acquired information from a predetermined three-dimensional position such as the center position of the background content (or three-dimensional image content for distribution). The position advanced by a predetermined distance is acquired as the arrangement position.
  • the synthesizing unit 133 superimposes (synthesizes) the object content C31 on the current distribution 3D image content G31 to generate the updated distribution 3D image content (step S230).
  • the object content C31 is superimposed on the image of the distribution three-dimensional image content G31 so as to be arranged at the acquired three-dimensional arrangement position.
  • the composition unit 133 corrects the arrangement position of the object content C31 in the updated three-dimensional image content for distribution so as to coincide with the arrangement position “ground” set in the attribute information. That is, the composition unit 133 corrects the arrangement position of the object content C31 in the distribution 3D image content so that the lower end of the object content C31 matches the height of the ground (step S235).
  • the content transmission unit 134 extracts the 3D image content of the area corresponding to the user orientation indicated by the latest viewing direction information from the newly generated 3D image content for distribution (step S265), and transmits the content to the terminal device 3. (Step S270).
  • the user images the cloud A card (medium 5) with the imaging unit 34 of the terminal device 3, and acquires the object ID “10008” from the captured image data (step S125: YES, step S130).
  • the transmission unit 324 of the terminal device 3 transmits the acquired object ID and arrangement position information indicating the user orientation to the content providing device 1 (steps S135 to S140).
The composition unit 133 of the content providing apparatus 1 reads out the object content C38 and the attribute information corresponding to the received object ID "10008" (step S220: YES, step S225). The synthesizing unit 133 acquires the position advanced by a predetermined distance, in the direction indicated by the received arrangement position information, from a predetermined three-dimensional position such as the center position of the distribution 3D image content. The synthesizing unit 133 superimposes (synthesizes) the object content C38 on the current distribution 3D image content to generate the distribution 3D image content G32 (step S230). The object content C38 is superimposed on the image of the current distribution 3D image content so as to be arranged at the obtained three-dimensional arrangement position.
The synthesizing unit 133 corrects the arrangement position of the object content C38 in the distribution 3D image content G32 so that it coincides with the arrangement position "sky (height W from the ground)" set in the attribute information (step S235).
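Both this "sky (height W from the ground)" correction and the "ground" correction in step S235 above amount to overriding the vertical coordinate of the computed placement. A minimal sketch, assuming z is the vertical axis of the lower end of the object and an attribute encoding chosen here for illustration:

    def correct_placement(position, placement_attr):
        # Snap an (x, y, z) placement to the height required by the attribute.
        # placement_attr: None, ("ground",), or ("sky", height_w).
        x, y, z = position
        if placement_attr is None:
            return position
        if placement_attr[0] == "ground":
            return (x, y, 0.0)                # lower end of the object on the ground
        if placement_attr[0] == "sky":
            return (x, y, placement_attr[1])  # lower end at height W above the ground
        return position

    print(correct_placement((2.0, 3.0, 1.2), ("ground",)))   # tree  -> (2.0, 3.0, 0.0)
    print(correct_placement((2.0, 3.0, 1.2), ("sky", 8.0)))  # cloud -> (2.0, 3.0, 8.0)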
The content transmission unit 134 extracts the 3D image content of the area corresponding to the user orientation indicated by the latest viewing direction information from the distribution 3D image content G32 (step S265), and transmits it to the terminal device 3 (step S270). The display control unit 326 of the terminal device 3 displays the 3D image content that the content receiving unit 325 received from the content providing apparatus 1 on the display unit 36 (steps S115 to S120).
In this way, the content providing apparatus 1 generates the distribution 3D image content G33 in which the object contents C31, C32, C34, C38, and C39 are superimposed on the background content B2.
The content providing apparatus 1 transmits, to the terminal device 3, the 3D image content obtained by extracting from the distribution 3D image content G33 the range corresponding to the user orientation. The display control unit 326 of the terminal device 3 displays the 3D image content shown in (c) of FIG. 13 on the display unit 36 (steps S115 to S120). When the user orientation changes, the terminal device 3 transmits the newly acquired user orientation information to the content providing apparatus 1. The content providing apparatus 1 extracts, from the distribution 3D image content G33, the 3D image content of the area corresponding to the newly received user orientation and transmits it to the terminal device 3. The display control unit 326 of the terminal device 3 displays the 3D image content that the content receiving unit 325 received from the content providing apparatus 1 on the display unit 36. Thereby, the user can view the 3D image content as seen when looking in a different direction.
The composition unit 133 of the content providing apparatus 1 may process the distribution 3D image content according to an event. Events include, for example, receiving a predetermined combination of object IDs, and predetermined object content being placed in a predetermined arrangement. An event may further take into account the order in which the object IDs were received and the type of the background content. Processing according to an event includes, for example, superimposing new object content on the distribution 3D image content, replacing object content superimposed on the distribution 3D image content with other object content, changing the background content to other background content, and changing attributes related to the display of the distribution 3D image content (brightness, color tone, and the like).
For example, the composition unit 133 of the content providing apparatus 1 detects, as an event, that the object content "Mama" and the object content "Dad" are arranged within a predetermined distance of each other. In that case, the synthesizing unit 133 superimposes the "heart mark" object content on the distribution 3D image content so that it is placed between the "Mama" object content and the "Dad" object content.
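A minimal sketch of such a proximity event, assuming placed object content is tracked as (object ID, position) pairs and using Euclidean distance (the IDs and the threshold are illustrative, not from the embodiment):

    import math

    def detect_pair_event(placed, id_a, id_b, threshold):
        # Return the midpoint between two objects if both are placed within
        # `threshold` of each other, else None.
        pos = {obj_id: p for obj_id, p in placed}
        if id_a in pos and id_b in pos and math.dist(pos[id_a], pos[id_b]) <= threshold:
            return tuple((a + b) / 2 for a, b in zip(pos[id_a], pos[id_b]))
        return None

    placed = [("mama", (1.0, 0.0, 0.0)), ("dad", (2.0, 0.0, 0.0))]
    midpoint = detect_pair_event(placed, "mama", "dad", threshold=1.5)
    if midpoint is not None:
        # Superimpose the "heart mark" object content at the midpoint.
        print("place heart at", midpoint)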
As another example, the composition unit 133 of the content providing apparatus 1 detects, as an event, that the object ID "Futaba" (sprout) is received from the terminal device 3 first and the object ID "rain" is received afterwards. In that case, the synthesizing unit 133 replaces the sprout object content superimposed on the distribution 3D image content with a moving-image object content in which the sprout grows and blooms.
As another example, the composition unit 133 of the content providing apparatus 1 detects, as an event, that the object IDs "Momotaro", "dog", "pheasant", and "monkey" have been received. In that case, the composition unit 133 changes the background content of the distribution 3D image content to the background content of the scene of the story in which the party goes to the demon's island.
As another example, the composition unit 133 of the content providing apparatus 1 detects, as an event, that the room background content is in use and that the curtain object ID and the moon object ID have been received. In that case, the synthesizing unit 133 processes the distribution 3D image content so that its brightness is reduced.
In this way, the user can view, on a highly immersive HMD, 3D image content obtained by synthesizing the images of objects selected by the user.
The content providing apparatus 1 and the terminal device 3 described above each contain a computer system. The process of each operation of the content providing apparatus 1 and the terminal device 3 is stored in a computer-readable recording medium in the form of a program, and the above processing is performed by the computer system reading and executing this program. The computer system referred to here includes a CPU, various memories, an OS, and hardware such as peripheral devices. The "computer system" also includes a homepage providing environment (or display environment) if a WWW system is used.
The "computer-readable recording medium" refers to a portable medium such as a flexible disk, a magneto-optical disk, a ROM, or a CD-ROM, and a storage device such as a hard disk incorporated in a computer system.
The "computer-readable recording medium" also includes a medium that dynamically holds the program for a short time, such as a communication line in the case where the program is transmitted via a network such as the Internet or a communication line such as a telephone line, and a medium that holds the program for a certain period of time, such as a volatile memory inside a computer system serving as a server or a client in that case.
The program may be a program for realizing a part of the functions described above, or may be a program that realizes the functions described above in combination with a program already recorded in the computer system.

Abstract

This content provision system comprises a content provision device. The content provision device includes: a storage unit that stores, in association with one another, pieces of object identification information and pieces of object content each being three-dimensional image content of an object specified by the object identification information; an object information reception unit that receives a piece of object identification information and arrangement position information from a terminal device; a compositing unit that creates three-dimensional image content for distribution that is composited such that a piece of object content corresponding to the received object identification information is arranged on background content, which is three-dimensional image content of a background, at a three-dimensional position obtained from the received arrangement position information; and a content transmission unit that transmits, to the terminal device, the three-dimensional image content for distribution created by the compositing unit.

Description

Content providing system, content providing apparatus, and content providing method
The present invention relates to a content providing system, a content providing apparatus, and a content providing method.
A technique for showing content in 3D (three dimensions) using the parallax between a right-eye image and a left-eye image is widely known. There is also a technique for reading a two-dimensional barcode attached to a (game) card or magazine with a terminal such as a smartphone, and downloading and viewing content associated with the two-dimensional barcode (for example, see Patent Document 1).
Patent Document 1: Japanese Unexamined Patent Publication No. 2011-22798
When consumers consider buying furniture, the harmony between the furniture and the layout of their home is an important factor in the purchase decision. Consumers also often consider whether to purchase multiple pieces of furniture as a set, such as a dining table and chairs, or a sofa and a coffee table. Therefore, if a consumer could see a 3D image combining the floor plan of their home with the furniture under consideration, it would be very helpful in deciding whether to purchase the furniture. If a furniture seller can provide such 3D images to consumers, prospective purchasers can enjoy choosing furniture, examine a clear image of the rooms after the furniture is installed, and consider the purchase, which promotes sales.
Further, if a character selected by the user can be made to appear in a 3D image card game in which a plurality of characters appear, more attractive content can be provided to the user.
The present invention has been made in view of the above circumstances, and provides a content providing system, a content providing apparatus, and a content providing method capable of providing three-dimensional image content obtained by synthesizing an image of an object selected by a user.
A first aspect of the present invention is a content providing system including a terminal device and a content providing apparatus that provides three-dimensional image content to the terminal device. The terminal device includes a transmission unit that transmits, to the content providing apparatus, object identification information specifying an object and arrangement position information from which a three-dimensional position at which an image of the object is to be arranged can be acquired, and a display control unit that displays the three-dimensional image content received from the content providing apparatus in response to the transmitted object identification information and arrangement position information. The content providing apparatus includes a storage unit that stores the object identification information in association with object content, which is three-dimensional image content of the object specified by the object identification information; an object information receiving unit that receives the object identification information and the arrangement position information from the terminal device; a synthesizing unit that generates three-dimensional image content for distribution in which the object content corresponding to the received object identification information is synthesized with background content, which is three-dimensional image content of a background, so as to be arranged at the three-dimensional position obtained from the arrangement position information; and a content transmission unit that transmits the three-dimensional image content for distribution generated by the synthesizing unit to the terminal device.
A second aspect of the present invention is a content providing apparatus including: a storage unit that stores object identification information in association with object content, which is three-dimensional image content of an object specified by the object identification information; an object information receiving unit that receives, from a terminal device, the object identification information and arrangement position information from which a three-dimensional position at which an image of the object is to be arranged can be acquired; a synthesizing unit that generates three-dimensional image content for distribution in which the object content corresponding to the received object identification information is synthesized with background content, which is three-dimensional image content of a background, so as to be arranged at the three-dimensional position obtained from the received arrangement position information; and a content transmission unit that transmits the three-dimensional image content for distribution generated by the synthesizing unit to the terminal device.
According to a third aspect of the present invention, the content providing apparatus of the second aspect may further include a correction instruction receiving unit that receives, from the terminal device, a correction instruction instructing correction of the arrangement position or orientation of the object, and the synthesizing unit may correct the position or orientation at which the object content is arranged in the three-dimensional image content for distribution according to the received correction instruction.
According to a fourth aspect of the present invention, in the content providing apparatus of the second or third aspect, the storage unit may further store, in association with the object identification information, attribute information indicating an attribute of the object, and the synthesizing unit may correct the three-dimensional position obtained from the arrangement position information according to the attribute information corresponding to the received object identification information and generate the three-dimensional image content for distribution in which the object content is synthesized with the background content so as to be arranged at the corrected position.
According to a fifth aspect of the present invention, in the content providing apparatus of the fourth aspect, the synthesizing unit may correct the orientation of the object content arranged in the three-dimensional image content for distribution according to the attribute information corresponding to the received object identification information.
According to a sixth aspect of the present invention, in the content providing apparatus of the fourth or fifth aspect, the synthesizing unit may correct an attribute relating to the display of the three-dimensional image content for distribution according to the attribute information corresponding to the received object identification information.
According to a seventh aspect of the present invention, the content providing apparatus of any one of the second to sixth aspects may further include a viewing direction information receiving unit that receives, from the terminal device, viewing direction information indicating the orientation of the user, or the orientation and position of the user, and the content transmission unit may extract, from the three-dimensional image content for distribution, the three-dimensional image content of an area corresponding to the orientation of the user, or the orientation and position of the user, indicated by the viewing direction information, and transmit it to the terminal device.
According to an eighth aspect of the present invention, in the content providing apparatus of any one of the second to seventh aspects, when a predetermined combination of object identification information is received, the synthesizing unit may process the three-dimensional image content for distribution according to the combination.
A ninth aspect of the present invention is a content providing method executed by a content providing apparatus, including: an object information receiving step in which an object information receiving unit receives, from a terminal device, object identification information specifying an object and arrangement position information from which a three-dimensional position at which an image of the object is to be arranged can be acquired; a synthesizing step in which a synthesizing unit generates three-dimensional image content for distribution in which object content, which is three-dimensional image content of the object specified by the received object identification information, is synthesized with background content, which is three-dimensional image content of a background, so as to be arranged at the three-dimensional position obtained from the received arrangement position information; and a content transmission step in which a content transmission unit transmits the three-dimensional image content for distribution generated in the synthesizing step to the terminal device.
According to the above aspects of the present invention, it is possible to provide three-dimensional image content obtained by synthesizing an image of an object selected by the user.
FIG. 1 is a configuration diagram of a content providing system according to an embodiment of the present invention.
FIG. 2 is a functional block diagram of a content providing apparatus according to an embodiment of the present invention.
FIG. 3 is an external view of an HMD (head mounted display) according to an embodiment of the present invention.
FIG. 4 is a functional block diagram of a terminal device according to an embodiment of the present invention.
FIG. 5 is a diagram showing object content management information according to an embodiment of the present invention.
FIG. 6 is a flowchart of content request processing in the terminal device according to an embodiment of the present invention.
FIG. 7 is a flowchart of content providing processing in the content providing apparatus according to an embodiment of the present invention.
FIG. 8 is a diagram showing a situation in which a user uses the HMD according to an embodiment of the present invention.
FIG. 9 is a diagram showing a setting example of the object content management information according to an embodiment of the present invention.
FIG. 10 is a diagram showing a generation example of distribution 3D image content according to an embodiment of the present invention.
FIG. 11 is a diagram showing an example of distribution 3D image content according to an embodiment of the present invention.
FIG. 12 is a diagram showing a setting example of the object content management information according to an embodiment of the present invention.
FIG. 13 is a diagram showing an example of 3D image content displayed by the terminal device according to an embodiment of the present invention.
Hereinafter, embodiments of the present invention will be described in detail with reference to the drawings.
FIG. 1 is a configuration diagram of a content providing system according to an embodiment of the present invention. As shown in the figure, the content providing system includes a content providing apparatus 1, a head mounted display (hereinafter referred to as "HMD") 2, and a medium 5. Although only one HMD 2 and one medium 5 are shown in the figure, in practice there are a plurality of each. A plurality of content providing apparatuses 1 may also be provided.
The content providing apparatus 1 distributes 3D image content. 3D image content is content data that shows images such as moving images and still images in three dimensions. In the 3D image content of the present embodiment, the right-eye image is displayed on the right half of the display screen and the left-eye image on the left half.
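This side-by-side layout can be illustrated with a short sketch, assuming the left-eye and right-eye images have already been rendered as equally sized arrays (an illustration only, not the embodiment's rendering pipeline):

    import numpy as np

    def side_by_side(left_eye: np.ndarray, right_eye: np.ndarray) -> np.ndarray:
        # Compose one display frame: the left-eye image fills the left half of
        # the screen and the right-eye image the right half, as described above.
        assert left_eye.shape == right_eye.shape
        return np.hstack([left_eye, right_eye])

    # Two 240x320 RGB frames; the parallax between them produces the 3D effect.
    left = np.zeros((240, 320, 3), dtype=np.uint8)
    right = np.zeros((240, 320, 3), dtype=np.uint8)
    frame = side_by_side(left, right)
    print(frame.shape)  # (240, 640, 3)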
The HMD 2 is an example of a stereoscopic viewing instrument used for viewing 3D image content. The HMD 2 uses the terminal device 3 as its display device. Besides the HMD 2, the stereoscopic viewing instrument may be glasses used for viewing the 3D image content displayed on the terminal device 3.
The terminal device 3 is a computer terminal having a communication function, such as a smartphone or a tablet terminal, and displays the 3D image content distributed from the content providing apparatus 1 on its display. The content providing apparatus 1 and the terminal device 3 are connected via a network 9 such as the Internet.
From the medium 5, an object ID (object identification information) can be acquired, which is information for specifying an object whose image is to be superimposed (synthesized) on background content. Background content is 3D image content of a moving image or still image that serves as a background. For example, the background is a room, and the objects are a chair, a table, lighting, curtains, a painting, and the like to be placed in the room. Alternatively, the background is a place where a game or story unfolds, and the objects are characters appearing in the game or story. An object is not limited to something whose image is actually displayed as a thing; it may also be a factor that changes an attribute related to the display of the image. For example, a sun object changes the brightness of the image to be brighter.
The medium 5 is, for example, a card, tag, sticker, or marker on which a character string or an image (for example, a two-dimensional barcode) representing the object ID is printed. When a card or the like is used as the medium 5, it is preferable to use one on which a photograph or picture of the object, the object name, a description of the object, or the like is printed or written, so that the user can easily recognize which object's object ID can be obtained from the medium 5. A card, tag, sticker, or marker serving as the medium 5 may also be affixed to the actual object (for example, furniture) specified by the object ID obtained from the medium 5. The medium 5 may also be an RFID (Radio Frequency Identification) tag storing the object ID. The RFID tag storing the object ID is affixed to, for example, a card or the actual object.
FIG. 2 is a functional block diagram showing the configuration of the content providing apparatus 1; only the functional blocks related to the present embodiment are extracted and shown. The content providing apparatus 1 is realized by one or more computer servers, and includes a storage unit 11, a communication unit 12, and a processing unit 13.
The storage unit 11 stores various kinds of information such as background content and object content management information. The object content management information is data in which an object ID, object content, and attribute information of the object are associated with one another. The object content is 3D image content of a moving image or still image of the object. The attribute information indicates attributes relating to the arrangement position of the object and attributes indicating how the display of the image is to be affected (processed).
The communication unit 12 transmits and receives data to and from other devices via the network 9.
The processing unit 13 includes a viewing direction information receiving unit 131, an object information receiving unit 132, a synthesizing unit 133, a content transmission unit 134, a correction instruction receiving unit 135, and a deletion instruction receiving unit 136.
The viewing direction information receiving unit 131 receives, from the terminal device 3, viewing direction information indicating the user's viewing direction, that is, the direction the user is facing. The viewing direction information may include information on the user's current position. The current position may be a geographical absolute position or a relative position based on a predetermined position in the image of the background content.
The object information receiving unit 132 receives, from the terminal device 3, an object ID and arrangement position information from which a three-dimensional position at which an image of the object is to be arranged can be acquired. The arrangement position information may be set as a geographical absolute position, or as a relative position or direction based on a predetermined position in the image of the background content.
The synthesizing unit 133 synthesizes the object content corresponding to the received object ID with the background content so that the object content is arranged at the three-dimensional position acquired from the received arrangement position information, and thereby generates the distribution 3D image content, which is the 3D image content for distribution. When the arrangement position information indicates a geographical absolute position, the synthesizing unit 133 converts the absolute position into a position in the image of the background content.
The content transmission unit 134 transmits, to the terminal device 3, the 3D image content extracted, based on the viewing direction information, from the distribution 3D image content generated by the synthesizing unit 133.
The correction instruction receiving unit 135 receives a correction instruction instructing correction of the arrangement position or orientation of object content superimposed on the distribution 3D image content. The synthesizing unit 133 corrects the position or orientation at which the object content is arranged in the distribution 3D image content in accordance with the correction instruction.
The deletion instruction receiving unit 136 receives a deletion instruction instructing deletion of object content superimposed on the distribution 3D image content. The synthesizing unit 133 deletes the object content whose deletion is instructed by the deletion instruction from the distribution 3D image content.
FIG. 3 is an external view of the HMD 2. The figure shows an example of the HMD 2 in the case where a smartphone is used as the terminal device 3.
The right-eye lens 21 of the HMD 2 is a lens for viewing the right-eye image that the terminal device 3 displays on the right side of its display. The left-eye lens 22 of the HMD 2 is a lens for viewing the left-eye image that the terminal device 3 displays on the left side of its display. A partition 23 is attached to the HMD 2 so that the left-eye image cannot be seen through the right-eye lens 21 and the right-eye image cannot be seen through the left-eye lens 22. The user sets the terminal device 3 so that the edge of the partition 23 opposite to the lenses overlaps the vicinity of the boundary between the right-eye image and the left-eye image displayed on the screen of the terminal device 3. The user views the 3D image content displayed by the terminal device 3 through the right-eye lens 21 and the left-eye lens 22.
FIG. 4 is a functional block diagram showing the configuration of the terminal device 3; only the functional blocks related to the present embodiment are extracted and shown. The terminal device 3 is, for example, a smartphone, but may be a tablet terminal, a mobile phone terminal, a portable personal computer, or the like. The terminal device 3 includes an input unit 31, a processing unit 32, a detection unit 33, an imaging unit 34, a communication unit 35, and a display unit 36. When the medium 5 is an RFID tag, the terminal device 3 also includes a tag reader.
The input unit 31 receives information input by user operations. The processing unit 32 includes a viewing direction information acquisition unit 321, an object identification acquisition unit 322, an arrangement position information acquisition unit 323, a transmission unit 324, a content receiving unit 325, and a display control unit 326. The viewing direction information acquisition unit 321 acquires viewing direction information. The object identification acquisition unit 322 acquires the object ID recorded on the medium 5. The arrangement position information acquisition unit 323 acquires arrangement position information representing the three-dimensional position at which the image of the object is to be arranged. The transmission unit 324 transmits various kinds of information, such as the viewing direction information, the object ID, and the arrangement position information, to the content providing apparatus 1. The transmission unit 324 also transmits correction instructions and deletion instructions to the content providing apparatus 1 based on user operations input through the input unit 31. The content receiving unit 325 receives 3D image content from the content providing apparatus 1. The display control unit 326 displays various data such as 3D image content on the display unit 36. The detection unit 33 is a sensor that detects orientation. The detection unit 33 may also include a GPS (Global Positioning System) receiver that obtains current position information. The imaging unit 34 is a camera. The communication unit 35 transmits and receives information via the network 9. The display unit 36 is a display that displays data. When the terminal device 3 is a smartphone, the display unit 36 is a touch panel, and the input unit 31 is the sensor arranged on the touch panel.
FIG. 5 is a diagram showing an example of the data configuration of the object content management information. The object content management information shown in the figure associates an object ID, an object name, object content, and attribute information with one another. Only one of the object content and the attribute information may be set for a given object ID. The attribute information includes an arrangement position attribute and an image processing attribute. The arrangement position attribute indicates an attribute relating to the position at which the object is arranged. The image processing attribute indicates how attributes related to the display of the image are to be affected (processed). Only one of the arrangement position attribute and the image processing attribute may be set in the attribute information.
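As a data structure, one record of this management information might be sketched as follows (the field names, types, and sample values are illustrative, not taken from FIG. 5 itself; the actual object content would be 3D image data rather than a byte string):

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class ObjectContentRecord:
        object_id: str                        # e.g. "10001"
        object_name: str                      # e.g. "tree"
        object_content: Optional[bytes]       # 3D image content; may be unset
        placement_attr: Optional[str]         # e.g. "ground" or "sky (height W)"
        image_processing_attr: Optional[str]  # e.g. "reduce brightness"

    # The storage unit 11 keeps such records keyed by object ID.
    object_content_management = {
        "10001": ObjectContentRecord("10001", "tree", b"<3d-image-data>", "ground", None),
    }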
Next, the operation of the content providing system will be described.
FIG. 6 is a flowchart of the content request processing in the terminal device 3.
When the user inputs a content request through the input unit 31 of the terminal device 3, the viewing direction information acquisition unit 321 acquires viewing direction information (step S105). For example, the viewing direction information acquisition unit 321 acquires orientation and current position information from the detection unit 33 and uses it as the viewing direction information. Instead of acquiring current position information from the GPS included in the detection unit 33, the viewing direction information acquisition unit 321 may receive position information by short-range communication from a communication device provided on a wall or floor of the building where the user currently is, or on a signboard, statue, or the like installed at the user's location, and use it as the current position information. For the short-range communication, WiFi (registered trademark), Bluetooth (registered trademark), visible light communication, infrared communication, NFC (Near Field Communication), or the like can be used. Alternatively, the imaging unit 34 may capture characters or an image (such as a two-dimensional barcode) indicating position information drawn on a poster or marker affixed to a wall or floor of the building where the user currently is, or to a signboard, statue, or the like installed at the user's location; the viewing direction information acquisition unit 321 then acquires the position information from the captured image data and uses it as the current position information. The transmission unit 324 transmits the viewing direction information acquired by the viewing direction information acquisition unit 321 to the content providing apparatus 1 (step S110).
The content receiving unit 325 receives 3D image content from the content providing apparatus 1 (step S115). The display control unit 326 displays the received 3D image content on the display unit 36 (step S120).
When the processing unit 32 determines that an object ID acquisition instruction has not been input (step S125: NO), it performs the process of step S145 described later.
On the other hand, when the processing unit 32 determines that an object ID acquisition instruction has been input through the input unit 31 (step S125: YES), the object identification acquisition unit 322 acquires the object ID recorded on the medium 5 (step S130). For example, the imaging unit 34 captures the characters or image representing the object ID printed on the medium 5, and the object identification acquisition unit 322 acquires the object ID from the captured image data. Alternatively, when the medium 5 is an RFID tag, the object identification acquisition unit 322 acquires the object ID read from the medium 5 by the tag reader. Alternatively, when the medium 5 is a communication device, the object identification acquisition unit 322 receives the object ID transmitted from the medium 5 by short-range communication.
Subsequently, the arrangement position information acquisition unit 323 acquires arrangement position information (step S135). For example, the arrangement position information acquisition unit 323 acquires current position information from the GPS included in the detection unit 33 and uses it as the arrangement position information. Alternatively, the arrangement position information acquisition unit 323 may use the user orientation information acquired from the detection unit 33 as the arrangement position information. When the arrangement position information indicates the user orientation, the arrangement position is the position advanced by a predetermined distance in the user's direction from a predetermined three-dimensional position in the image of the background content (or the distribution 3D image content). Alternatively, the arrangement position information acquisition unit 323 may receive position information by short-range communication from a communication device provided on a wall or floor of the building where the user currently is, or on a signboard, statue, or the like installed at the user's location, and use it as the arrangement position information. Alternatively, the imaging unit 34 may capture characters or an image indicating position information drawn on a poster or marker affixed to a wall or floor of the building where the user currently is, or to a signboard, statue, or the like installed at the user's location; the arrangement position information acquisition unit 323 then acquires the position information from the captured image data and uses it as the arrangement position information. The transmission unit 324 transmits the object ID acquired by the object identification acquisition unit 322 and the arrangement position information acquired by the arrangement position information acquisition unit 323 to the content providing apparatus 1 (step S140). The processing unit 32 of the terminal device 3 then repeats the processing from step S115.
After the processing unit 32 determines NO in step S125, the transmission unit 324 determines whether an instruction to correct the position or orientation of an object has been input (step S145). When the transmission unit 324 determines that no correction instruction has been input (step S145: NO), it performs the process of step S155 described later.
On the other hand, when the transmission unit 324 determines that a correction instruction has been input (step S145: YES), it transmits the input correction instruction to the content providing apparatus 1 (step S150).
For example, when the user wants to move the position of an object image, the user touches the object image displayed on the touch panel with a finger and moves it in the desired direction (a drag operation). The transmission unit 324 transmits, to the content providing apparatus 1, a correction instruction in which the object ID of the object whose image was displayed at the touched position on the screen, the movement direction, and the movement amount are set. The movement amount is information corresponding to the distance the finger was moved while touching the touch panel. Alternatively, while wearing the HMD 2, the user shakes his or her head in the direction of the desired movement. The sensor of the detection unit 33 detects the direction in which the head was shaken, and the transmission unit 324 transmits to the content providing apparatus 1 a correction instruction in which the detected direction is set as the movement direction. At this time, the transmission unit 324 may set in the correction instruction a movement amount corresponding to the speed or the distance with which the head was shaken. Alternatively, the terminal device 3 may be provided with a sensor that detects the line of sight, and the direction in which the user moved the line of sight is detected. The transmission unit 324 transmits to the content providing apparatus 1 a correction instruction in which the object ID of the object whose image was displayed at the position of the user's gaze and the detected direction as the movement direction are set. At this time, the transmission unit 324 may set in the correction instruction a movement amount corresponding to the distance the line of sight moved.
When the user wants to rotate the orientation of an object image, the user performs an operation such as tapping the object image displayed on the touch panel. The transmission unit 324 transmits, to the content providing apparatus 1, the object ID of the object whose image was displayed at the tapped position on the screen and a correction instruction in which rotation is set.
The processing unit 32 of the terminal device 3 then repeats the processing from step S115.
After determining NO in step S145, the transmission unit 324 determines whether an object deletion instruction has been input (step S155).
When the transmission unit 324 determines that an object deletion instruction has been input (step S155: YES), it transmits the input deletion instruction to the content providing apparatus 1 (step S160). For example, the user touches the image of the object to be deleted displayed on the touch panel with a finger and flicks it toward the outside of the screen, or double-taps the image of the object to be deleted. The transmission unit 324 transmits, to the content providing apparatus 1, a deletion instruction in which the object ID of the object on which such a deletion operation was performed is set.
The processing unit 32 of the terminal device 3 then repeats the processing from step S115.
On the other hand, when the transmission unit 324 determines that no object deletion instruction has been input (step S155: NO), the processing unit 32 determines whether an instruction to end processing has been input through the input unit 31 (step S165). When the processing unit 32 determines that no end instruction has been input (step S165: NO), it repeats the processing from step S105.
When the processing unit 32 determines in step S165 that an end instruction has been input (step S165: YES), it ends the processing.
FIG. 7 is a flowchart of the content providing processing in the content providing apparatus 1.
The synthesizing unit 133 of the content providing apparatus 1 determines whether the viewing direction information receiving unit 131 has received viewing direction information from the terminal device 3 (step S205). When the synthesizing unit 133 determines that the viewing direction information receiving unit 131 has not received viewing direction information (step S205: NO), it performs the process of step S220 described later.
When the synthesizing unit 133 determines that the viewing direction information receiving unit 131 has received viewing direction information (step S205: YES), it determines whether the distribution 3D image content has already been generated (step S210). When the synthesizing unit 133 determines that the distribution 3D image content has already been generated (step S210: YES), it performs the process of step S220 described later.
When the synthesizing unit 133 determines that the distribution 3D image content has not yet been generated (step S210: NO), it reads the background content from the storage unit 11 and uses it as the distribution 3D image content (step S215). Information specifying the background content may additionally be received from the terminal device 3; in that case, the synthesizing unit 133 reads the background content specified by the received information from the storage unit 11 and uses it as the distribution 3D image content.
After determining NO in step S205 or YES in step S210, or after the process of step S215, the synthesizing unit 133 determines whether the object information receiving unit 132 has received an object ID and arrangement position information (step S220). When the synthesizing unit 133 determines that the object information receiving unit 132 has not received an object ID and arrangement position information (step S220: NO), it performs the process of step S245 described later.
On the other hand, when the synthesizing unit 133 determines that the object information receiving unit 132 has received an object ID and arrangement position information (step S220: YES), it performs the process of step S225. That is, the synthesizing unit 133 reads the object content and attribute information corresponding to the received object ID from the object content management information stored in the storage unit 11 (step S225). The synthesizing unit 133 obtains the three-dimensional arrangement position in the distribution 3D image content from the received arrangement position information; when the arrangement position information indicates a geographical absolute position, the synthesizing unit 133 converts it into a position in the background image. The synthesizing unit 133 superimposes (synthesizes) the read object content on the current distribution 3D image content so as to arrange it at the arrangement position in the distribution 3D image content, thereby generating the updated distribution 3D image content (step S230). The object ID of the object content is attached to the object content superimposed on the distribution 3D image content.
Further, the synthesizing unit 133 corrects the three-dimensional arrangement position of the object content arranged in step S230 according to the arrangement position attribute indicated by the read attribute information (step S235). The synthesizing unit 133 may instead correct the arrangement position indicated by the received arrangement position information according to the arrangement position attribute before synthesis; in that case, the synthesizing unit 133 superimposes (synthesizes) the object content on the current distribution 3D image content so as to arrange it at the corrected arrangement position, thereby generating the updated distribution 3D image content. The synthesizing unit 133 may further correct the orientation of the object content according to the arrangement position attribute. The synthesizing unit 133 then processes the display-related attributes of the generated distribution 3D image content according to the image processing attribute indicated by the read attribute information (step S240).
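The processing of step S240 can be sketched as a per-pixel adjustment, assuming the frame is an RGB array and a brightness-factor encoding chosen here for illustration (the embodiment names only "brightness, color tone, etc."):

    import numpy as np

    def apply_image_processing_attr(frame: np.ndarray, attr) -> np.ndarray:
        # Apply an image processing attribute to a rendered frame.
        # attr: None or ("brightness", factor); factor > 1 brightens, < 1 darkens.
        if attr is None:
            return frame
        kind, value = attr
        if kind == "brightness":
            return np.clip(frame.astype(np.float32) * value, 0, 255).astype(np.uint8)
        return frame

    frame = np.full((2, 2, 3), 100, dtype=np.uint8)
    print(apply_image_processing_attr(frame, ("brightness", 1.2))[0, 0])  # [120 120 120]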
When no object content was read in step S225, the synthesizing unit 133 does not perform the processes of steps S230 and S235. Likewise, when no arrangement position attribute is set in the attribute information, the synthesizing unit 133 skips the process of step S235, and when no image processing attribute is set in the attribute information, it skips the process of step S240.
After determining NO in step S220, or after the process of step S240, the synthesizing unit 133 determines whether the correction instruction receiving unit 135 has received a correction instruction from the terminal device 3 (step S245). When the synthesizing unit 133 determines that the correction instruction receiving unit 135 has not received a correction instruction (step S245: NO), it performs the process of step S255 described later.
When the synthesizing unit 133 determines that the correction instruction receiving unit 135 has received a correction instruction (step S245: YES), it corrects the arrangement position or orientation of the object content in the distribution 3D image content according to the correction instruction (step S250).
Specifically, the synthesizing unit 133 corrects the current arrangement position of the object content in the distribution 3D image content according to the movement direction and movement amount indicated by the correction instruction. When an object ID is set in the correction instruction, the synthesizing unit 133 moves the arrangement position of only the object content specified by that object ID. When no movement amount is set in the correction instruction, the synthesizing unit 133 moves the arrangement position by a fixed movement amount.
When an object ID and rotation are set in the correction instruction, the synthesizing unit 133 rotates the orientation of the object content specified by that object ID, among the object contents superimposed on the distribution 3D image content, by a predetermined angle in a predetermined direction.
 After determining NO in step S245, or after step S250, the composition unit 133 determines whether the deletion instruction receiving unit 136 has received a deletion instruction from the terminal device 3 (step S255). If no deletion instruction has been received (step S255: NO), the composition unit 133 proceeds to step S265, described later. If a deletion instruction has been received (step S255: YES), the composition unit 133 removes, from the object contents superimposed on the distribution 3D image content, the object content identified by the object ID set in the deletion instruction (step S260).
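 The correction (steps S245 to S250) and deletion (steps S255 to S260) handling can be sketched in the same style, reusing the hypothetical `scene` dictionary and `PlacedObject` type from the earlier sketch; the instruction dictionaries and the fixed default movement amount are likewise assumptions.

```python
def apply_correction(scene, instruction, default_amount=0.1):
    """Steps S245-S250: move or rotate the targeted object content."""
    for obj in scene["objects"]:
        target = instruction.get("object_id")
        if target is not None and target != obj.object_id:
            continue                     # only the identified object changes
        if "rotation" in instruction:    # rotate by a predetermined angle
            obj.yaw += instruction["rotation"]
        else:                            # move by direction and amount
            amount = instruction.get("amount", default_amount)
            dx, dy, dz = instruction["direction"]
            x, y, z = obj.position
            obj.position = (x + dx * amount, y + dy * amount, z + dz * amount)

def apply_deletion(scene, instruction):
    """Steps S255-S260: drop the object content named by the object ID."""
    scene["objects"] = [obj for obj in scene["objects"]
                        if obj.object_id != instruction["object_id"]]
```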
 When the composition unit 133 determines NO in step S255, or after step S260, the content transmission unit 134 performs step S265: it extracts from the distribution 3D image content the region (portion) of 3D image content corresponding to the user direction and position indicated by the viewing direction information received from the terminal device 3 (step S265). If no user position is set in the viewing direction information, a predetermined position such as the center of the background content is used. The content transmission unit 134 then transmits the extracted 3D image content to the terminal device 3 (step S270).
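 One way to picture step S265's extraction is as selecting the content that falls within a horizontal field of view around the user's facing direction. The sketch below makes that simplification over the `scene` from the earlier sketches; the 110-degree default field of view is an arbitrary assumption.

```python
import math

def extract_view(scene, user_yaw, user_position=None, fov_deg=110.0):
    """Step S265: keep the content visible from the user's direction/position."""
    if user_position is None:
        # no position in the viewing direction information:
        # fall back to a predetermined point such as the centre
        user_position = scene.get("center", (0.0, 0.0, 0.0))
    ux, uy, _ = user_position
    half_fov = math.radians(fov_deg) / 2.0
    visible = []
    for obj in scene["objects"]:
        ox, oy, _ = obj.position
        bearing = math.atan2(oy - uy, ox - ux)
        # wrap the angular difference into [-pi, pi] before comparing
        delta = math.atan2(math.sin(bearing - user_yaw),
                           math.cos(bearing - user_yaw))
        if abs(delta) <= half_fov:
            visible.append(obj)
    return visible
```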
 The imaging unit 34 of the terminal device 3 may also capture moving image data continuously; in that case the object identification acquisition unit 322 detects characters or images representing object IDs in the continuously captured moving image data and acquires the object IDs from them, without the user entering an acquisition instruction through the input unit 31. When an object ID is acquired, the arrangement position information acquisition unit 323 likewise acquires arrangement position information without user input, and the transmission unit 324 transmits the acquired object ID and arrangement position information to the content providing device 1.
 In this way, while the imaging unit 34 keeps capturing, the terminal device 3 repeatedly detects object IDs from the moving image data and transmits each detected object ID, together with the arrangement position information at the time of detection, to the content providing device 1, with no user instruction required. The user can therefore view 3D image content to which objects are added one after another simply by holding a medium 5 over the imaging unit 34 or pointing the imaging unit 34 at a medium 5, without performing an operation each time.
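 This hands-free mode is essentially a capture loop. The sketch below assumes hypothetical `camera`, `decode_object_id`, `read_marker`, and `send` callables standing in for the imaging unit 34, the object-ID detector, the placement-marker reader, and the transmission unit 324; the duplicate-suppression check and the scan interval are additions not stated in the text.

```python
import time

def hands_free_loop(camera, decode_object_id, read_marker, send,
                    interval_s=0.05):
    """Scan live frames and send (object ID, placement) with no user input."""
    last_sent = None
    while camera.is_capturing():
        frame = camera.read()
        object_id = decode_object_id(frame)       # characters/images in frame
        if object_id is not None:
            placement = read_marker(frame)        # placement info at detection
            if (object_id, placement) != last_sent:
                send(object_id, placement)        # to content providing device 1
                last_sent = (object_id, placement)
        time.sleep(interval_s)                    # throttle the scan rate
```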
 Next, specific usage examples of the content providing system are described.
 As a first usage example, consider a user viewing, as a 3D image, how a room would look with furniture placed in it.
 FIG. 8 shows a situation in which a user uses the HMD 2. The user is in a room R with the same floor plan as the room in which the furniture is to be placed. Markers M printed with position information are affixed to the floor and walls of the room R.
 FIG. 9 shows an example of object content management information. As shown in the figure, the object content management information holds entries for furniture and interior items such as chairs, curtains, and clocks, as well as for objects that affect the appearance of the room, such as the sun. Objects placed on the floor are given the arrangement position attribute "floor"; objects installed at a window, "window"; objects mounted on a wall, "wall"; and objects hung from the ceiling, "ceiling". For objects that affect image brightness, such as lighting or the sun, the image processing attribute specifies how the image is to be processed.
 The arrangement position attribute may also imply an orientation for the object. For example, with the attribute "floor" the object content is placed so that its orientation matches the room; with "window", parallel to the window; and with "wall", parallel to the wall. The orientation of the object may also be set explicitly in the arrangement position attribute.
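 Using the hypothetical `ObjectRecord` type from the earlier sketch, a table like FIG. 9's might be populated as below. The entries for IDs "00001" (chair, floor) and "00004" (light 1, ceiling, "brighten one step") follow the examples given later in this description; the remaining rows and the orientation mapping are illustrative assumptions.

```python
# Illustrative rendering of FIG. 9's object content management information.
FURNITURE_TABLE = {
    "00001": ObjectRecord(content="chair",   placement_attr="floor"),
    "00002": ObjectRecord(content="curtain", placement_attr="window"),
    "00003": ObjectRecord(content="clock",   placement_attr="wall"),
    "00004": ObjectRecord(content="light 1", placement_attr="ceiling",
                          processing_attr="brighten one step"),
}

# One way the arrangement position attribute could imply an orientation.
IMPLIED_ORIENTATION = {
    "floor":  "match the room's orientation",
    "window": "parallel to the window",
    "wall":   "parallel to the wall",
}
```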
 FIG. 10 shows an example of generating the distribution 3D image content.
 First, the viewing direction information acquisition unit 321 of the terminal device 3, used as the display device of the HMD 2, transmits the user direction and current position acquired by the detection unit 33 to the content providing device 1 as viewing direction information (steps S105, S110). When the viewing direction information receiving unit 131 of the content providing device 1 receives the viewing direction information (step S205: YES), the composition unit 133 reads from the storage unit 11 the background content B1 of the room R corresponding to the current position indicated by the viewing direction information, and sets the background content B1 as the distribution 3D image content G11, as shown in FIG. 10(a) (step S210: NO, step S215). The background content B1 is 3D image content generated by photographing the room R in advance; by specifying a position and direction, the 3D image content as seen from that position in that direction can be obtained from it. The content transmission unit 134 extracts from the distribution 3D image content G11 the region of 3D image content corresponding to the user position and direction indicated by the received viewing direction information (step S265) and transmits it to the terminal device 3 (step S270). The display control unit 326 of the terminal device 3 displays on the display unit 36 the 3D image content that the content receiving unit 325 received from the content providing device 1 (steps S115 to S120). The user can thus view the 3D image content of the room R as seen from the current position.
 When the user moves within the room R, the viewing direction information acquisition unit 321 of the terminal device 3 transmits the newly acquired user direction and current position to the content providing device 1 as viewing direction information (steps S105, S110). The viewing direction information receiving unit 131 of the content providing device 1 receives the new viewing direction information (step S205). The content transmission unit 134 extracts from the already generated distribution 3D image content G11 the region of 3D image content corresponding to the user position and direction indicated by the newly received viewing direction information (step S265) and transmits it to the terminal device 3 (step S270). The display control unit 326 of the terminal device 3 displays the received 3D image content on the display unit 36 (steps S115 to S120). The user can thus view the 3D image content of the room R as seen from the new position.
 When the user moves to the place in the room R where a piece of furniture is to be installed, the user selects a card (medium 5) on which a photograph of the desired furniture and its object ID are printed, and enters an imaging instruction through the input unit 31 of the terminal device 3 (step S125: YES). The object identification acquisition unit 322 acquires the object ID from the image data captured by the imaging unit 34 (step S130). The user then uses the imaging unit 34 to capture the marker M at the place where the furniture is to be installed. The arrangement position information acquisition unit 323 acquires the position information from the captured image data and uses it as the arrangement position information (step S135). The transmission unit 324 transmits the acquired object ID and arrangement position information to the content providing device 1 (step S140).
 When the object information receiving unit 132 of the content providing device 1 receives the object ID and arrangement position information (step S220: YES), the composition unit 133 reads the object content and attribute information corresponding to the object ID (step S225). Assume here that the received object ID is "00001" and that the object content C11 with the object name "chair" and attribute information indicating the arrangement position attribute "floor" are read from the object content management information shown in FIG. 9. The composition unit 133 superimposes (composites) the object content C11 onto the current distribution 3D image content G11 to generate the distribution 3D image content G12 shown in FIG. 10(b) (step S230). The object content C11 is superimposed on the image of the distribution 3D image content G11 so as to be placed at the three-dimensional arrangement position indicated by the arrangement position information.
 The composition unit 133 corrects the arrangement position of the object content C11 in the generated distribution 3D image content G12 to match the arrangement position attribute "floor" set in the attribute information. For example, the object content C11 may be oriented correctly but placed with its lower end above the floor level of the distribution 3D image content G12; in that case the composition unit 133 corrects the arrangement position so that the lower end of the object content C11 coincides with the floor level (step S235). Further, as shown in FIG. 10(c), the composition unit 133 may rotate the object content C11 so that its orientation matches the orientation of the room in the background content B1 (or the distribution 3D image content G12). The composition unit 133 corrects the arrangement position, including the orientation, in the same way when it superimposes other object content whose arrangement position attribute is "floor", such as a carpet, in step S230. Similarly, when it superimposes object content whose arrangement position attribute is "wall", such as a clock, the composition unit 133 corrects the arrangement position to a wall position in the distribution 3D image content and rotates the orientation so that the content is parallel to the wall. The composition unit 133 may generate the distribution 3D image content directly with the corrected arrangement position of the object content C11, or may generate distribution 3D image content of a moving image in which the object content C11 moves from its initial arrangement position to the corrected one.
 The content transmission unit 134 extracts from the distribution 3D image content G12, with the arrangement position of the object content C11 corrected, the region of 3D image content corresponding to the user position and direction indicated by the latest viewing direction information (step S265) and transmits it to the terminal device 3 (step S270).
 To move the chair, the user touches the image of the chair displayed on the terminal device 3 and drags it in the desired direction (step S145: YES). The transmission unit 324 of the terminal device 3 transmits to the content providing device 1 a correction instruction in which the object ID of the chair, the movement direction, and the movement amount are set (step S150).
 The composition unit 133 of the content providing device 1 obtains the object ID, movement direction, and movement amount from the correction instruction received by the correction instruction receiving unit 135, and corrects the arrangement position of the chair object content C11 in the distribution 3D image content G12 according to the instruction (step S245: YES, step S250). The content transmission unit 134 extracts from the corrected distribution 3D image content G12 the region of 3D image content corresponding to the user position and direction indicated by the latest viewing direction information (step S265) and transmits it to the terminal device 3 (step S270). The user can thus view the 3D image content with the chair moved to its new position in the room R. Because positions can be corrected in this way, the markers M need not be placed at close intervals in the room R.
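 On the terminal side, the drag gesture in steps S145 to S150 reduces to converting a pixel displacement into a movement direction and amount. The sketch below is one way the correction instruction payload could be built; the pixel-to-distance scale and the mapping of screen axes onto the room's horizontal plane are assumed calibrations, since the text only specifies that the instruction carries an object ID, a direction, and an amount.

```python
import math

def drag_to_correction(object_id, start_px, end_px, pixels_per_metre=500.0):
    """Steps S145-S150: turn a touch drag into a correction instruction."""
    # assumed mapping: screen x/y axes onto the room's horizontal plane
    dx = (end_px[0] - start_px[0]) / pixels_per_metre
    dy = (end_px[1] - start_px[1]) / pixels_per_metre
    amount = math.hypot(dx, dy)
    if amount == 0.0:
        return None                      # no movement, nothing to send
    return {"object_id": object_id,
            "direction": (dx / amount, dy / amount, 0.0),
            "amount": amount}
```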
 The user then moves to the place in the room R where a different piece of furniture is to be installed. While the user is moving, the terminal device 3 keeps transmitting the newly acquired user direction and current position to the content providing device 1. The content providing device 1 extracts from the distribution 3D image content G12 the region of 3D image content corresponding to the user position and direction indicated by the newly received viewing direction information and transmits it to the terminal device 3. The display control unit 326 of the terminal device 3 displays the received 3D image content on the display unit 36. The user can thus view the 3D image content of the room R, with the chair installed, as seen from the new position.
 The user captures the card for the new piece of furniture with the terminal device 3 and acquires its object ID (step S125: YES, step S130). The user further captures the marker M at the place where the new furniture is to be installed and obtains the arrangement position information (step S135). The transmission unit 324 transmits the acquired object ID and arrangement position information to the content providing device 1 (step S140).
 The content providing device 1 reads the object content and attribute information corresponding to the object ID received from the terminal device 3 (step S220: YES, step S225). Assume here that the received object ID is "00004" and that the object content C14 with the object name "light 1" and attribute information indicating the arrangement position attribute "ceiling" and the image processing attribute "brighten one step" are read from the object content management information shown in FIG. 9. The composition unit 133 superimposes (composites) the object content C14 onto the distribution 3D image content G12 to generate new distribution 3D image content (step S230). The object content C14 is superimposed on the image of the distribution 3D image content G12 so as to be placed at the three-dimensional arrangement position indicated by the arrangement position information. The distribution 3D image content generated here is 3D image content in which the background content B1, the object content C11, and the object content C14 are superimposed.
 The composition unit 133 corrects the arrangement position of the object content C14 in the generated distribution 3D image content to match the arrangement position attribute "ceiling" set in the attribute information. For example, the upper end of the object content C14 may initially be placed below or above the ceiling; the composition unit 133 corrects the arrangement position so that the upper end of the object content C14 coincides with the ceiling height (step S235). The composition unit 133 may further rotate the object content C14 so that its orientation matches the orientation of the room in the background content B1 (or the distribution 3D image content). The composition unit 133 then processes the distribution 3D image content, with the corrected position of the object content C14, to make it "one step brighter" as indicated by the image processing attribute (step S240). The content transmission unit 134 extracts from the processed distribution 3D image content the region of 3D image content corresponding to the user position and direction indicated by the latest viewing direction information (step S265) and transmits it to the terminal device 3 (step S270).
 By repeating the above, the user can view, as 3D images, how the room R would look with the furniture installed.
 The terminal device 3 may also read and store object IDs in advance from media 5 attached to actual furniture exhibited, for example, at a showroom. In this case it is desirable to associate each object ID with an object name, such as the name of the furniture. The user then uses the input unit 31 to select, from the object names displayed on the display unit 36, the object name of the furniture to be installed in the room R, and the transmission unit 324 transmits the object ID corresponding to the selected object name in step S140.
 FIG. 11 shows an example of the distribution 3D image content when multiple furniture objects have been installed. In the figure, distribution 3D image content has been generated by superimposing object contents C21 to C28 on the background content B1, the 3D image content of the room R. The user shown in the figure indicates the user's virtual position within the distribution 3D image content. By viewing the 3D image content extracted from the distribution 3D image content on the HMD 2, the user can experience the immersive sensation of actually being in the room R with the furniture of object contents C21 to C28 installed.
 As a second usage example, consider a user making a character of the user's choice appear in a 3D game screen.
 FIG. 12 shows another example of object content management information. As shown in the figure, the object content management information holds entries for characters that can appear on the game screen and for objects that affect the display of the screen. Objects that move along or rest on the ground are given the arrangement position attribute "ground"; objects that move through or float in the sky, "sky". When the arrangement position attribute is "sky", a height above the ground may also be set. For objects that affect image brightness, such as the sun or cumulonimbus clouds, the image processing attribute specifies how the display-related attributes of the image are to be processed.
 FIG. 13 shows examples of the 3D image content displayed by the terminal device 3.
 First, the viewing direction information acquisition unit 321 of the terminal device 3, used as the display device of the HMD 2, transmits the user direction acquired by the detection unit 33 to the content providing device 1 as viewing direction information (steps S105, S110). When the viewing direction information receiving unit 131 of the content providing device 1 receives the viewing direction information (step S205: YES), the composition unit 133 reads the background content B2 from the storage unit 11 and sets it as the distribution 3D image content G31 (step S210: NO, step S215). The content transmission unit 134 extracts from the distribution 3D image content G31 the region of 3D image content corresponding to the user direction indicated by the received viewing direction information (step S265) and transmits it to the terminal device 3 (step S270). The user position is fixed, for example at the center of the space in the background content (or the distribution 3D image content). The display control unit 326 of the terminal device 3 displays the received 3D image content on the display unit 36, as shown in FIG. 13(a) (steps S115 to S120).
 The user captures the frog (child) card (medium 5) with the imaging unit 34 of the terminal device 3 and acquires the object ID "10001" from the captured image data (step S125: YES, step S130). The arrangement position information acquisition unit 323 of the terminal device 3 uses the user direction acquired by the detection unit 33 as the arrangement position information (step S135). The transmission unit 324 transmits the acquired object ID and arrangement position information to the content providing device 1 (step S140).
 The composition unit 133 of the content providing device 1 reads the object content C31 and attribute information corresponding to the received object ID "10001" (step S220: YES, step S225). The composition unit 133 obtains the user direction from the received arrangement position information, and takes as the arrangement position the point reached by advancing a predetermined distance in that direction from a predetermined three-dimensional position, such as the center of the background content (or the distribution 3D image content). The composition unit 133 superimposes (composites) the object content C31 onto the current distribution 3D image content G31 to generate updated distribution 3D image content (step S230). The object content C31 is superimposed on the image of the distribution 3D image content G31 so as to be placed at the obtained three-dimensional arrangement position.
 The composition unit 133 corrects the arrangement position of the object content C31 in the updated distribution 3D image content to match the arrangement position attribute "ground" set in the attribute information; that is, it corrects the arrangement position so that the lower end of the object content C31 coincides with the ground level (step S235). The content transmission unit 134 extracts from the newly generated distribution 3D image content the region of 3D image content corresponding to the user direction indicated by the latest viewing direction information (step S265) and transmits it to the terminal device 3 (step S270).
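 The direction-based placement used in this example is a small geometric step: advance a predetermined distance from a fixed viewpoint in the direction the user faces, then let step S235 snap the height. A sketch, where the 3.0 m distance is an assumed value standing in for the "predetermined distance":

```python
import math

def placement_from_direction(center, user_yaw, distance=3.0):
    """Advance a predetermined distance from the viewpoint in the user's
    facing direction; the height is corrected afterwards in step S235."""
    cx, cy, cz = center
    return (cx + distance * math.cos(user_yaw),
            cy + distance * math.sin(user_yaw),
            cz)
```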
 The user then captures the cloud A card (medium 5) with the imaging unit 34 of the terminal device 3 and acquires the object ID "10008" from the captured image data (step S125: YES, step S130). The transmission unit 324 of the terminal device 3 transmits the acquired object ID and arrangement position information indicating the user direction to the content providing device 1 (steps S135 to S140).
 The composition unit 133 of the content providing device 1 reads the object content C38 and attribute information corresponding to the received object ID "10008" (step S220: YES, step S225). The composition unit 133 takes as the arrangement position the point reached by advancing a predetermined distance, in the direction indicated by the received arrangement position information, from a predetermined three-dimensional position such as the center of the distribution 3D image content. The composition unit 133 superimposes (composites) the object content C38 onto the current distribution 3D image content to generate the distribution 3D image content G32 (step S230). The object content C38 is superimposed on the image of the current distribution 3D image content so as to be placed at the obtained three-dimensional arrangement position.
 The composition unit 133 corrects the arrangement position of the object content C38 in the distribution 3D image content G32 to match the arrangement position attribute "sky (height W above the ground)" set in the attribute information (step S235). The content transmission unit 134 extracts from the distribution 3D image content G32 the region of 3D image content corresponding to the user direction indicated by the latest viewing direction information (step S265) and transmits it to the terminal device 3 (step S270). The display control unit 326 of the terminal device 3 displays the received 3D image content on the display unit 36, as shown in FIG. 13(b) (steps S115 to S120).
 Similarly, as the terminal device 3 reads the frog (mother), pigeon, and cloud B cards, the content providing device 1 generates the distribution 3D image content G33 in which the background content B2 and the object contents C31, C32, C34, C38, and C39 are superimposed. The content providing device 1 transmits to the terminal device 3 the 3D image content extracted from the distribution 3D image content G33 in the range corresponding to the user direction. The display control unit 326 of the terminal device 3 displays the 3D image content shown in FIG. 13(c) on the display unit 36 (steps S115 to S120).
 When the user turns their head sideways while viewing the distribution 3D image content on the HMD 2, the terminal device 3 transmits the newly acquired user direction to the content providing device 1. The content providing device 1 extracts from the distribution 3D image content G33 the region of 3D image content corresponding to the newly received user direction and transmits it to the terminal device 3. The display control unit 326 of the terminal device 3 displays the received 3D image content on the display unit 36. The user can thus view the 3D image content as seen when looking in a different direction.
 When the composition unit 133 of the content providing device 1 detects a predetermined event, it may process the distribution 3D image content according to that event.
 Events include, for example, receiving a predetermined combination of object IDs, or predetermined object contents being placed in a predetermined arrangement. An event may additionally take into account the order in which the object IDs were received or the type of background content.
 Processing according to an event may include superimposing new object content on the distribution 3D image content, replacing object content already superimposed on the distribution 3D image content with other object content, changing the background content to other background content, or changing display-related attributes (such as brightness or color tone) of the distribution 3D image content. One simple realization is a rule table, sketched after the concrete examples below.
 For example, the composition unit 133 of the content providing device 1 detects as an event that the "frog (mother)" object content and the "frog (father)" object content have been placed within a predetermined distance of each other, and adds a "heart mark" object content to the distribution 3D image content, placed between the two.
 As another example, the composition unit 133 detects as an event that the object ID for "sprout" was received from the terminal device 3 and the object ID for "rain" was received afterwards, and replaces the sprout object content superimposed on the distribution 3D image content with moving-image object content in which the sprout grows and flowers bloom.
 As another example, the composition unit 133 detects as an event that the object IDs for "Momotaro", "dog", "pheasant", and "monkey" have been received, and changes the background content superimposed on the distribution 3D image content to the background content of the story in which the party heads for the demons' island.
 As yet another example, the composition unit 133 detects as an event that the room background content is in use and that the object IDs for a curtain and the moon have been received, and processes the distribution 3D image content to darken its brightness.
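 These examples fit the rule table mentioned above: each event pairs a condition over the received IDs and scene state with a processing action on the delivery content. The sketch below is one hypothetical encoding of two of the examples; the state keys, ID strings, and action details are all assumptions.

```python
EVENT_RULES = [
    {   # a predetermined combination of object IDs was received
        "condition": lambda state, scene:
            {"momotaro", "dog", "pheasant", "monkey"} <= state["received_ids"],
        "action": lambda scene:
            scene.update(background="journey to the demons' island"),
    },
    {   # room background in use, curtain and moon IDs received
        "condition": lambda state, scene:
            scene.get("background") == "room"
            and {"curtain", "moon"} <= state["received_ids"],
        "action": lambda scene:
            scene.update(brightness=scene.get("brightness", 0) - 1),
    },
]

def process_events(state, scene):
    """Apply the processing of every detected event to the delivery content."""
    for rule in EVENT_RULES:
        if rule["condition"](state, scene):
            rule["action"](scene)
```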
 According to the embodiment described above, the user can view 3D image content into which images of objects the user selected have been composited, on a highly immersive HMD.
 The content providing device 1 and the terminal device 3 described above each contain a computer system. The operational procedures of the content providing device 1 and the terminal device 3 are stored in the form of a program on a computer-readable recording medium, and the above processing is performed by the computer system reading and executing this program. The computer system here includes a CPU, various memories, an OS, and hardware such as peripheral devices.
 The "computer system" also includes a homepage providing environment (or display environment) when a WWW system is used.
 The "computer-readable recording medium" refers to a portable medium such as a flexible disk, magneto-optical disk, ROM, or CD-ROM, or a storage device such as a hard disk built into the computer system. The "computer-readable recording medium" further includes media that hold the program dynamically for a short time, such as a communication line used when the program is transmitted over a network such as the Internet or a communication line such as a telephone line, and media that hold the program for a certain period, such as the volatile memory inside a computer system serving as the server or client in that case. The program may implement part of the functions described above, or may implement them in combination with a program already recorded in the computer system.
 Embodiments of the present invention have been described above in detail with reference to the drawings; however, the specific configuration is not limited to these embodiments, and designs and the like within a scope not departing from the gist of the invention are also included.
1 Content providing device
2 HMD (head-mounted display)
3 Terminal device
5 Medium
9 Network
11 Storage unit
12 Communication unit
13 Processing unit
131 Viewing direction information receiving unit
132 Object information receiving unit
133 Composition unit
134 Content transmission unit
135 Correction instruction receiving unit
136 Deletion instruction receiving unit
31 Input unit
32 Processing unit
33 Detection unit
34 Imaging unit
35 Communication unit
36 Display unit
321 Viewing direction information acquisition unit
322 Object identification acquisition unit
323 Arrangement position information acquisition unit
324 Transmission unit
325 Content receiving unit
326 Display control unit

Claims (10)

  1. A content providing system comprising a terminal device and a content providing device that provides three-dimensional image content to the terminal device, wherein
     the terminal device comprises:
     a transmission unit that transmits, to the content providing device, object identification information identifying an object and arrangement position information from which a three-dimensional position at which an image of the object is to be arranged can be obtained; and
     a display control unit that displays three-dimensional image content received from the content providing device in response to the transmitted object identification information and arrangement position information, and
     the content providing device comprises:
     a storage unit that stores the object identification information in association with object content, which is three-dimensional image content of the object identified by the object identification information;
     an object information receiving unit that receives the object identification information and the arrangement position information from the terminal device;
     a composition unit that generates three-dimensional image content for distribution by compositing the object content corresponding to the received object identification information onto background content, which is three-dimensional image content of a background, so that the object content is arranged at the three-dimensional position obtained from the received arrangement position information; and
     a content transmission unit that transmits the three-dimensional image content for distribution generated by the composition unit to the terminal device.
  2. A content providing device comprising:
     a storage unit that stores object identification information in association with object content, which is three-dimensional image content of an object identified by the object identification information;
     an object information receiving unit that receives, from a terminal device, the object identification information and arrangement position information from which a three-dimensional position at which an image of the object is to be arranged can be obtained;
     a composition unit that generates three-dimensional image content for distribution by compositing the object content corresponding to the received object identification information onto background content, which is three-dimensional image content of a background, so that the object content is arranged at the three-dimensional position obtained from the received arrangement position information; and
     a content transmission unit that transmits the three-dimensional image content for distribution generated by the composition unit to the terminal device.
  3. The content providing device according to claim 2, further comprising a correction instruction receiving unit that receives, from the terminal device, a correction instruction instructing correction of the arrangement position or orientation of the object,
     wherein the composition unit corrects the position or orientation at which the object content is arranged in the three-dimensional image content for distribution according to the received correction instruction.
  4. The content providing device according to claim 2 or 3, wherein
     the storage unit further stores attribute information indicating an attribute of the object in association with the object identification information, and
     the composition unit corrects the three-dimensional position obtained from the arrangement position information according to the attribute information corresponding to the received object identification information, and generates the three-dimensional image content for distribution by compositing the object content onto the background content so that the object content is arranged at the corrected position.
  5. The content providing device according to claim 4, wherein the composition unit corrects the orientation of the object content arranged in the three-dimensional image content for distribution according to the attribute information corresponding to the received object identification information.
  6. The content providing device according to claim 4, wherein the composition unit corrects an attribute relating to display of the three-dimensional image content for distribution according to the attribute information corresponding to the received object identification information.
  7. The content providing device according to claim 5, wherein the composition unit corrects an attribute relating to display of the three-dimensional image content for distribution according to the attribute information corresponding to the received object identification information.
  8. The content providing device according to claim 2, further comprising a viewing direction information receiving unit that receives, from the terminal device, viewing direction information indicating a user direction, or a user direction and position,
     wherein the content transmission unit extracts, from the three-dimensional image content for distribution, three-dimensional image content of a region corresponding to the user direction, or the user direction and position, indicated by the viewing direction information, and transmits it to the terminal device.
  9. The content providing device according to claim 2, wherein, when a predetermined combination of object identification information is received, the composition unit processes the three-dimensional image content for distribution according to the combination.
  10. A content providing method executed by a content providing device, comprising:
     an object information receiving step in which an object information receiving unit receives, from a terminal device, object identification information identifying an object and arrangement position information from which a three-dimensional position at which an image of the object is to be arranged can be obtained;
     a composition step in which a composition unit generates three-dimensional image content for distribution by compositing object content, which is three-dimensional image content of the object identified by the received object identification information, onto background content, which is three-dimensional image content of a background, so that the object content is arranged at the three-dimensional position obtained from the received arrangement position information; and
     a content transmission step in which a content transmission unit transmits the three-dimensional image content for distribution generated in the composition step to the terminal device.
PCT/JP2016/062523 2015-05-08 2016-04-20 Content provision system, content provision device, and content provision method WO2016181780A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2015-095614 2015-05-08
JP2015095614A JP6582526B2 (en) 2015-05-08 2015-05-08 Content providing system, content providing apparatus, and content providing method

Publications (1)

Publication Number Publication Date
WO2016181780A1 true WO2016181780A1 (en) 2016-11-17

Family

ID=57249032

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2016/062523 WO2016181780A1 (en) 2015-05-08 2016-04-20 Content provision system, content provision device, and content provision method

Country Status (3)

Country Link
JP (1) JP6582526B2 (en)
TW (1) TW201706963A (en)
WO (1) WO2016181780A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP7093996B2 (en) * 2018-04-30 2022-07-01 日本絨氈株式会社 Interior proposal system using virtual reality system

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2004341642A (en) * 2003-05-13 2004-12-02 Nippon Telegr & Teleph Corp <Ntt> Image compositing and display method, image compositing and display program, and recording medium with the image compositing and display program recorded
JP2010218107A (en) * 2009-03-16 2010-09-30 Toppan Printing Co Ltd Panorama vr file providing apparatus, program, method, and system
JP2014109802A (en) * 2012-11-30 2014-06-12 Casio Comput Co Ltd Image processor, image processing method and program

Also Published As

Publication number Publication date
JP6582526B2 (en) 2019-10-02
TW201706963A (en) 2017-02-16
JP2016212621A (en) 2016-12-15

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 16792505; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 16792505; Country of ref document: EP; Kind code of ref document: A1)