WO2016181780A1 - Content providing system, content providing apparatus, and content providing method - Google Patents

Content providing system, content providing apparatus, and content providing method

Info

Publication number
WO2016181780A1
Authority
WO
WIPO (PCT)
Prior art keywords
content
unit
information
image
image content
Application number
PCT/JP2016/062523
Other languages
English (en)
French (fr)
Japanese (ja)
Inventor
正行 中里
平 張
友治 佐藤
Original Assignee
凸版印刷株式会社
Application filed by 凸版印刷株式会社


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 19/00: Manipulating 3D models or images for computer graphics

Definitions

  • The present invention relates to a content providing system, a content providing apparatus, and a content providing method.
  • A technique for displaying content in 3D (three dimensions) using the parallax between a right-eye image and a left-eye image is widely known.
  • Also known is a technique for reading a two-dimensional barcode attached to a (game) card or magazine with a terminal such as a smartphone, and downloading and viewing content associated with the two-dimensional barcode (for example, see Patent Document 1).
  • The present invention has been made in view of the above circumstances, and provides a content providing system, a content providing apparatus, and a content providing method capable of providing three-dimensional image content in which an image of an object selected by a user is synthesized.
  • A first aspect of the present invention is a content providing system including a terminal device and a content providing apparatus that provides three-dimensional image content to the terminal device. The terminal device includes: a transmission unit that transmits, to the content providing apparatus, object identification information that identifies an object and arrangement position information from which a three-dimensional position at which an image of the object is to be arranged can be acquired; and a display control unit that displays the three-dimensional image content received from the content providing apparatus in response to the transmitted object identification information and arrangement position information. The content providing apparatus includes: a storage unit that stores the object identification information and object content, which is the three-dimensional image content of the object identified by the object identification information; an object information receiving unit that receives the object identification information and the arrangement position information from the terminal device; a synthesizing unit that generates three-dimensional image content for distribution by synthesizing the object content corresponding to the received object identification information with background content, which is three-dimensional image content of a background, so that the object content is arranged at the three-dimensional position obtained from the arrangement position information; and a content transmitting unit that transmits the generated three-dimensional image content for distribution to the terminal device.
  • A second aspect of the present invention is a content providing apparatus including: a storage unit that stores object identification information and object content, which is the three-dimensional image content of the object identified by the object identification information; an object information receiving unit that receives, from a terminal device, object identification information and arrangement position information from which a three-dimensional position at which an image of the object is to be arranged can be acquired; a synthesizing unit that generates three-dimensional image content for distribution by synthesizing the object content corresponding to the received object identification information with background content, which is three-dimensional image content of a background, so that the object content is arranged at the three-dimensional position obtained from the received arrangement position information; and a content transmitting unit that transmits the three-dimensional image content for distribution generated by the synthesizing unit to the terminal device.
  • According to a third aspect, the content providing apparatus further includes a correction instruction receiving unit that receives, from the terminal device, a correction instruction instructing correction of the arrangement position or orientation of the object. The synthesizing unit may correct the position or orientation at which the object content is arranged in the three-dimensional image content for distribution according to the received correction instruction.
  • According to a fourth aspect, the storage unit further stores attribute information indicating an attribute of the object in association with the object identification information, and the synthesizing unit may correct the three-dimensional position obtained from the arrangement position information according to the attribute information corresponding to the received object identification information and generate the three-dimensional image content for distribution synthesized so that the object content is arranged at the corrected position.
  • According to a fifth aspect, the synthesizing unit may correct the orientation of the object content arranged in the three-dimensional image content for distribution according to the attribute information corresponding to the received object identification information.
  • A sixth aspect of the present invention is the content providing apparatus according to the fourth or fifth aspect, wherein the synthesizing unit may correct an attribute relating to display of the three-dimensional image content for distribution according to the attribute information corresponding to the received object identification information.
  • According to a seventh aspect, the content providing apparatus further includes a viewing direction information receiving unit that receives, from the terminal device, viewing direction information indicating the orientation, or the orientation and position, of the user. The content transmitting unit may extract, from the three-dimensional image content for distribution, the three-dimensional image content of a region corresponding to the orientation (or the orientation and position) of the user indicated by the viewing direction information, and transmit it to the terminal device.
  • According to an eighth aspect, when the synthesizing unit receives a predetermined combination of pieces of object identification information, it may process the three-dimensional image content for distribution according to the combination.
  • A ninth aspect of the present invention is a content providing method executed by the content providing apparatus, including: an object information receiving step in which the object information receiving unit receives, from the terminal device, object identification information identifying an object and arrangement position information from which a three-dimensional position at which an image of the object is to be arranged can be acquired; a synthesizing step of generating three-dimensional image content for distribution by synthesizing the object content, which is the three-dimensional image content of the object identified by the received object identification information, with background content, which is three-dimensional image content of a background, so that the object content is arranged at the three-dimensional position obtained from the received arrangement position information; and a content transmitting step of transmitting the three-dimensional image content for distribution generated in the synthesizing step to the terminal device.
  • FIG. 1 is a configuration diagram of a content providing system according to an embodiment of the present invention. FIG. 2 is a functional block diagram of the content providing apparatus according to the embodiment. FIG. 3 is an external view of an HMD (head mounted display) according to the embodiment. FIG. 4 is a functional block diagram of the terminal device according to the embodiment. FIG. 5 is a diagram showing object content management information according to the embodiment. FIG. 6 is a flowchart of content request processing in the terminal device according to the embodiment. FIG. 7 is a flowchart of content providing processing in the content providing apparatus according to the embodiment. FIG. 8 is a diagram showing a situation in which a user uses the HMD according to the embodiment.
  • FIG. 1 is a configuration diagram of a content providing system according to an embodiment of the present invention.
  • The content providing system includes a content providing apparatus 1, a head mounted display (hereinafter referred to as "HMD") 2, and a medium 5. In the figure, only one HMD 2 and one medium 5 are shown, but in practice there are a plurality of each. A plurality of content providing apparatuses 1 may also be provided.
  • The content providing apparatus 1 distributes three-dimensional image content. The three-dimensional image content is content data that displays an image such as a moving image or a still image in three dimensions. In the three-dimensional image content, the right-eye image is displayed on the right half of the display screen and the left-eye image is displayed on the left half. A minimal sketch of this side-by-side composition follows.
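  • The sketch below illustrates the side-by-side layout described above, using Pillow; it assumes both eye images have the same size, and the function name is illustrative:

```python
from PIL import Image

def make_side_by_side(left_eye: Image.Image, right_eye: Image.Image) -> Image.Image:
    """Place the left-eye image on the left half of the frame and the
    right-eye image on the right half, as the display expects."""
    w, h = left_eye.size            # assumes right_eye has the same size
    frame = Image.new("RGB", (w * 2, h))
    frame.paste(left_eye, (0, 0))   # left half: left-eye image
    frame.paste(right_eye, (w, 0))  # right half: right-eye image
    return frame
```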
  • The HMD 2 is an example of a stereoscopic viewing device used for viewing three-dimensional image content. The HMD 2 uses the terminal device 3 as its display device. Alternatively, the stereoscopic viewing device may be glasses used for viewing three-dimensional image content displayed on the terminal device 3.
  • The terminal device 3 is a computer terminal having a communication function, such as a smartphone or a tablet terminal, and displays the three-dimensional image content distributed from the content providing apparatus 1 on a display. The content providing apparatus 1 and the terminal device 3 are connected via a network 9 such as the Internet.
  • An object ID (object identification information) is information that identifies an object.
  • The background content is three-dimensional image content of a moving image or still image serving as a background. For example, the background is a room, and the objects are a chair, a table, lighting, a curtain, a painting, or the like installed in the room. Alternatively, the background is a place where a game or story unfolds, and the object is a character appearing in the game or story.
  • An object may be not only a target whose image is actually displayed, but also a factor that changes an attribute related to image display. For example, a sun object brightens the image.
  • The medium 5 is, for example, a card, tag, sticker, or marker on which a character string or an image (for example, a two-dimensional barcode) representing an object ID is printed. When a card or the like is used as the medium 5, a photograph or picture of the object, the object name, a description of the object, or the like is printed or written on it so that the user can easily recognize which object ID can be obtained from the medium 5. A card, tag, sticker, or marker serving as the medium 5 may also be affixed to an actual object (for example, furniture) identified by the object ID obtained from the medium 5.
  • Alternatively, the medium 5 may be an RFID (Radio Frequency Identification) tag storing an object ID. The RFID tag storing the object ID is attached to, for example, a card or a real object.
  • FIG. 2 is a functional block diagram showing the configuration of the content providing apparatus 1; only the functional blocks related to the present embodiment are shown. The content providing apparatus 1 is realized by one or more computer servers, and includes a storage unit 11, a communication unit 12, and a processing unit 13.
  • The storage unit 11 stores various kinds of information such as background content and object content management information. The object content management information is data in which an object ID, object content, and attribute information of an object are associated with each other. The object content is three-dimensional image content of a moving image or still image of the object. The attribute information indicates an attribute related to the arrangement position of the object, or an attribute related to image display (that is, processing to be applied).
  • The communication unit 12 transmits and receives data to and from other devices via the network 9.
  • The processing unit 13 includes a viewing direction information receiving unit 131, an object information receiving unit 132, a synthesizing unit 133, a content transmitting unit 134, a correction instruction receiving unit 135, and a deletion instruction receiving unit 136.
  • The viewing direction information receiving unit 131 receives, from the terminal device 3, viewing direction information indicating the viewing direction of the user, that is, the direction in which the user is facing. The viewing direction information may include information on the current position of the user. The current position may be a geographical absolute position, or a relative position based on a predetermined position in the image of the background content.
  • The object information receiving unit 132 receives, from the terminal device 3, an object ID and arrangement position information from which a three-dimensional position at which an image of the object is to be arranged can be acquired. The arrangement position information may be set to a geographical absolute position, or to a relative position or direction based on a predetermined position in the image of the background content.
  • The synthesizing unit 133 synthesizes the object content corresponding to the received object ID with the background content so that the object content is arranged at the three-dimensional position acquired from the received arrangement position information, thereby generating three-dimensional image content for distribution. When the arrangement position information indicates a geographical absolute position, the synthesizing unit 133 converts the absolute position into a position in the image of the background content. A simplified sketch of this synthesis follows.
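  • As a simplified illustration, the three-dimensional image content for distribution can be modeled as a background plus a list of placed objects; all class and field names below are assumptions, not terms from the patent:

```python
from dataclasses import dataclass, field

@dataclass
class PlacedObject:
    object_id: str
    position: tuple[float, float, float]  # 3D arrangement position
    yaw_deg: float = 0.0                  # orientation around the vertical axis

@dataclass
class DistributionContent:
    background_id: str
    objects: list[PlacedObject] = field(default_factory=list)

def synthesize(content: DistributionContent, object_id: str,
               position: tuple[float, float, float]) -> None:
    """Arrange the object content at the 3D position obtained from the
    arrangement position information (geographic absolute positions are
    assumed to have been converted to background-image coordinates)."""
    content.objects.append(PlacedObject(object_id, position))
```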
  • The content transmitting unit 134 transmits, to the terminal device 3, the three-dimensional image content extracted, based on the viewing direction information, from the three-dimensional image content for distribution generated by the synthesizing unit 133.
  • The correction instruction receiving unit 135 receives a correction instruction instructing correction of the arrangement position or orientation of object content superimposed on the three-dimensional image content for distribution. The synthesizing unit 133 corrects the position or orientation at which the object content is arranged in the three-dimensional image content for distribution according to the received correction instruction.
  • The deletion instruction receiving unit 136 receives a deletion instruction instructing deletion of object content superimposed on the three-dimensional image content for distribution. The synthesizing unit 133 deletes the object content designated by the deletion instruction from the three-dimensional image content for distribution.
  • FIG. 3 is an external view of the HMD 2.
  • The figure shows an example of the HMD 2 when a smartphone is used as the terminal device 3. The right-eye lens 21 of the HMD 2 is a lens for viewing the right-eye image displayed on the right side of the display by the terminal device 3, and the left-eye lens 22 is a lens for viewing the left-eye image displayed on the left side of the display. A partition 23 is attached to the HMD 2 so that the left-eye image cannot be seen through the right-eye lens 21 and the right-eye image cannot be seen through the left-eye lens 22. The user sets the terminal device 3 so that the edge of the partition 23 opposite the lenses lies near the boundary between the right-eye image and the left-eye image displayed on the screen of the terminal device 3. The user then views the three-dimensional image content displayed by the terminal device 3 through the right-eye lens 21 and the left-eye lens 22.
  • FIG. 4 is a functional block diagram showing the configuration of the terminal device 3; only the functional blocks related to the present embodiment are shown. The terminal device 3 is, for example, a smartphone, but may be a tablet terminal, a mobile phone terminal, a portable personal computer, or the like. The terminal device 3 includes an input unit 31, a processing unit 32, a detection unit 33, an imaging unit 34, a communication unit 35, and a display unit 36. When the medium 5 is an RFID tag, the terminal device 3 further includes a tag reader.
  • The input unit 31 receives information input by a user operation.
  • The processing unit 32 includes a viewing direction information acquisition unit 321, an object identification acquisition unit 322, an arrangement position information acquisition unit 323, a transmission unit 324, a content receiving unit 325, and a display control unit 326.
  • The viewing direction information acquisition unit 321 acquires viewing direction information. The object identification acquisition unit 322 acquires the object ID recorded on the medium 5. The arrangement position information acquisition unit 323 acquires arrangement position information representing a three-dimensional position at which an object image is to be arranged.
  • The transmission unit 324 transmits various kinds of information, such as the viewing direction information, the object ID, and the arrangement position information, to the content providing apparatus 1. The transmission unit 324 also transmits correction instructions and deletion instructions to the content providing apparatus 1 based on user operations input via the input unit 31.
  • The content receiving unit 325 receives three-dimensional image content from the content providing apparatus 1. The display control unit 326 displays various data, such as the three-dimensional image content, on the display unit 36.
  • The detection unit 33 is a sensor that detects orientation. The detection unit 33 may include a GPS (Global Positioning System) receiver that obtains the current position.
  • The imaging unit 34 is a camera. The communication unit 35 transmits and receives information via the network 9.
  • The display unit 36 is a display that displays data. When the terminal device 3 is a smartphone, the display unit 36 is a touch panel, and the input unit 31 is a sensor arranged on the touch panel.
  • FIG. 5 is a diagram showing a data configuration example of the object content management information.
  • The object content management information shown in the figure is information in which an object ID, an object name, object content, and attribute information are associated with each other. Only one of the object content and the attribute information may be set for one object ID.
  • The attribute information includes information on an arrangement position attribute and an image processing attribute. The arrangement position attribute indicates an attribute related to the position where the object is arranged. The image processing attribute indicates processing that affects an attribute related to image display. Only one of the arrangement position attribute and the image processing attribute may be set in the attribute information. A sketch of this record structure follows.
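  • As an illustration only, one record of this management information could be modeled as follows; the field names and content paths are assumptions, while the IDs, names, and attribute values are taken from the examples in FIG. 9:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ObjectRecord:
    object_id: str                        # e.g. "00001"
    name: str                             # e.g. "chair"
    content_path: Optional[str]           # 3D image content of the object
    placement_attr: Optional[str]         # e.g. "floor", "wall", "ceiling", "window"
    image_processing_attr: Optional[str]  # e.g. "one step brighter"

# The management information itself can then be a lookup keyed by object ID.
object_content_management = {
    "00001": ObjectRecord("00001", "chair", "chair.obj", "floor", None),
    "00004": ObjectRecord("00004", "lighting 1", "light1.obj", "ceiling",
                          "one step brighter"),
}
```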
  • FIG. 6 is a flowchart of content request processing in the terminal device 3.
  • First, the viewing direction information acquisition unit 321 acquires viewing direction information (step S105). The viewing direction information acquisition unit 321 acquires the orientation and the current position from the detection unit 33 and uses them as the viewing direction information.
  • Instead of acquiring the current position from the GPS receiver of the detection unit 33, the viewing direction information acquisition unit 321 may receive position information by short-range communication from a communication device provided on a wall or floor of the building where the user is, or on a signboard or fixture installed where the user is, and use it as the current position information.
  • Examples of the short-range communication include WiFi (registered trademark), Bluetooth (registered trademark), visible light communication, infrared communication, and NFC (Near Field Communication). Alternatively, the imaging unit 34 may capture characters or images (two-dimensional barcodes or the like) indicating position information drawn on a poster, marker, or the like, and the viewing direction information acquisition unit 321 may acquire the position information from the captured image data and use it as the current position information.
  • The transmission unit 324 transmits the viewing direction information acquired by the viewing direction information acquisition unit 321 to the content providing apparatus 1 (step S110). The content receiving unit 325 receives three-dimensional image content from the content providing apparatus 1 (step S115), and the display control unit 326 displays the received three-dimensional image content on the display unit 36 (step S120).
  • If an instruction to acquire an object ID has not been input (step S125: NO), the processing unit 32 performs the process of step S145 described later. If an instruction to acquire an object ID has been input (step S125: YES), the object identification acquisition unit 322 acquires the object ID recorded on the medium 5 (step S130).
  • For example, the imaging unit 34 captures the characters or image representing the object ID printed on the medium 5, and the object identification acquisition unit 322 acquires the object ID from the captured image data. When the medium 5 is an RFID tag, the object identification acquisition unit 322 acquires the object ID read from the medium 5 by the tag reader. When the medium 5 is a communication device, the object identification acquisition unit 322 receives the object ID transmitted from the medium 5 by short-range communication.
  • Subsequently, the arrangement position information acquisition unit 323 acquires arrangement position information (step S135). For example, the arrangement position information acquisition unit 323 acquires the current position from the GPS receiver of the detection unit 33 and uses it as the arrangement position information. Alternatively, the arrangement position information acquisition unit 323 may use the user's orientation acquired from the detection unit 33 as the arrangement position information. When the arrangement position information indicates the orientation of the user, a position advanced by a predetermined distance in that direction from a predetermined three-dimensional position in the image of the background content (or the three-dimensional image content for distribution) is used as the arrangement position.
  • The arrangement position information acquisition unit 323 may also receive position information by short-range communication from a communication device provided on a wall or floor of the building where the user is, or on a signboard or fixture installed where the user is, and use it as the arrangement position information. Alternatively, the imaging unit 34 may capture characters or an image indicating position information drawn on a poster or marker attached to a wall or floor of the building where the user is, or on a signboard or fixture installed where the user is, and the arrangement position information acquisition unit 323 may acquire the position information from the captured image data and use it as the arrangement position information.
  • The transmission unit 324 transmits the object ID acquired by the object identification acquisition unit 322 and the arrangement position information acquired by the arrangement position information acquisition unit 323 to the content providing apparatus 1 (step S140). Then, the processing unit 32 of the terminal device 3 repeats the processing from step S115.
  • In step S145, the transmission unit 324 determines whether an instruction to correct the position or orientation of an object has been input. If no correction instruction has been input (step S145: NO), the transmission unit 324 performs the process of step S155 described later.
  • If a correction instruction has been input (step S145: YES), the transmission unit 324 transmits the input correction instruction to the content providing apparatus 1 (step S150). For example, when the user wants to move the position of an object image, the user touches the object image displayed on the touch panel with a finger and moves it in the desired direction (a drag operation). The transmission unit 324 transmits to the content providing apparatus 1 a correction instruction in which the object ID of the object whose image is displayed at the touched position on the screen, the movement direction, and the movement amount are set. The movement amount is information corresponding to the distance the finger moved while touching the touch panel.
  • Alternatively, the user may shake his or her head in the desired direction while wearing the HMD 2. In this case, a sensor included in the detection unit 33 detects the direction in which the head was shaken, and the transmission unit 324 transmits to the content providing apparatus 1 a correction instruction in which the detected direction is set as the movement direction. The transmission unit 324 may also set in the correction instruction a movement amount corresponding to the speed or distance at which the head was shaken.
  • Alternatively, the terminal device 3 may be provided with a sensor that detects the user's line of sight, and the direction in which the user moves the line of sight may be detected. The transmission unit 324 transmits to the content providing apparatus 1 a correction instruction in which the object ID of the object whose image is displayed at the position where the user's line of sight rested and the detected direction as the movement direction are set. The transmission unit 324 may also set in the correction instruction a movement amount corresponding to the distance the line of sight moved.
  • When the user wants to rotate the orientation of an object image, the user performs an operation such as tapping the object image displayed on the touch panel. The transmission unit 324 transmits to the content providing apparatus 1 a correction instruction in which the object ID of the object whose image is displayed at the tapped position and rotation are set. Then, the processing unit 32 of the terminal device 3 repeats the processing from step S115.
  • After determining NO in step S145, the transmission unit 324 determines whether an object deletion instruction has been input (step S155). If the transmission unit 324 determines that an object deletion instruction has been input (step S155: YES), the transmission unit 324 transmits the input deletion instruction to the content providing apparatus 1 (step S160). For example, the user touches the image of the object to be deleted displayed on the touch panel with a finger and flicks it toward the outside of the screen, or double-taps the image of the object to be deleted. The transmission unit 324 transmits to the content providing apparatus 1 a deletion instruction in which the object ID of the object on which such a deletion operation was performed is set. Then, the processing unit 32 of the terminal device 3 repeats the processing from step S115.
  • If the transmission unit 324 determines that no deletion instruction has been input (step S155: NO), the processing unit 32 determines whether an instruction to end processing has been input via the input unit 31 (step S165). If the processing unit 32 determines that an end instruction has not been input (step S165: NO), it repeats the processing from step S105. If the processing unit 32 determines that an end instruction has been input (step S165: YES), it ends the processing. A sketch of this terminal-side loop follows.
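  • The loop of FIG. 6 can be summarized as follows; every function called here is a stub standing in for the sensor, camera, touch, and network handling that the flowchart leaves abstract, so this is a sketch, not the disclosed implementation:

```python
def content_request_loop(server, terminal):
    """Terminal-side loop of FIG. 6 (steps S105-S165), simplified."""
    while not terminal.end_requested():                        # S165
        direction = terminal.get_viewing_direction()           # S105: orientation (+ position)
        server.send_viewing_direction(direction)               # S110
        terminal.display(server.receive_content())             # S115, S120
        if terminal.object_id_instruction():                   # S125
            object_id = terminal.read_object_id()              # S130: barcode/RFID/short-range
            placement = terminal.get_placement_info()          # S135
            server.send_object(object_id, placement)           # S140
        elif terminal.correction_entered():                    # S145: drag/tap on an object
            server.send_correction(terminal.read_correction()) # S150
        elif terminal.deletion_entered():                      # S155: flick/double tap
            server.send_deletion(terminal.read_deletion())     # S160
```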
  • FIG. 7 is a flowchart of content providing processing in the content providing apparatus 1.
  • First, the synthesizing unit 133 of the content providing apparatus 1 determines whether the viewing direction information receiving unit 131 has received viewing direction information from the terminal device 3 (step S205). If the synthesizing unit 133 determines that the viewing direction information receiving unit 131 has not received viewing direction information (step S205: NO), it performs the process of step S220 described later.
  • If the synthesizing unit 133 determines that the viewing direction information receiving unit 131 has received viewing direction information (step S205: YES), it determines whether the three-dimensional image content for distribution has already been generated (step S210). If it determines that the three-dimensional image content for distribution has been generated (step S210: YES), it performs the process of step S220 described later.
  • If the three-dimensional image content for distribution has not yet been generated (step S210: NO), the synthesizing unit 133 reads the background content from the storage unit 11 and sets it as the three-dimensional image content for distribution (step S215). Information specifying the background content may be further received from the terminal device 3; in that case, the synthesizing unit 133 reads the background content specified by the received information from the storage unit 11 and sets it as the three-dimensional image content for distribution.
  • Next, the synthesizing unit 133 determines whether the object information receiving unit 132 has received an object ID and arrangement position information (step S220). If the synthesizing unit 133 determines that the object information receiving unit 132 has not received an object ID and arrangement position information (step S220: NO), it performs the process of step S245 described later.
  • If the synthesizing unit 133 determines that the object information receiving unit 132 has received an object ID and arrangement position information (step S220: YES), it reads the object content and attribute information corresponding to the received object ID from the object content management information stored in the storage unit 11 (step S225).
  • The synthesizing unit 133 obtains a three-dimensional arrangement position in the three-dimensional image content for distribution from the received arrangement position information. When the arrangement position information indicates a geographical absolute position, the synthesizing unit 133 converts the position information into a position in the background image.
  • The synthesizing unit 133 superimposes (synthesizes) the read object content on the current three-dimensional image content for distribution so that it is arranged at the obtained arrangement position, generating updated three-dimensional image content for distribution (step S230). The object ID of the object content is added to the object content superimposed on the three-dimensional image content for distribution.
  • Next, the synthesizing unit 133 corrects the three-dimensional arrangement position of the object content arranged in step S230 according to the arrangement position attribute indicated by the read attribute information (step S235). Alternatively, the synthesizing unit 133 may correct the arrangement position indicated by the received arrangement position information according to the arrangement position attribute before synthesis; in that case, it superimposes (synthesizes) the object content on the current three-dimensional image content for distribution so that it is arranged at the corrected arrangement position, generating the updated three-dimensional image content for distribution. The synthesizing unit 133 may further correct the orientation of the object content according to the arrangement position attribute.
  • Next, the synthesizing unit 133 applies processing according to the image processing attribute indicated by the read attribute information to an attribute relating to the display of the generated three-dimensional image content for distribution (step S240).
  • When no object content is read in step S225, the synthesizing unit 133 does not perform the processes of steps S230 and S235. When no arrangement position attribute is set in the attribute information, the synthesizing unit 133 does not perform the process of step S235; when no image processing attribute is set in the attribute information, it does not perform the process of step S240.
  • After determining NO in step S220 or after the process of step S240, the synthesizing unit 133 determines whether the correction instruction receiving unit 135 has received a correction instruction from the terminal device 3 (step S245). If it determines that no correction instruction has been received (step S245: NO), the synthesizing unit 133 performs the process of step S255 described later.
  • If a correction instruction has been received (step S245: YES), the synthesizing unit 133 corrects the arrangement position or orientation of the object content in the three-dimensional image content for distribution according to the correction instruction (step S250). Specifically, the synthesizing unit 133 corrects the current arrangement position of the object content according to the movement direction and movement amount indicated by the correction instruction. Among the object contents superimposed on the three-dimensional image content for distribution, the synthesizing unit 133 moves the arrangement position only of the object content specified by the object ID set in the correction instruction. When no movement amount is set in the correction instruction, the synthesizing unit 133 moves the arrangement position by a fixed movement amount. When rotation is set in the correction instruction, the synthesizing unit 133 rotates, by a predetermined angle in a predetermined direction, the orientation of the object content specified by the object ID set in the correction instruction among the object contents superimposed on the three-dimensional image content for distribution.
  • After determining NO in step S245 or after the process of step S250, the synthesizing unit 133 determines whether the deletion instruction receiving unit 136 has received a deletion instruction from the terminal device 3 (step S255). If no deletion instruction has been received (step S255: NO), the synthesizing unit 133 performs the process of step S265 described later. If a deletion instruction has been received (step S255: YES), the synthesizing unit 133 deletes the object content specified by the object ID set in the deletion instruction from the object contents superimposed on the three-dimensional image content for distribution (step S260).
  • Next, the content transmitting unit 134 extracts, from the three-dimensional image content for distribution, the three-dimensional image content of the region (portion) corresponding to the orientation and position of the user indicated by the viewing direction information received from the terminal device 3 (step S265). When no user position is set in the viewing direction information, a predetermined position such as the center position of the background content is used. The content transmitting unit 134 transmits the extracted three-dimensional image content to the terminal device 3 (step S270). A sketch of this server-side flow follows.
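  • A matching sketch of one pass of the server-side flow of FIG. 7 (steps S205-S270); all helpers are stubs and the message shapes are assumptions, not the disclosed implementation:

```python
def place(dist, record, placement): pass           # S230: superimpose object (stub)
def apply_placement_attr(dist, record): pass       # S235: snap to floor/wall/... (stub)
def apply_image_processing(dist, record): pass     # S240: e.g. brighten (stub)
def correct_object(dist, correction): pass         # S250: move/rotate object (stub)
def delete_object(dist, object_id): pass           # S260: remove object (stub)
def extract_region(dist, viewing): pass            # S265: region for orientation (stub)

def content_providing_step(server, storage, state):
    """One iteration of FIG. 7; `state` holds the distribution content and
    the most recently received viewing direction information."""
    msg = server.poll_message()
    if msg.kind == "viewing_direction":                      # S205
        state.viewing = msg.payload
        if state.distribution is None:                       # S210
            state.distribution = storage.load_background()   # S215
    elif msg.kind == "object":                               # S220
        record = storage.lookup(msg.object_id)               # S225
        place(state.distribution, record, msg.placement)     # S230
        apply_placement_attr(state.distribution, record)     # S235
        apply_image_processing(state.distribution, record)   # S240
    elif msg.kind == "correction":                           # S245
        correct_object(state.distribution, msg.payload)      # S250
    elif msg.kind == "deletion":                             # S255
        delete_object(state.distribution, msg.object_id)     # S260
    server.send(extract_region(state.distribution, state.viewing))  # S265, S270
```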
  • Note that the imaging unit 34 of the terminal device 3 may continuously capture moving image data, and the object identification acquisition unit 322 may detect a character or image representing an object ID from the moving image data being captured and acquire the object ID from the detected information, even if the user does not input an object ID acquisition instruction through the input unit 31. In this case, the arrangement position information acquisition unit 323 also acquires the arrangement position information without a user instruction, and the transmission unit 324 transmits the object ID acquired by the object identification acquisition unit 322 and the arrangement position information acquired by the arrangement position information acquisition unit 323 to the content providing apparatus 1.
  • That is, while the imaging unit 34 continues capturing, the terminal device 3 repeats, without any user instruction, the process of detecting an object ID from the moving image data and transmitting the detected object ID and the arrangement position information at the time of detection to the content providing apparatus 1. Thus, without performing an operation each time, the user can view three-dimensional image content to which objects are successively added, simply by holding the medium 5 over the imaging unit 34 of the terminal device 3 or pointing the imaging unit 34 at the medium 5.
  • FIG. 8 is a diagram showing a situation in which the user uses the HMD 2.
  • The user is in a room R with the same floor plan as the room where the furniture is to be placed. A marker M on which position information is printed is attached to the floor or wall.
  • FIG. 9 is a diagram illustrating an example of setting object content management information.
  • In the object content management information, information is set on furniture and interior items such as chairs, curtains, and clocks, and on objects such as the sun that affect the appearance of the room.
  • For an object placed on the floor, the arrangement position attribute is "floor"; for an object attached to a window, "window"; for an object hung on a wall, "wall"; and for an object mounted on the ceiling, "ceiling".
  • For an object that affects the brightness of the image, such as lighting or the sun, how the image is to be processed is set in the image processing attribute.
  • The orientation of the object may also be derived from the arrangement position attribute. In the case of the arrangement position attribute "floor", the object content is arranged so that its orientation matches that of the room; in the case of "window", the object content is arranged parallel to the window; and in the case of "wall", parallel to the wall. The orientation of the object may also be set explicitly in the arrangement position attribute. A sketch of this attribute-based correction appears below.
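  • A minimal sketch of this attribute-based correction (cf. step S235), operating on the PlacedObject sketch shown earlier; the floor and ceiling heights, the wall yaw, and the assumption that the position denotes the lower end of the object are illustrative, not values from the patent:

```python
FLOOR_Z, CEILING_Z = 0.0, 2.4   # meters, assumed room geometry

def snap_to_attribute(obj, attr: str, obj_height: float,
                      wall_yaw_deg: float = 0.0) -> None:
    """Correct a placed object's position/orientation to match its
    arrangement position attribute ("floor", "ceiling", "wall", ...)."""
    x, y, z = obj.position
    if attr == "floor":                 # lower end on the floor
        obj.position = (x, y, FLOOR_Z)
    elif attr == "ceiling":             # upper end on the ceiling
        obj.position = (x, y, CEILING_Z - obj_height)
    elif attr in ("wall", "window"):    # rotate parallel to the wall/window
        obj.yaw_deg = wall_yaw_deg
```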
  • FIG. 10 is a diagram illustrating a generation example of the distribution 3D image content.
  • First, the viewing direction information acquisition unit 321 of the terminal device 3 used as the display device of the HMD 2 transmits the user's orientation and current position acquired by the detection unit 33 to the content providing apparatus 1 as viewing direction information (steps S105, S110).
  • When the viewing direction information receiving unit 131 of the content providing apparatus 1 receives the viewing direction information from the terminal device 3 (step S205: YES), the synthesizing unit 133 reads the background content B1 of the room R corresponding to the current position indicated by the viewing direction information from the storage unit 11, and sets the background content B1 as the three-dimensional image content for distribution G11, as shown in (a) of FIG. 10 (step S210: NO, step S215).
  • The background content B1 is three-dimensional image content generated by photographing the room R in advance. By designating a position and orientation, the three-dimensional image content as viewed from the designated position in the designated direction can be acquired from the background content B1. One possible realization of this is sketched below.
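  • The patent does not fix a storage format for the background content. As one illustrative assumption, if B1 were stored as an equirectangular panorama, a viewport for a designated orientation could be cut out by yaw angle as follows:

```python
from PIL import Image

def extract_viewport(panorama: Image.Image, yaw_deg: float,
                     fov_deg: float = 90.0) -> Image.Image:
    """Cut the horizontal window centered on `yaw_deg` out of an assumed
    equirectangular panorama (0..360 degrees across the image width)."""
    w, h = panorama.size
    view_w = int(w * fov_deg / 360.0)
    cx = int((yaw_deg % 360.0) / 360.0 * w)   # pixel column of viewport center
    left = (cx - view_w // 2) % w
    if left + view_w <= w:
        return panorama.crop((left, 0, left + view_w, h))
    # The viewport wraps around the panorama seam: stitch the two pieces.
    right_part = panorama.crop((left, 0, w, h))
    left_part = panorama.crop((0, 0, (left + view_w) % w, h))
    out = Image.new("RGB", (view_w, h))
    out.paste(right_part, (0, 0))
    out.paste(left_part, (w - left, 0))
    return out
```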
  • The content transmitting unit 134 extracts, from the three-dimensional image content for distribution G11, the three-dimensional image content of the region corresponding to the position and orientation of the user indicated by the received viewing direction information (step S265), and transmits it to the terminal device 3 (step S270). The display control unit 326 of the terminal device 3 displays the three-dimensional image content received by the content receiving unit 325 from the content providing apparatus 1 on the display unit 36 (steps S115 to S120). Thus, the user can view the three-dimensional image content of the room R as seen from the current position.
  • When the user moves, the viewing direction information acquisition unit 321 of the terminal device 3 transmits the newly acquired orientation and current position to the content providing apparatus 1 as viewing direction information. When the viewing direction information receiving unit 131 of the content providing apparatus 1 receives the new viewing direction information from the terminal device 3 (step S205), the content transmitting unit 134 extracts, from the already generated three-dimensional image content for distribution G11, the three-dimensional image content of the region corresponding to the newly indicated position and orientation (step S265) and transmits it to the terminal device 3 (step S270). The display control unit 326 of the terminal device 3 displays the received three-dimensional image content on the display unit 36 (steps S115 to S120). Thus, the user can view the three-dimensional image content of the room R as seen from the new position.
  • When the user moves to a place in the room R where furniture is to be installed, the user selects a card (medium 5) on which a photograph of the furniture and its object ID are printed, and inputs an imaging instruction via the input unit 31 of the terminal device 3 (step S125: YES). The object identification acquisition unit 322 acquires the object ID from the image data captured by the imaging unit 34 (step S130). The user also images, with the imaging unit 34 of the terminal device 3, the marker M at the place where the furniture is to be installed, and the arrangement position information acquisition unit 323 acquires the position information from the captured image data and uses it as the arrangement position information (step S135). The transmission unit 324 transmits the acquired object ID and arrangement position information to the content providing apparatus 1 (step S140).
  • When the object information receiving unit 132 of the content providing apparatus 1 receives the object ID and arrangement position information, the synthesizing unit 133 reads the object content and attribute information corresponding to the object ID (step S225). Here, it is assumed that the received object ID is "00001", and that the object content C11 of the object name "chair" and attribute information indicating the arrangement position attribute "floor" are read from the object content management information shown in FIG. 9.
  • The synthesizing unit 133 superimposes (synthesizes) the object content C11 on the current three-dimensional image content for distribution G11 to generate the three-dimensional image content for distribution G12 shown in (b) of FIG. 10 (step S230). The object content C11 is superimposed on the image of G11 so as to be arranged at the three-dimensional arrangement position indicated by the arrangement position information.
  • The synthesizing unit 133 then corrects the arrangement position of the object content C11 in the generated G12 to match the arrangement position attribute "floor" set in the attribute information. As initially arranged, the lower end of the object content C11 may be above the floor height in G12; the synthesizing unit 133 therefore corrects the arrangement position so that the lower end of the object content C11 coincides with the height of the floor (step S235). Furthermore, the synthesizing unit 133 may rotate the object content C11 so that the orientation of the room in the background content B1 (or in the three-dimensional image content for distribution G12) and the orientation of the object content C11 match.
  • The synthesizing unit 133 similarly corrects the arrangement position, including orientation, when object content with the arrangement position attribute "floor", such as a carpet, is superimposed. When object content with the arrangement position attribute "wall", such as a clock, is superimposed, the synthesizing unit 133 corrects the arrangement position of the object content to the position of a wall in the three-dimensional image content for distribution and rotates its orientation to be parallel to the wall.
  • The synthesizing unit 133 may generate the three-dimensional image content for distribution after correcting the arrangement position of the object content C11, or may generate three-dimensional image content for distribution of a moving image that moves the object content C11 from its initial arrangement position to the corrected arrangement position.
  • The content transmitting unit 134 extracts, from the three-dimensional image content for distribution G12 in which the arrangement position of the object content C11 has been corrected, the three-dimensional image content of the region corresponding to the position and orientation of the user indicated by the latest viewing direction information (step S265), and transmits it to the terminal device 3 (step S270).
  • When the user wants to move the position of the chair, the user moves the chair image displayed on the terminal device 3 in the desired direction while touching it with a finger (step S145: YES). The transmission unit 324 of the terminal device 3 transmits a correction instruction in which the chair's object ID, the movement direction, and the movement amount are set to the content providing apparatus 1 (step S150).
  • The synthesizing unit 133 of the content providing apparatus 1 acquires the object ID, movement direction, and movement amount from the correction instruction received by the correction instruction receiving unit 135, and corrects the arrangement position of the chair object content C11 in the three-dimensional image content for distribution G12 according to the correction instruction (step S245: YES, step S250). The content transmitting unit 134 extracts, from the corrected G12, the three-dimensional image content of the region corresponding to the position and orientation of the user indicated by the latest viewing direction information (step S265) and transmits it to the terminal device 3 (step S270). Thus, the user can view the three-dimensional image content with the chair moved within the room R. Since the position can be corrected in this way, the markers M need not be installed at narrow intervals in the room R.
  • Next, the user moves to a place in the room R where different furniture is to be installed. The terminal device 3 transmits the orientation and current position newly acquired upon the movement to the content providing apparatus 1. The content providing apparatus 1 extracts, from the three-dimensional image content for distribution G12, the three-dimensional image content of the region corresponding to the position and orientation indicated by the newly received viewing direction information, and transmits it to the terminal device 3. The display control unit 326 of the terminal device 3 displays the received three-dimensional image content on the display unit 36. Thus, the user can view the three-dimensional image content of the room R with the chair installed, as seen from the new position.
  • The user images, with the terminal device 3, the card of the new furniture to be installed and acquires its object ID (step S125: YES, step S130). The user also images the marker M at the place where the new furniture is to be installed, and the arrangement position information is acquired (step S135). The transmission unit 324 transmits the acquired object ID and arrangement position information to the content providing apparatus 1 (step S140).
  • The content providing apparatus 1 reads the object content and attribute information corresponding to the object ID received from the terminal device 3 (step S220: YES, step S225). Here, it is assumed that the received object ID is "00004", and that the object content C14 of the object name "lighting 1" and attribute information indicating the arrangement position attribute "ceiling" and the image processing attribute "one step brighter" are read from the object content management information shown in FIG. 9.
  • The synthesizing unit 133 superimposes (synthesizes) the object content C14 on the three-dimensional image content for distribution G12 to generate new three-dimensional image content for distribution (step S230). The object content C14 is superimposed on the image of G12 so as to be arranged at the three-dimensional arrangement position indicated by the arrangement position information. The three-dimensional image content for distribution generated here is three-dimensional image content in which the background content B1, the object content C11, and the object content C14 are superimposed.
  • The synthesizing unit 133 corrects the arrangement position of the object content C14 in the generated content to match the arrangement position attribute "ceiling" set in the attribute information. As initially arranged, the upper end of the object content C14 may be below or above the ceiling height; the synthesizing unit 133 therefore corrects the arrangement position so that the upper end of the object content C14 coincides with the height of the ceiling (step S235). The synthesizing unit 133 may also rotate the object content C14 so that the orientation of the room in the background content B1 (or in the three-dimensional image content for distribution) and the orientation of the object content C14 match.
  • The synthesizing unit 133 then applies the processing "one step brighter" indicated by the image processing attribute to the three-dimensional image content for distribution in which the position of the object content C14 has been corrected (step S240). A sketch of such a brightness step follows.
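  • A sketch of a "one step brighter" operation using Pillow's brightness enhancer; the per-step factor of 1.2 is an assumed value, as the patent does not specify one:

```python
from PIL import Image, ImageEnhance

def apply_brightness_step(frame: Image.Image, steps: int = 1) -> Image.Image:
    """Brighten (or darken, with negative steps) by a fixed factor per step."""
    factor = 1.2 ** steps   # assumed step size
    return ImageEnhance.Brightness(frame).enhance(factor)
```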
  • The content transmitting unit 134 extracts, from the processed three-dimensional image content for distribution, the three-dimensional image content of the region corresponding to the position and orientation of the user indicated by the latest viewing direction information (step S265), and transmits it to the terminal device 3 (step S270).
  • The terminal device 3 may also read and store object IDs and object names from media 5 in advance. In this case, the user selects the object name of the furniture to be installed in the room R from the object names displayed on the display unit 36, using the input unit 31, and the transmission unit 324 of the terminal device 3 transmits the object ID corresponding to the selected object name in step S140.
  • FIG. 11 is a diagram showing an example of the 3D image content for distribution when a plurality of furniture objects are installed.
  • In this example, three-dimensional image content for distribution is generated in which object contents C21 to C28 are superimposed on the background content B1, which is the three-dimensional image content of the room R. The user shown in the figure indicates the virtual position of the user in the three-dimensional image content for distribution. By viewing, with the HMD 2, the three-dimensional image content extracted from this three-dimensional image content for distribution, the user can experience an immersive feeling as if actually being in the room R in which the furniture of the object contents C21 to C28 is installed.
  • FIG. 12 is a diagram showing an example of setting object content management information.
  • In this example, information on characters that can appear on a game screen and on objects that affect the display of the screen is set in the object content management information. For an object that moves on the ground, the arrangement position attribute is "ground"; for an object that moves or floats in the sky, the arrangement position attribute is "sky". When the arrangement position attribute is "sky", a height from the ground may also be set. For an object that affects the brightness of the image, such as the sun or a cumulonimbus cloud, how an attribute related to image display is to be processed is set in the image processing attribute.
  • FIG. 13 is a diagram illustrating an example of 3D image content displayed by the terminal device 3.
  • First, the viewing direction information acquisition unit 321 of the terminal device 3 used as the display device of the HMD 2 transmits the user's orientation acquired by the detection unit 33 to the content providing apparatus 1 as viewing direction information (steps S105, S110). When the viewing direction information receiving unit 131 of the content providing apparatus 1 receives the viewing direction information from the terminal device 3 (step S205: YES), the synthesizing unit 133 reads the background content B2 from the storage unit 11 and sets the background content B2 as the three-dimensional image content for distribution G31 (step S210: NO, step S215).
  • The content transmitting unit 134 extracts, from G31, the three-dimensional image content of the region corresponding to the user's orientation indicated by the received viewing direction information (step S265) and transmits it to the terminal device 3 (step S270). Here, the user's position is a fixed position such as the center position of the space in the background content (or the three-dimensional image content for distribution). The display control unit 326 of the terminal device 3 displays the received three-dimensional image content on the display unit 36 (steps S115 to S120).
  • Next, the user images a card (medium 5) with the imaging unit 34 of the terminal device 3, and the object ID "10001" is acquired from the captured image data (step S125: YES, step S130).
  • The arrangement position information acquisition unit 323 of the terminal device 3 uses the user's orientation acquired by the detection unit 33 as the arrangement position information (step S135). The transmission unit 324 transmits the acquired object ID and arrangement position information to the content providing apparatus 1 (step S140).
  • The synthesizing unit 133 of the content providing apparatus 1 reads the object content C31 and attribute information corresponding to the received object ID "10001" (step S220: YES, step S225). The synthesizing unit 133 acquires the user's orientation from the received arrangement position information, and acquires, as the arrangement position, the position advanced by a predetermined distance in the indicated direction from a predetermined three-dimensional position such as the center position of the background content (or the three-dimensional image content for distribution). A small sketch of this computation follows.
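  • A sketch of deriving the arrangement position from the user's orientation: advance a predetermined distance from a predetermined center position in the facing direction. The 3-meter distance and the yaw convention (degrees counterclockwise from the x-axis, on the ground plane) are assumptions:

```python
import math

def placement_from_orientation(center: tuple[float, float, float],
                               yaw_deg: float,
                               distance: float = 3.0) -> tuple[float, float, float]:
    """Return the 3D position `distance` ahead of `center` in the
    direction the user is facing, keeping the same height."""
    x, y, z = center
    rad = math.radians(yaw_deg)
    return (x + distance * math.cos(rad), y + distance * math.sin(rad), z)
```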
  • The synthesizing unit 133 superimposes (synthesizes) the object content C31 on the current three-dimensional image content for distribution G31 to generate updated three-dimensional image content for distribution (step S230). The object content C31 is superimposed on the image of G31 so as to be arranged at the acquired three-dimensional arrangement position.
  • The synthesizing unit 133 corrects the arrangement position of the object content C31 in the updated content to match the arrangement position attribute "ground" set in the attribute information; that is, it corrects the arrangement position so that the lower end of the object content C31 coincides with the height of the ground (step S235). The content transmitting unit 134 extracts, from the newly generated three-dimensional image content for distribution, the three-dimensional image content of the region corresponding to the user's orientation indicated by the latest viewing direction information (step S265) and transmits it to the terminal device 3 (step S270).
  • The user captures the card of cloud A (medium 5) with the imaging unit 34 of the terminal device 3, and the object ID "10008" is acquired from the captured image data (step S125: YES, step S130).
  • The transmission unit 324 of the terminal device 3 transmits the acquired object ID and the arrangement position information indicating the user orientation to the content providing device 1 (steps S135 to S140).
  • The composition unit 133 of the content providing device 1 reads out the object content C38 and the attribute information corresponding to the received object ID "10008" (step S220: YES, step S225).
  • The composition unit 133 acquires the position reached by advancing a predetermined distance, in the direction indicated by the received arrangement position information, from a predetermined three-dimensional position such as the center of the distribution 3D image content.
  • The composition unit 133 superimposes (synthesizes) the object content C38 on the current distribution 3D image content to generate the distribution 3D image content G32 (step S230).
  • The object content C38 is superimposed on the image of the current distribution 3D image content so as to be arranged at the acquired three-dimensional arrangement position.
  • The composition unit 133 corrects the arrangement position of the object content C38 in the distribution 3D image content G32 so that it matches the arrangement position "sky (height W from the ground)" set in the attribute information (step S235).
  • The content transmission unit 134 extracts, from the distribution 3D image content G32, the 3D image content of the area corresponding to the user orientation indicated by the latest viewing direction information (step S265), and transmits it to the terminal device 3 (step S270).
  • The display control unit 326 of the terminal device 3 displays, on the display unit 36, the 3D image content that the content receiving unit 325 received from the content providing device 1 (steps S115 to S120).
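The corrections for "ground" and "sky (height W)" differ only in the target height, so they can share one helper. A minimal sketch; the attribute strings and the numeric value standing in for the height W are assumptions made for illustration.

```python
def correct_height(position: tuple, placement_attr: str,
                   ground: float = 0.0, sky_w: float = 8.0) -> tuple:
    """Snap an object's (x, y, z) position to the height named in its attribute information."""
    x, _, z = position
    if placement_attr == "ground":
        return (x, ground, z)  # lower end meets the ground, as for object content C31
    if placement_attr == "sky":
        return (x, sky_w, z)   # lifted to height W, as for the cloud object content C38
    return position            # no correction for other attributes
```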
  • In this way, the content providing device 1 generates the distribution 3D image content G33, in which the object contents C31, C32, C34, C38, and C39 are superimposed on the background content B2.
  • The content providing device 1 extracts the range corresponding to the user orientation from the distribution 3D image content G33 and transmits the resulting 3D image content to the terminal device 3.
  • The display control unit 326 of the terminal device 3 displays the 3D image content shown in (c) of FIG. 13 on the display unit 36 (steps S115 to S120).
  • When the user orientation changes, the terminal device 3 transmits the newly acquired user orientation information to the content providing device 1.
  • The content providing device 1 extracts, from the distribution 3D image content G33, the 3D image content of the area corresponding to the newly received user orientation, and transmits it to the terminal device 3.
  • The display control unit 326 of the terminal device 3 displays, on the display unit 36, the 3D image content that the content receiving unit 325 received from the content providing device 1. The user can thereby view the 3D image content as seen when looking in the new direction.
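The orientation-following behavior just described is a simple poll-send-display loop on the terminal. A minimal sketch, assuming an HTTP endpoint that returns the extracted region; the endpoint, the polling rate, and the callback signatures are illustrative, not specified in this disclosure.

```python
import time
import requests

def follow_orientation(read_yaw, display,
                       endpoint: str = "http://content-provider.example/view") -> None:
    """Send the user orientation whenever it changes and display the returned region."""
    last_yaw = None
    while True:
        yaw = read_yaw()              # e.g. read from the HMD's orientation sensor
        if yaw != last_yaw:           # the user turned; request a new region
            resp = requests.post(endpoint, json={"yaw_deg": yaw}, timeout=5)
            display(resp.content)     # hand the extracted 3D image content to the display
            last_yaw = yaw
        time.sleep(0.05)              # poll at roughly 20 Hz
```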
  • The composition unit 133 of the content providing device 1 may process the distribution 3D image content in accordance with a detected event.
  • Events include, for example, receiving a combination of predetermined object IDs and having predetermined object contents placed in a predetermined arrangement; an event may additionally take into account the order in which the object IDs were received and the type of the background content. Processing according to an event includes superimposing new object content on the distribution 3D image content, replacing object content already superimposed on it with other object content, changing the background content to other background content, and changing attributes related to the display of the distribution 3D image content (brightness, color tone, and the like). Four concrete examples follow; a rule sketch covering them appears after the last one.
  • As one example, the composition unit 133 of the content providing device 1 detects, as an event, that the "Mama" object content and the "Dad" object content are arranged within a predetermined distance of each other.
  • In that case, the composition unit 133 superimposes the "heart mark" object content on the distribution 3D image content so that it is placed between the "Mama" object content and the "Dad" object content.
  • As another example, the composition unit 133 of the content providing device 1 detects, as an event, that the object ID of the sprout ("futaba") is received from the terminal device 3 first and the object ID of "rain" is received afterwards.
  • In that case, the composition unit 133 replaces the sprout object content superimposed on the distribution 3D image content with moving-image object content in which the sprout grows and blooms.
  • As another example, the composition unit 133 of the content providing device 1 detects, as an event, that the object IDs of "Momotaro", "dog", "pheasant", and "monkey" have all been received.
  • In that case, the composition unit 133 changes the background content in the distribution 3D image content to the background content of the scene in the story where the party travels to the demons' island (Onigashima).
  • As another example, the composition unit 133 of the content providing device 1 detects, as an event, that the background content of a room is in use and that the "curtain" object ID and the "moon" object ID have been received.
  • In that case, the composition unit 133 processes the distribution 3D image content so that its brightness is reduced.
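The four examples above can all be expressed as rules over the current scene state. The sketch below encodes them as checks that return named processing actions; the state layout, the object names, and the reading of the Momotaro example as requiring all four IDs are assumptions made for illustration.

```python
import math

def check_events(scene: dict) -> list:
    """scene holds 'objects' {name: (x, y, z)}, 'received_ids' in reception order,
    and 'background'; returns the processing actions the composition unit should apply."""
    actions = []
    objs = scene["objects"]
    ids = scene["received_ids"]
    # arrangement event: two object contents within a predetermined distance
    if "mama" in objs and "dad" in objs and math.dist(objs["mama"], objs["dad"]) < 2.0:
        actions.append("superimpose heart_mark between mama and dad")
    # reception-order event: sprout first, then rain
    if ids[-2:] == ["sprout", "rain"]:
        actions.append("replace sprout with blooming moving-image content")
    # ID-combination event: the whole Momotaro party has been received
    if {"momotaro", "dog", "pheasant", "monkey"} <= set(ids):
        actions.append("change background to the journey to Onigashima")
    # background-plus-IDs event: room scene with curtain and moon received
    if scene["background"] == "room" and {"curtain", "moon"} <= set(ids):
        actions.append("reduce display brightness")
    return actions
```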
  • In this way, the user can use a highly immersive HMD to view 3D image content into which the images of the objects that the user selected have been synthesized.
  • The content providing device 1 and the terminal device 3 described above each contain a computer system internally.
  • The operation processes of the content providing device 1 and the terminal device 3 are stored in the form of a program on a computer-readable recording medium, and the above processing is performed by the computer system reading and executing this program.
  • The computer system referred to here includes a CPU, various memories, an OS, and hardware such as peripheral devices.
  • If a WWW system is used, the "computer system" also includes the homepage providing environment (or display environment).
  • The "computer-readable recording medium" refers to portable media such as a flexible disk, a magneto-optical disk, a ROM, or a CD-ROM, and storage devices such as a hard disk built into a computer system.
  • The "computer-readable recording medium" also includes media that hold the program dynamically for a short time, like a communication line used when the program is transmitted over a network such as the Internet or over a communication line such as a telephone line, and media that hold the program for a certain period, such as volatile memory inside a computer system serving as the server or the client in that case.
  • The program may be one that realizes a part of the functions described above, or one that realizes those functions in combination with a program already recorded in the computer system.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Processing Or Creating Images (AREA)
  • Information Transfer Between Computers (AREA)
PCT/JP2016/062523 2015-05-08 2016-04-20 Content providing system, content providing apparatus, and content providing method WO2016181780A1 (ja)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2015-095614 2015-05-08
JP2015095614A JP6582526B2 (ja) Content providing system, content providing apparatus, and content providing method

Publications (1)

Publication Number Publication Date
WO2016181780A1 true WO2016181780A1 (ja) 2016-11-17

Family

ID=57249032

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2016/062523 WO2016181780A1 (ja) Content providing system, content providing apparatus, and content providing method

Country Status (3)

Country Link
JP (1) JP6582526B2 (zh)
TW (1) TW201706963A (zh)
WO (1) WO2016181780A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP7093996B2 (ja) * 2018-04-30 2022-07-01 日本絨氈株式会社 Interior proposal system using a virtual reality system

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2004341642A (ja) * 2003-05-13 2004-12-02 Nippon Telegr & Teleph Corp <Ntt> 画像合成表示方法、画像合成表示プログラム、並びにこの画像合成表示プログラムを記録した記録媒体
JP2010218107A (ja) * 2009-03-16 2010-09-30 Toppan Printing Co Ltd パノラマvrファイル提供装置、プログラム、方法及び、システム
JP2014109802A (ja) * 2012-11-30 2014-06-12 Casio Comput Co Ltd 画像処理装置、画像処理方法およびプログラム


Also Published As

Publication number Publication date
JP6582526B2 (ja) 2019-10-02
JP2016212621A (ja) 2016-12-15
TW201706963A (zh) 2017-02-16

Similar Documents

Publication Publication Date Title
US10579134B2 (en) Improving advertisement relevance
US10026229B1 (en) Auxiliary device as augmented reality platform
CN109661686B (zh) 对象显示系统、用户终端装置、对象显示方法及程序
JP6348741B2 (ja) 情報処理システム、情報処理装置、情報処理プログラム、および情報処理方法
EP2491989A2 (en) Information processing system, information processing method, information processing device and information processing program
CN112074797A (zh) 用于将虚拟对象锚定到物理位置的系统和方法
CN109298777B (zh) 虚拟现实体验控制系统
JP6677890B2 (ja) 情報処理システム、その制御方法、及びプログラム、並びに情報処理装置、その制御方法、及びプログラム
JP7209474B2 (ja) 情報処理プログラム、情報処理方法及び情報処理システム
CN106464773A (zh) 增强现实的装置及方法
JP6917340B2 (ja) データ処理プログラム、データ処理方法、および、データ処理装置
JP6147966B2 (ja) 情報処理プログラム、情報処理装置、情報処理システム、および情報処理方法
JP2014217566A (ja) 宝探しゲーム配信システム
KR101977314B1 (ko) 증강현실 포토존 제공 시스템
CN112675541A (zh) Ar信息共享方法、装置、电子设备及存储介质
WO2016181783A1 (ja) コンテンツ配信システム、コンテンツ配信装置及びコンテンツ配信方法
JP6617547B2 (ja) 画像管理システム、画像管理方法、プログラム
JP2011060254A (ja) 拡張現実システム、拡張現実装置および仮想オブジェクト表示方法
JP6582526B2 (ja) コンテンツ提供システム、コンテンツ提供装置及びコンテンツ提供方法
CN111640190A (zh) Ar效果的呈现方法、装置、电子设备及存储介质
CN111918114A (zh) 图像显示方法、装置、显示设备及计算机可读存储介质
KR20150127472A (ko) 증강 현실 제공 장치 및 방법
US11257250B2 (en) Blended physical and virtual realities
WO2019105002A1 (en) Systems and methods for creating virtual 3d environment
KR20150090351A (ko) 복수의 사용자 장치를 기반으로 동영상을 생성하는 컨텐츠 생성 서비스 장치, 복수의 사용자 장치를 기반으로 동영상을 생성하는 방법 및 컴퓨터 프로그램이 기록된 기록매체

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16792505

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 16792505

Country of ref document: EP

Kind code of ref document: A1