WO2018116538A1 - Virtual content storage method - Google Patents

Virtual content storage method

Info

Publication number
WO2018116538A1
Authority
WO
WIPO (PCT)
Prior art keywords
virtual content
content storage
image
user terminal
virtual
Prior art date
Application number
PCT/JP2017/032113
Other languages
French (fr)
Japanese (ja)
Inventor
翔 阮
甫 西川
智博 中川
太郎 綿末
Original Assignee
株式会社tiwaki
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 株式会社tiwaki
Publication of WO2018116538A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06Q: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00: Systems or methods specially adapted for specific business sectors, e.g. utilities or tourism
    • G06Q50/10: Services

Definitions

  • the present invention relates to a virtual content storage method capable of virtually leaving a character, a still image, a moving image or the like on a predetermined object.
  • each of the above methods suffers from poor permanence: graffiti is forcibly erased because it defaces its surroundings, and a message written in a notebook may be lost when the notebook is damaged or misplaced.
  • many tourists who visit famous historic sites scrawl graffiti on national-treasure-class buildings to satisfy this desire, which has become a social problem.
  • an object of the present invention is to provide a virtual content storage method capable of virtually leaving a character, a still image, a moving image or the like on a predetermined object.
  • the virtual content storage method according to claim 1 is a method in which a user terminal (2) having imaging means (a camera function) and a virtual content storage server (1) are connected via a network (N). It comprises: a step (step S1) in which the user images a predetermined object (see image P1 in FIG. 4(a)) using the imaging means of the user terminal (2) and transmits the imaged object to the virtual content storage server (1); a step (step S4) in which the user inputs virtual content for the imaged object (see image P1, particularly object A, in FIG. 4(c)) using the user terminal (2) and transmits the input virtual content to the virtual content storage server (1); and a step (step S5) in which the virtual content storage server (1) stores the input virtual content in the virtual content storage server (1) in association with the imaged object (see image P1, particularly object A, in FIG. 4(c)).
  • the virtual content storage method according to claim 2 is the method according to claim 1, further comprising: a step (step S10) in which the user newly images a predetermined object (see image P1 in FIG. 5(a)) using the imaging means (camera function) of the user terminal (2) and transmits the newly imaged object to the virtual content storage server (1); and a step (step S12) in which the virtual content storage server (1), based on the transmitted newly imaged object (see image P2, particularly object A, in FIG. 5(b)), reads the virtual content stored in it in association with the previously imaged object (see image P2, particularly object A, in FIGS. 4(b) and (c)) and transmits the read virtual content to the user terminal (2).
  • the virtual content storage method according to claim 3 is the method according to claim 1, wherein the user terminal (2) includes position information acquisition means (a GPS function) capable of acquiring position information, and, when the imaged object (see image P1 in FIGS. 4(a) and 5(a)) is transmitted to the virtual content storage server (1), the position information acquired by the position information acquisition means (GPS function) is transmitted as well.
  • according to the invention of claim 1, the user images a predetermined object (see image P1 in FIG. 4(a)) using the imaging means (camera function) of the user terminal (2) and transmits the imaged object to the virtual content storage server (1).
  • the user then inputs virtual content such as text, still images, or moving images for the imaged object (see image P1, particularly object A, in FIG. 4(c)) using the user terminal (2), and the input virtual content is transmitted to the virtual content storage server (1).
  • the virtual content storage server (1) can thereby store the input virtual content in association with the imaged object (see image P1, particularly object A, in FIG. 4(c)).
  • according to the invention of claim 2, the user newly images a predetermined object (see image P1 in FIG. 5(a)) using the imaging means (camera function) of the user terminal (2) and transmits the newly imaged object to the virtual content storage server (1). Based on the transmitted newly imaged object (see image P2, particularly object A, in FIG. 5(b)), the virtual content storage server (1) reads the virtual content stored in it in association with the previously imaged object (see image P2, particularly object A, in FIGS. 4(b) and (c)) and transmits the read virtual content to the user terminal (2). As a result, the text, still images, moving images, and the like virtually left on a predetermined object can be shared with other people.
  • according to the invention of claim 3, when the imaged object (see image P1 in FIGS. 4(a) and 5(a)) is transmitted to the virtual content storage server (1), the position information acquired by the position information acquisition means (GPS function) is also transmitted.
  • consequently, when the virtual content storage server (1) reads the virtual content stored in association with the imaged object based on the newly imaged object (see image P2, particularly object A, in FIG. 5(b)), the desired virtual content can be reliably read out even if similar objects such as temples or shrines are stored, provided the position information is also used for identification.
  • FIG. 1 is an overall view showing an embodiment of a virtual content storage system according to the present invention; FIG. 2 is a conceptual block diagram of the virtual content storage server according to the embodiment; FIG. 3 shows the table stored in the virtual content storage database according to the embodiment.
  • FIGS. 4(a)-(c) show example screens for inputting virtual content using the virtual content storage system according to the embodiment.
  • FIGS. 5(a)-(c) show example screens for viewing virtual content using the virtual content storage system according to the embodiment.
  • as shown in FIG. 1, a virtual content storage server 1 and a user terminal 2 used by a user are connected via a network N.
  • the user terminal 2 is a smartphone such as an iPhone (registered trademark), a mobile phone, a PDA (Personal Digital Assistant), or the like, and has a camera function capable of imaging a predetermined object and a GPS function capable of acquiring position information.
  • the virtual content storage server 1 comprises a central control unit 10 including a CPU, a user transmission information acquisition unit 11 that acquires information transmitted by the user from the user terminal 2 (see FIG. 1), a virtual content storage database 12, an object specifying unit 13, a virtual content input frame image generation unit 14, a virtual content extraction unit 15, and a communication unit 16 that can connect to the network N via communication means such as a wireless LAN, a wired LAN, or dial-up.
  • the virtual content storage database 12 stores a table TBL in which virtual content is stored in association with specific objects and position information. More specifically, the table TBL stores the specific objects (denoted object A, object B, and object C in the figure) imaged by the user with the camera function of the user terminal 2 (see FIG. 1) and identified by the object specifying unit 13 (see TB1); the position information corresponding to each specific object (denoted position information A, position information C, and position information D in the figure), obtained because the user terminal 2 has a GPS function (see TB2); and the virtual content input by the user with the user terminal 2 (the figure shows examples of input virtual text: "I love this building", "It is highly polished", "It looks delicious", "I want to buy it!", "It's the most moving work"), stored in association with the specific object (see TB1) and the position information (see TB2) (see TB3). Although virtual text is stored in TB3 in this example, still images, moving images, and the like can also be stored.
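  • As an illustration only, the following minimal sketch shows one way the table TBL of FIG. 3 could be held in a relational store. The patent does not specify an implementation; the sqlite3 choice, table name, and column names are assumptions.

      import sqlite3

      # TBL of FIG. 3: one row per stored item of virtual content.
      conn = sqlite3.connect("virtual_content.db")
      conn.execute("""
          CREATE TABLE IF NOT EXISTS tbl (
              object_id TEXT NOT NULL,  -- TB1: identified specific object (e.g. 'object A')
              latitude  REAL NOT NULL,  -- TB2: position information from the GPS function
              longitude REAL NOT NULL,
              content   TEXT NOT NULL   -- TB3: virtual content (text here; media could be a URL)
          )
      """)
      conn.commit()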
  • the object specifying unit 13 identifies a predetermined object in an image captured by the user with the camera function of the user terminal 2 (see FIG. 1). Specifically, as shown in FIG. 4(a), when the user captures image P1 using the camera function of the user terminal 2, the user transmission information acquisition unit 11 (see FIG. 2) acquires it. The object specifying unit 13 then detects objects such as buildings and pictures in image P1 and identifies the object occupying the largest proportion of the image. In image P1 of FIG. 4(a), that object is a building, so the object specifying unit 13 identifies the building as shown in FIG. 4(b) (see object A), and an image in which that portion is surrounded by a frame (image P2a) is displayed on the user terminal 2 as image P2.
  • when the user transmission information acquisition unit 11 (see FIG. 2) acquires image P1 of FIG. 4(a), it also acquires the position information obtained by the GPS function, since the user terminal 2 is equipped with one.
  • if several objects each occupy a large proportion of image P1, a plurality of objects are identified.
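  • By way of a hedged sketch, the "largest proportion of the image" rule might be implemented as below, assuming some object detector has already returned labelled bounding boxes; the 20% threshold and all names are illustrative assumptions, not values from the patent.

      def select_objects(detections, image_area, min_fraction=0.2):
          """detections: list of (label, (x0, y0, x1, y1)) boxes from a detector.
          Returns the objects covering at least min_fraction of the image,
          largest first, so several objects can qualify, as noted above."""
          def area(box):
              x0, y0, x1, y1 = box
              return max(0, x1 - x0) * max(0, y1 - y0)
          scored = [(area(box) / image_area, label, box) for label, box in detections]
          return sorted([s for s in scored if s[0] >= min_fraction], reverse=True)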
  • the virtual content input frame image generation unit 14 generates a frame image that prompts the user to input virtual content such as text, still images, or moving images using the user terminal 2 (see FIG. 1). That is, as shown in FIG. 4(b), after image P2 is displayed on the user terminal 2, the virtual content input frame image generation unit 14 generates the frame image P2b shown in FIG. 4(c). As a result, as shown in FIG. 4(c), the frame image P2b is displayed on the user terminal 2, and the user can input virtual content into it using the user terminal 2.
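  • A minimal rendering sketch, assuming the identified object is available as a bounding box and using Pillow; the library choice and all names are assumptions, not the patent's method:

      from PIL import Image, ImageDraw

      def draw_object_frame(image_path, bbox, out_path):
          """Frame the identified object, as in image P2a of FIG. 4(b)."""
          img = Image.open(image_path).convert("RGB")
          ImageDraw.Draw(img).rectangle(bbox, outline=(255, 0, 0), width=4)
          img.save(out_path)

      # e.g. draw_object_frame("p1.jpg", (80, 40, 520, 460), "p2a.jpg")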
  • when the input virtual content (FIG. 4(c) shows an example in which "I love this building" is input as virtual text) is acquired by the user transmission information acquisition unit 11 (see FIG. 2), the central control unit 10 stores the specific object identified by the object specifying unit 13 (see object A in FIG. 4(b)) in TB1 of the table TBL held in the virtual content storage database 12, stores the position information of that object (see position information A in FIG. 3) in TB2, and stores the input virtual content in TB3.
  • in this way, virtual content (see TB3) is stored in the table TBL of FIG. 3 in association with the specific object (see TB1) and the position information (see TB2).
  • the virtual content extraction unit 15 reads the virtual content stored in the table TBL (see FIG. 3) held in the virtual content storage database 12.
  • specifically, as shown in FIG. 5(a), when the user captures image P1 using the camera function of the user terminal 2, the user transmission information acquisition unit 11 (see FIG. 2) acquires it, together with the position information obtained by the GPS function, since the user terminal 2 is equipped with one.
  • the object specifying unit 13 then detects objects such as buildings and pictures in image P1 and identifies the object occupying the largest proportion of the image. In image P1 of FIG. 5(a), that object is a building, so the object specifying unit 13 identifies the building as shown in FIG. 5(b) (see object A), and an image in which that portion is surrounded by a frame (image P2a) is displayed on the user terminal 2 as image P2.
  • the user launches the application installed in advance on the user terminal 2 (see FIG. 1), selects from a menu screen (not shown) the option to input virtual content, and captures image P1 shown in FIG. 4(a) using the camera function of the user terminal 2.
  • since the user terminal 2 has a GPS function capable of acquiring position information, position information is attached to image P1 when it is captured.
  • the user then transmits the selected option, the captured image P1, and the position information from the user terminal 2 to the virtual content storage server 1 (see FIG. 1) via the network N (step S1). Note that "capturing image P1 shown in FIG. 4(a) using the camera function of the user terminal 2" as used in this embodiment includes not only an image actually photographed with the camera function but also an image merely displayed on the screen through the camera function without being photographed.
  • in response, the user transmission information acquisition unit 11 of the virtual content storage server 1 shown in FIG. 2 acquires the selected option, the captured image P1, and the position information, and sends them to the central control unit 10.
  • the central control unit 10 receives them and passes the image P1 acquired by the user transmission information acquisition unit 11 to the object specifying unit 13.
  • the object specifying unit 13 identifies the object occupying the largest proportion of image P1 (see object A in FIG. 4(b)), generates an image in which the identified portion is surrounded by a frame (image P2a), and sends it to the central control unit 10.
  • the central control unit 10 transmits this content to the user terminal 2 (see FIG. 1) via the network N (step S2). As a result, image P2 shown in FIG. 4(b) is displayed on the user terminal 2.
  • in the present embodiment, for ease of understanding, an example in which one object is identified is shown, but as described above, a plurality of objects may be identified.
  • the central control unit 10 shown in FIG. 2 instructs the virtual content input frame image generation unit 14 to generate a frame image that prompts the user to input virtual content.
  • the virtual content input frame image generation unit 14 generates a frame image prompting input of virtual content such as text, still images, or moving images (see frame image P2b in FIG. 4(c)) and sends it to the central control unit 10.
  • the central control unit 10 transmits this content to the user terminal 2 (see FIG. 1) via the network N (step S3).
  • the frame image P2b shown in FIG. 4C is displayed on the user terminal 2.
  • the user uses the user terminal 2 (see FIG. 1) to input the desired virtual content, such as text, still images, or moving images, into the displayed frame image P2b (FIG. 4(c) shows an example in which the virtual text "I love this building" is input). The user then transmits the input virtual content from the user terminal 2 to the virtual content storage server 1 via the network N (step S4).
  • the user transmission information acquisition unit 11 of the virtual content storage server 1 shown in FIG. 2 acquires the input virtual content and sends it to the central control unit 10.
  • the central control unit 10 receives this content, stores the object identified by the object specifying unit 13 (see object A in FIG. 4(b)) in TB1 of the table TBL (see FIG. 3) held in the virtual content storage database 12, stores the position information of the identified object (see position information A in FIG. 3) in TB2, and stores the input virtual content (the virtual text "I love this building" in FIG. 4(c)) in TB3 (step S5). As a result, virtual content (see TB3) is stored in the TBL shown in FIG. 3 in association with the specific object (see TB1) and the position information (see TB2).
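  • Continuing the earlier sqlite3 sketch, step S5 could look like the following; the function and parameter names are assumptions.

      def store_virtual_content(conn, object_id, latitude, longitude, content):
          """Persist one TBL row: TB1 (object), TB2 (position), TB3 (content)."""
          conn.execute(
              "INSERT INTO tbl (object_id, latitude, longitude, content) VALUES (?, ?, ?, ?)",
              (object_id, latitude, longitude, content),
          )
          conn.commit()

      # e.g. store_virtual_content(conn, "object A", 35.0116, 135.7681, "I love this building")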
  • the user launches the application installed in advance on the user terminal 2 (see FIG. 1), selects from a menu screen (not shown) the option to view virtual content, and captures image P1 shown in FIG. 5(a) using the camera function of the user terminal 2.
  • since the user terminal 2 has a GPS function capable of acquiring position information, position information is attached to image P1 when it is captured.
  • the selected option, the captured image P1, and the position information are transmitted from the user terminal 2 to the virtual content storage server 1 via the network N (step S10).
  • "capturing image P1 shown in FIG. 5(a) using the camera function of the user terminal 2" as used in this embodiment likewise includes not only an image actually photographed with the camera function but also an image merely displayed on the screen through the camera function without being photographed.
  • in response, the user transmission information acquisition unit 11 of the virtual content storage server 1 shown in FIG. 2 acquires the selected option, the captured image P1, and the position information, and sends them to the central control unit 10.
  • the central control unit 10 receives them and passes the image P1 acquired by the user transmission information acquisition unit 11 to the object specifying unit 13.
  • the object specifying unit 13 identifies the object occupying the largest proportion of image P1 (see object A in FIG. 5(b)), generates an image in which the identified portion is surrounded by a frame (image P2a), and sends it to the central control unit 10.
  • the central control unit 10 instructs the virtual content extraction unit 15 to determine whether the object identified by the object specifying unit 13 (see object A in FIG. 5(b)) is stored in the table TBL (see FIG. 3) held in the virtual content storage database 12.
  • in response, the virtual content extraction unit 15 determines whether the object identified by the object specifying unit 13 (see object A in FIG. 5(b)) is stored in the table TBL held in the virtual content storage database 12 and sends the result to the central control unit 10.
  • if the identified object (see object A in FIG. 5(b)) is stored in the table TBL, the central control unit 10 transmits the content sent from the object specifying unit 13 to the user terminal 2 (see FIG. 1) via the network N (step S11).
  • the image P2 shown in FIG. 5B is displayed on the user terminal 2.
  • in the present embodiment, for ease of understanding, only an example of identifying one object is shown; however, as described above, a plurality of objects may be identified.
  • the central control unit 10 instructs the virtual content extraction unit 15 to read the virtual content stored in the table TBL (see FIG. 3) held in the virtual content storage database 12.
  • the virtual content extraction unit 15 reads the virtual content (see TB3) stored in association with the object specified in step S11 and sends it to the central control unit 10.
  • the central control unit 10 transmits this content to the user terminal 2 (see FIG. 1) via the network N (step S12).
  • as a result, image P2c shown in FIG. 5(c) is displayed on the user terminal 2 (step S13). When a plurality of objects have been identified, the virtual content stored in association with each identified object is read out for each object.
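  • Again continuing the sqlite3 sketch, steps S11-S12 reduce to a membership check and a read; an empty result corresponds to the negative case of step S11. All names are assumptions.

      def read_virtual_content(conn, object_id):
          """Return all virtual content stored for object_id ([] if not registered)."""
          rows = conn.execute(
              "SELECT content FROM tbl WHERE object_id = ?", (object_id,)
          ).fetchall()
          return [content for (content,) in rows]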
  • according to the present embodiment described above, the user captures image P1 shown in FIG. 4(a) using the camera function of the user terminal 2 and transmits it to the virtual content storage server 1. The user then uses the user terminal 2 to input virtual content such as text, still images, or moving images for object A shown in FIGS. 4(b) and (c) within the captured image P1 of FIG. 4(a).
  • the input virtual content is transmitted to the virtual content storage server 1.
  • the virtual content storage server 1 can thereby store the input virtual content in association with object A shown in FIGS. 4(b) and (c) within the captured image P1 of FIG. 4(a).
  • characters, still images, moving images, etc. can be virtually left on the object A shown in FIGS. 4B and 4C in the captured image P1.
  • the user newly captures image P1 shown in FIG. 5(a) using the camera function of the user terminal 2 and transmits the newly captured image P1 to the virtual content storage server 1.
  • then, based on object A shown in FIG. 5(b) within the newly captured image P1 of FIG. 5(a), the virtual content storage server 1 reads out the virtual content such as text, still images, or moving images stored in association with object A and transmits it to the user terminal 2. This makes it possible to share the text, still images, moving images, and the like virtually left on object A shown in FIGS. 4(b) and (c) with other people.
  • although the present embodiment shows an example in which the user terminal 2 has a GPS function, the GPS function is not essential.
  • however, when the virtual content extraction unit 15 matches the object identified by the object specifying unit 13, a similar object such as a temple or shrine stored in the table TBL (see FIG. 3) could cause the wrong object to be matched.
  • if the user terminal 2 has a GPS function, the position information it acquires can also be used for matching, so the intended object can be identified reliably. It is therefore preferable that the user terminal 2 has a GPS function.
  • in FIG. 3, the position information of object A is shown as identical (position information A) in every row; however, if the locations from which the same object A is imaged with the camera function differ, some error arises in the position information. Therefore, when the virtual content extraction unit 15 matches the object identified by the object specifying unit 13 using position information, position information within a certain predetermined error range may be treated as identical.
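  • A hedged sketch of treating position information within a predetermined error range as identical: two GPS fixes are matched when their great-circle distance falls inside a chosen radius. The 50 m radius is an illustrative assumption, not a value from the patent.

      import math

      def same_position(lat1, lon1, lat2, lon2, radius_m=50.0):
          """Haversine distance test between two WGS84 coordinates."""
          r = 6371000.0  # mean Earth radius in metres
          p1, p2 = math.radians(lat1), math.radians(lat2)
          dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
          a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
          return 2 * r * math.asin(math.sqrt(a)) <= radius_m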
  • in the present embodiment, an object is identified in the image P1 captured with the camera function of the user terminal 2 shown in FIGS. 4(a) and 5(a); however, virtual content may instead be input for the captured image P1 as a whole, without identifying an object.
  • in the present embodiment, an example in which the object specifying unit 13 of the virtual content storage server 1 identifies the object was shown, but any method of identifying the object may be used.
  • the object is not limited to being automatically specified, and the user may specify the object manually.
  • in the present embodiment, an example in which the virtual content storage server 1 includes the object specifying unit 13 has been described; however, the function of the object specifying unit 13 may be provided elsewhere, for example on the user terminal 2 side.
  • in the example in which image P2c shown in FIG. 5(c) is displayed on the user terminal 2, all of the virtual content stored in association with the specific object is displayed.
  • the present invention is not limited to this; a filter function may be provided in the application installed in advance on the user terminal 2 (see FIG. 1) so that, for example, only ten entries are displayed, as sketched below.
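  • Assuming the sqlite3 sketch from earlier, such a filter is a one-line change to the read; the limit of ten mirrors the example above and is otherwise arbitrary.

      def read_first_ten(conn, object_id):
          rows = conn.execute(
              "SELECT content FROM tbl WHERE object_id = ? LIMIT 10", (object_id,)
          ).fetchall()
          return [content for (content,) in rows]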
  • in the present embodiment, the virtual content storage server 1 is constructed as a single machine, but processing may of course be distributed across a plurality of machines.
  • the virtual content storage system, the usage example, and the screen example described in the present embodiment are merely examples, and the present invention is not limited thereto.
  • such a virtual content storage system can be applied to the following uses.
  • virtual graffiti can be realized.
  • the present invention can be applied to games, such as treasure-hunting games and exploration or matching games.
  • virtual explanatory notes can be displayed on museum exhibits, and virtual introductions can be displayed for landmarks at tourist spots.
  • virtual advertising text and campaign information can be displayed for a given product, and a virtual signboard can be displayed on a building or a tourist-spot landmark.

Abstract

An objective of the present invention is to provide a virtual content storage method with which it is possible to leave text, still images, motion video, etc. on a prescribed object in virtual fashion. The virtual content storage method comprises: a step (step S1) in which a user captures an image of a prescribed object using a camera function of a user terminal and transmits the imaged prescribed object to a virtual content storage server; a step (step S4) in which the user inputs virtual content with respect to the imaged prescribed object using the user terminal and transmits the inputted virtual content to the virtual content storage server; and a step (step S5) in which the virtual content storage server stores the inputted virtual content in the virtual content storage server in association with the imaged prescribed object.

Description

Virtual content storage method
The present invention relates to a virtual content storage method capable of virtually leaving text, still images, moving images, and the like on a predetermined object.
It has long been a human desire to leave impressions and messages when visiting famous historic sites, and the traditional means of doing so have been scrawling graffiti or writing in a communal notebook.
However, each of these methods suffers from poor permanence. Graffiti is forcibly erased because it defaces its surroundings, and a message written in a notebook may be lost when the notebook is damaged or misplaced. Moreover, many tourists visiting famous historic sites scrawl graffiti on national-treasure-class buildings to satisfy this desire, which has become a social problem.
In view of the above problems, an object of the present invention is to provide a virtual content storage method capable of virtually leaving text, still images, moving images, and the like on a predetermined object.
The above problems are solved by the following means. The reference signs in parentheses correspond to the embodiment described later, but the present invention is not limited to it.
The virtual content storage method according to claim 1 is a method in which a user terminal (2) having imaging means (a camera function) and a virtual content storage server (1) are connected via a network (N), and comprises: a step (step S1) in which the user images a predetermined object (see image P1 in FIG. 4(a)) using the imaging means (camera function) of the user terminal (2) and transmits the imaged object to the virtual content storage server (1); a step (step S4) in which the user inputs virtual content for the imaged object (see image P1, particularly object A, in FIG. 4(c)) using the user terminal (2) and transmits the input virtual content to the virtual content storage server (1); and a step (step S5) in which the virtual content storage server (1) stores the input virtual content in the virtual content storage server (1) in association with the imaged object (see image P1, particularly object A, in FIG. 4(c)).
The virtual content storage method according to claim 2 is the method according to claim 1, further comprising: a step (step S10) in which the user newly images a predetermined object (see image P1 in FIG. 5(a)) using the imaging means (camera function) of the user terminal (2) and transmits the newly imaged object to the virtual content storage server (1); and a step (step S12) in which the virtual content storage server (1), based on the transmitted newly imaged object (see image P2, particularly object A, in FIG. 5(b)), reads the virtual content stored in it in association with the previously imaged object (see image P2, particularly object A, in FIGS. 4(b) and (c)) and transmits the read virtual content to the user terminal (2).
Furthermore, the virtual content storage method according to claim 3 is the method according to claim 1, wherein the user terminal (2) includes position information acquisition means (a GPS function) capable of acquiring position information, and, when the imaged object (see image P1 in FIGS. 4(a) and 5(a)) is transmitted to the virtual content storage server (1), the position information acquired by the position information acquisition means (GPS function) is transmitted as well.
Next, the effects of the present invention will be described using the reference signs of the drawings. The reference signs in parentheses correspond to the embodiment described later, but the present invention is not limited to it.
According to the invention of claim 1, the user images a predetermined object (see image P1 in FIG. 4(a)) using the imaging means (camera function) of the user terminal (2) and transmits the imaged object to the virtual content storage server (1). The user then inputs virtual content such as text, still images, or moving images for the imaged object (see image P1, particularly object A, in FIG. 4(c)) using the user terminal (2) and transmits the input virtual content to the virtual content storage server (1). The virtual content storage server (1) can thereby store the input virtual content in association with the imaged object (see image P1, particularly object A, in FIG. 4(c)).
Thus, according to the present invention, text, still images, moving images, and the like can be virtually left on a predetermined object.
According to the invention of claim 2, the user newly images a predetermined object (see image P1 in FIG. 5(a)) using the imaging means (camera function) of the user terminal (2) and transmits the newly imaged object to the virtual content storage server (1). Based on the transmitted newly imaged object (see image P2, particularly object A, in FIG. 5(b)), the virtual content storage server (1) reads the virtual content stored in it in association with the previously imaged object (see image P2, particularly object A, in FIGS. 4(b) and (c)) and transmits the read virtual content to the user terminal (2). As a result, the text, still images, moving images, and the like virtually left on a predetermined object can be shared with other people.
Furthermore, according to the invention of claim 3, when the imaged object (see image P1 in FIGS. 4(a) and 5(a)) is transmitted to the virtual content storage server (1), the position information acquired by the position information acquisition means (GPS function) is transmitted as well. Consequently, when the virtual content storage server (1) reads the virtual content stored in association with the imaged object (see image P2, particularly object A, in FIGS. 4(b) and (c)) based on the newly imaged object (see image P2, particularly object A, in FIG. 5(b)), the desired virtual content can be reliably read out even if similar objects such as temples or shrines are stored, provided the position information is also used for identification.
FIG. 1 is an overall view showing an embodiment of a virtual content storage system according to the present invention.
FIG. 2 is a conceptual block diagram of the virtual content storage server according to the embodiment.
FIG. 3 shows the table stored in the virtual content storage database according to the embodiment.
FIGS. 4(a)-(c) show example screens for inputting virtual content using the virtual content storage system according to the embodiment.
FIGS. 5(a)-(c) show example screens for viewing virtual content using the virtual content storage system according to the embodiment.
FIG. 6 is a flowchart illustrating the input of virtual content in one usage example of the virtual content storage system according to the embodiment.
FIG. 7 is a flowchart illustrating the viewing of virtual content in one usage example of the virtual content storage system according to the embodiment.
Hereinafter, an embodiment of a virtual content storage system implementing the virtual content storage method according to the present invention will be described in detail with reference to the drawings. In the virtual content storage system, as shown in FIG. 1, a virtual content storage server 1 and a user terminal 2 used by a user are connected via a network N. The user terminal 2 is a smartphone such as an iPhone (registered trademark), a mobile phone, a PDA (Personal Digital Assistant), or the like, and has a camera function capable of imaging a predetermined object and a GPS function capable of acquiring position information.
Meanwhile, as shown in FIG. 2, the virtual content storage server 1 comprises a central control unit 10 including a CPU, a user transmission information acquisition unit 11 that acquires information transmitted by the user from the user terminal 2 (see FIG. 1), a virtual content storage database 12, an object specifying unit 13, a virtual content input frame image generation unit 14, a virtual content extraction unit 15, and a communication unit 16 that can connect to the network N via communication means such as a wireless LAN, a wired LAN, or dial-up.
Incidentally, as shown in FIG. 3, the virtual content storage database 12 stores a table TBL in which virtual content is stored in association with specific objects and position information. More specifically, the table TBL stores the specific objects (denoted object A, object B, and object C in the figure) imaged by the user with the camera function of the user terminal 2 (see FIG. 1) and identified by the object specifying unit 13 (see TB1); since the user terminal 2 has a GPS function, the position information corresponding to each specific object (denoted position information A, position information C, and position information D in the figure) (see TB2); and the virtual content input by the user with the user terminal 2 (see FIG. 1) (the figure shows examples of input virtual text: "I love this building", "It is highly polished", "It looks delicious", "I want to buy it!", "It's the most moving work"), stored in association with the specific object (see TB1) and the position information (see TB2) (see TB3). Although the present embodiment shows an example in which virtual text is stored in TB3, the stored virtual content is not limited to text; still images, moving images, and the like can also be stored.
Meanwhile, the object specifying unit 13 identifies a predetermined object in an image captured by the user with the camera function of the user terminal 2 (see FIG. 1). Specifically, as shown in FIG. 4(a), when the user captures image P1 using the camera function of the user terminal 2, the user transmission information acquisition unit 11 (see FIG. 2) acquires it. The object specifying unit 13 then detects objects such as buildings and pictures in image P1 and identifies the object occupying the largest proportion of the image. In image P1 of FIG. 4(a), the object occupying the largest proportion is a building, so the object specifying unit 13 identifies the building as shown in FIG. 4(b) (see object A), and an image in which that portion is surrounded by a frame (image P2a) is displayed on the user terminal 2 as image P2. When the user transmission information acquisition unit 11 (see FIG. 2) acquires image P1 of FIG. 4(a), it also acquires the position information obtained by the GPS function, since the user terminal 2 is equipped with one. If several objects each occupy a large proportion of image P1, a plurality of objects are identified.
Meanwhile, the virtual content input frame image generation unit 14 generates a frame image that prompts the user to input virtual content such as text, still images, or moving images using the user terminal 2 (see FIG. 1). That is, as shown in FIG. 4(b), after image P2 is displayed on the user terminal 2, the virtual content input frame image generation unit 14 generates the frame image P2b shown in FIG. 4(c). As a result, as shown in FIG. 4(c), the frame image P2b is displayed on the user terminal 2, and the user can input virtual content into it using the user terminal 2.
When the input virtual content (FIG. 4(c) shows an example in which "I love this building" is input as virtual text) is acquired by the user transmission information acquisition unit 11 (see FIG. 2), the central control unit 10 stores the specific object identified by the object specifying unit 13 (see object A in FIG. 4(b)) in TB1 of the table TBL held in the virtual content storage database 12, stores the position information of that object (see position information A in FIG. 3) in TB2, and stores the input virtual content in TB3.
In this way, virtual content (see TB3) is stored in the table TBL shown in FIG. 3 in association with the specific object (see TB1) and the position information (see TB2).
Meanwhile, the virtual content extraction unit 15 reads the virtual content stored in the table TBL (see FIG. 3) held in the virtual content storage database 12.
Specifically, as shown in FIG. 5(a), when the user captures image P1 using the camera function of the user terminal 2, the user transmission information acquisition unit 11 (see FIG. 2) acquires it. The object specifying unit 13 then detects objects such as buildings and pictures in image P1 and identifies the object occupying the largest proportion of the image. In image P1 of FIG. 5(a), that object is a building, so the object specifying unit 13 identifies the building as shown in FIG. 5(b) (see object A), and an image in which that portion is surrounded by a frame (image P2a) is displayed on the user terminal 2 as image P2. When the user transmission information acquisition unit 11 (see FIG. 2) acquires image P1 of FIG. 5(a), it also acquires the position information obtained by the GPS function, since the user terminal 2 is equipped with one.
Next, the virtual content extraction unit 15 determines whether the object identified by the object specifying unit 13 (see object A in FIG. 5(b)) is stored in the table TBL (see FIG. 3) held in the virtual content storage database 12. The virtual content stored in association with the identified object (see TB3 in FIG. 3) is thereby read out, and as shown in FIG. 5(c), the input virtual content (see frame image P2c) is displayed on the user terminal 2. Position information is not strictly necessary to match the identified object; however, if similar specific objects such as temples or shrines are stored in the table TBL (see FIG. 3), the wrong object might be matched, so using the position information acquired by the GPS function as well makes it possible to reliably identify the intended object.
<Usage example of the virtual content storage system>
Here, one usage example of the virtual content storage system described above will be explained with reference to the flowcharts of FIGS. 6 and 7, separately for inputting virtual content and for viewing virtual content.
First, the case of inputting virtual content will be described in detail with reference to FIG. 6.
The user launches the application installed in advance on the user terminal 2 (see FIG. 1), selects from a menu screen (not shown) the option to input virtual content, and captures image P1 shown in FIG. 4(a) using the camera function of the user terminal 2. Since the user terminal 2 has a GPS function capable of acquiring position information, position information is attached to image P1 when it is captured. The user then transmits the selected option, the captured image P1, and the position information from the user terminal 2 to the virtual content storage server 1 (see FIG. 1) via the network N (step S1). Note that "capturing image P1 shown in FIG. 4(a) using the camera function of the user terminal 2" as used in this embodiment includes not only an image actually photographed with the camera function but also an image merely displayed on the screen through the camera function without being photographed.
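As an illustrative sketch only, step S1 from the terminal side might look as follows; the endpoint URL and field names are assumptions, since the patent does not define a transport protocol.

    import requests

    def send_capture(image_path, latitude, longitude, action="input"):
        """Send the chosen menu option, the captured image P1, and the GPS fix
        to the virtual content storage server (step S1; action="view" gives step S10)."""
        with open(image_path, "rb") as f:
            resp = requests.post(
                "https://example.com/api/capture",  # assumed endpoint
                data={"action": action, "latitude": latitude, "longitude": longitude},
                files={"image": f},
            )
        resp.raise_for_status()
        return resp.json()  # e.g. data for the framed image P2 of FIG. 4(b)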
In response, the user transmission information acquisition unit 11 of the virtual content storage server 1 shown in FIG. 2 acquires the selected option, the captured image P1, and the position information, and sends them to the central control unit 10. The central control unit 10 passes the image P1 acquired by the user transmission information acquisition unit 11 to the object specifying unit 13. The object specifying unit 13 identifies the object occupying the largest proportion of image P1 (see object A in FIG. 4(b)), generates an image in which the identified portion is surrounded by a frame (image P2a), and sends it to the central control unit 10. The central control unit 10 transmits this content to the user terminal 2 (see FIG. 1) via the network N (step S2). As a result, image P2 shown in FIG. 4(b) is displayed on the user terminal 2. In the present embodiment, for ease of understanding, an example in which one object is identified is shown, but as described above, a plurality of objects may be identified.
Next, the central control unit 10 shown in FIG. 2 instructs the virtual content input frame image generation unit 14 to generate a frame image that prompts the user to input virtual content. In response, the virtual content input frame image generation unit 14 generates a frame image prompting input of virtual content such as text, still images, or moving images (see frame image P2b in FIG. 4(c)) and sends it to the central control unit 10. The central control unit 10 transmits this content to the user terminal 2 (see FIG. 1) via the network N (step S3). As a result, the frame image P2b shown in FIG. 4(c) is displayed on the user terminal 2.
Next, the user uses the user terminal 2 (see FIG. 1) to input the desired virtual content, such as text, still images, or moving images, into the displayed frame image P2b (FIG. 4(c) shows an example in which the virtual text "I love this building" is input). The user then transmits the input virtual content from the user terminal 2 to the virtual content storage server 1 via the network N (step S4).
In response, the user transmission information acquisition unit 11 of the virtual content storage server 1 shown in FIG. 2 acquires the input virtual content and sends it to the central control unit 10. The central control unit 10 stores the object identified by the object specifying unit 13 (see object A in FIG. 4(b)) in TB1 of the table TBL (see FIG. 3) held in the virtual content storage database 12, stores the position information of the identified object (see position information A in FIG. 3) in TB2, and stores the input virtual content (the virtual text "I love this building" in the example) in TB3 (step S5). As a result, virtual content (see TB3) is stored in the TBL shown in FIG. 3 in association with the specific object (see TB1) and the position information (see TB2).
Next, the case of viewing virtual content will be described in detail with reference to FIG. 7.
The user launches the application installed in advance on the user terminal 2 (see FIG. 1), selects from a menu screen (not shown) the option to view virtual content, and captures image P1 shown in FIG. 5(a) using the camera function of the user terminal 2. Since the user terminal 2 has a GPS function capable of acquiring position information, position information is attached to image P1 when it is captured. The selected option, the captured image P1, and the position information are then transmitted from the user terminal 2 to the virtual content storage server 1 via the network N (step S10). As before, "capturing image P1 shown in FIG. 5(a) using the camera function of the user terminal 2" includes not only an image actually photographed with the camera function but also an image merely displayed on the screen through the camera function without being photographed.
In response, the user transmission information acquisition unit 11 of the virtual content storage server 1 shown in FIG. 2 acquires the selected option, the captured image P1, and the position information, and sends them to the central control unit 10. The central control unit 10 passes the image P1 acquired by the user transmission information acquisition unit 11 to the object specifying unit 13. The object specifying unit 13 identifies the object occupying the largest proportion of image P1 (see object A in FIG. 5(b)), generates an image in which the identified portion is surrounded by a frame (image P2a), and sends it to the central control unit 10.
In response, the central control unit 10 instructs the virtual content extraction unit 15 to determine whether the object identified by the object specifying unit 13 (see object A in FIG. 5(b)) is stored in the table TBL (see FIG. 3) held in the virtual content storage database 12. The virtual content extraction unit 15 makes this determination and sends the result to the central control unit 10. If the identified object (see object A in FIG. 5(b)) is stored in the table TBL, the central control unit 10 transmits the content sent from the object specifying unit 13 to the user terminal 2 (see FIG. 1) via the network N (step S11). As a result, image P2 shown in FIG. 5(b) is displayed on the user terminal 2. In the present embodiment, for ease of understanding, only an example in which one object is identified is shown, but as described above, a plurality of objects may be identified.
 Next, the central control unit 10 instructs the virtual content extraction unit 15 to read the virtual content stored in the table TBL (see FIG. 3) held in the virtual content storage database 12. In response, the virtual content extraction unit 15 reads the virtual content (see TB3) stored in association with the object identified in step S11 and sends it to the central control unit 10. The central control unit 10 transmits this content to the user terminal 2 (see FIG. 1) via the network N (step S12). As a result, the image P2c shown in FIG. 5C is displayed on the user terminal 2 (step S13). If a plurality of objects have been identified, the virtual content stored in association with each identified object is read out for each of those objects.
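 Continuing under the same assumed schema, reading the associated virtual content for each identified object (step S12) could look like this sketch.

```python
import sqlite3

def read_virtual_content(db: sqlite3.Connection, object_ids: list[str]) -> dict[str, list[str]]:
    """Read the virtual content stored in TBL in association with each identified object."""
    result: dict[str, list[str]] = {}
    for oid in object_ids:
        rows = db.execute(
            "SELECT content FROM TBL WHERE object_id = ?", (oid,)
        ).fetchall()
        result[oid] = [r[0] for r in rows]  # one list of entries per identified object
    return result
```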
 Thus, according to the embodiment described above, the user captures the image P1 shown in FIG. 4A using the camera function of the user terminal 2 and transmits the captured image P1 to the virtual content storage server 1. The user then uses the user terminal 2 to input virtual content such as text, still images, or moving images for the object A shown in FIGS. 4B and 4C within the captured image P1 shown in FIG. 4A, and transmits the input virtual content to the virtual content storage server 1. The virtual content storage server 1 can thereby store the input virtual content in association with the object A shown in FIGS. 4B and 4C within the captured image P1 shown in FIG. 4A.
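 For symmetry with the viewing sketch above, the storing flow summarized here (steps S4 and S5) might be exercised from the terminal as follows; again the endpoint and field names are illustrative assumptions rather than anything prescribed by the embodiment.

```python
import json
import urllib.request

SERVER_URL = "https://example.com/virtual-content"  # hypothetical endpoint, as in the earlier sketch

def send_virtual_content(object_id: str, content: str, lat: float, lon: float) -> None:
    """Transmit input virtual content so the server can store it in TBL
    in association with the identified object (steps S4 and S5)."""
    payload = {
        "action": "store",
        "object_id": object_id,                # e.g. the framed object A
        "content": content,                    # text, a still-image reference, a movie reference, etc.
        "position": {"lat": lat, "lon": lon},
    }
    req = urllib.request.Request(
        SERVER_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req).close()
```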
 According to this embodiment, therefore, text, still images, moving images, and the like can be virtually left on the object A shown in FIGS. 4B and 4C within the captured image P1.
 Further, according to this embodiment, the user newly captures the image P1 shown in FIG. 5A using the camera function of the user terminal 2 and transmits the newly captured image P1 to the virtual content storage server 1. Based on the object A shown in FIG. 5B within the transmitted, newly captured image P1, the virtual content storage server 1 reads the virtual content such as text, still images, or moving images stored in the virtual content storage server 1 in association with the object A, and transmits the read virtual content to the user terminal 2. This makes it possible to share with other people the text, still images, moving images, and the like virtually left on the object A shown in FIGS. 4B and 4C.
 Although this embodiment shows an example in which the user terminal 2 has a GPS function, the GPS function is not essential. However, when the virtual content extraction unit 15 shown in FIG. 2 matches the object identified by the object specifying unit 13, similar objects such as temples and shrines may be stored in the table TBL (see FIG. 3), and the wrong object may then be matched. If the user terminal 2 has a GPS function, the position information acquired by that function can also be used in the matching, so that the intended object can be identified reliably. It is therefore preferable for the user terminal 2 to have a GPS function.
 In this embodiment, for ease of understanding, the position information of the object A shown in FIG. 3 is all shown as the same position information A. In practice, however, if the position (place) from which the image is captured with the camera function of the user terminal 2 differs, some error arises in the position information even when the same object A is captured. Therefore, when the virtual content extraction unit 15 uses position information to match the object identified by the object specifying unit 13, position information falling within a certain range (with this error range determined in advance) should be treated as identical.
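 One common way to realize "within a certain range" is a distance threshold on the GPS coordinates, for example using the haversine formula; the 50 m tolerance below is an assumed value, since the embodiment says only that the error range is determined in advance.

```python
import math

TOLERANCE_M = 50.0  # assumed error range; the embodiment leaves the value open

def same_position(lat1: float, lon1: float, lat2: float, lon2: float,
                  tol_m: float = TOLERANCE_M) -> bool:
    """Treat two GPS readings as the same position if their haversine
    distance (in meters) falls within the predetermined error range."""
    r = 6371000.0  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    dist = 2 * r * math.asin(math.sqrt(a))
    return dist <= tol_m
```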
 In this embodiment, an object is identified from the image P1 captured using the camera function of the user terminal 2 shown in FIGS. 4A and 5A; however, virtual content may instead be input for the captured image P1 as a whole, without identifying an object. Furthermore, although an example has been shown in which an object is identified using the object specifying unit 13 of the virtual content storage server 1, the method of identifying an object is not limited to this, and any method may be used. For example, identification need not be automatic; the user may identify the object manually. Also, although this embodiment shows an example in which the virtual content storage server 1 includes the object specifying unit 13, the function of the object specifying unit 13 may instead be provided in the application installed in advance on the user terminal 2 (see FIG. 1).
 Furthermore, in the example in which the image P2c shown in FIG. 5C is displayed on the user terminal 2, all the virtual content stored in association with the identified object is displayed; however, the display is not limited to this, and a filter function may be provided in the application installed in advance on the user terminal 2 (see FIG. 1) so that, for example, only ten entries are displayed.
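 Such a filter function might be as simple as the following client-side sketch; the sort key "timestamp" and the default limit of ten entries are assumptions taken from the example above, since the embodiment does not specify how entries would be ranked.

```python
def filter_latest(contents: list[dict], limit: int = 10) -> list[dict]:
    """Keep only the newest `limit` entries for display on the user terminal."""
    return sorted(contents, key=lambda c: c.get("timestamp", 0), reverse=True)[:limit]
```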
 This embodiment shows an example in which the virtual content storage server 1 is built as a single machine; the processing may, of course, be distributed and built across a plurality of machines.
 The virtual content storage system, usage examples, and screen examples described in this embodiment are merely examples, and the present invention is not limited to them.
 Such a virtual content storage system can be applied to the following uses. For example, virtual graffiti can be realized. Messages to family members and friends can be left on a specific object, and real-time communication with a variety of people can be realized around a specific object. The system can also be applied to games such as treasure hunts and exploration or battle games. Furthermore, virtual explanatory documents can be displayed on museum exhibits, and virtual introductory text can be displayed for landmarks at tourist sites. Virtual advertising copy and campaign information can be displayed for a given product, and virtual signboards can be displayed on buildings, landmarks at tourist sites, and the like.
DESCRIPTION OF SYMBOLS
1   Virtual content storage server
2   User terminal
10  Central control unit
11  User transmission information acquisition unit
12  Virtual content storage database
13  Object specifying unit
14  Virtual content input frame image generation unit
15  Virtual content extraction unit
TBL Table
N   Network

Claims (3)

  1.  A virtual content storage method in which a user terminal provided with imaging means and a virtual content storage server are connected via a network, the method comprising:
     a step in which a user images a predetermined object using the imaging means of the user terminal and transmits the imaged predetermined object to the virtual content storage server;
     a step in which the user inputs virtual content for the imaged predetermined object using the user terminal and transmits the input virtual content to the virtual content storage server; and
     a step of storing, at the virtual content storage server, the input virtual content in the virtual content storage server in association with the imaged predetermined object.
  2.  The virtual content storage method according to claim 1, further comprising:
     a step in which the user newly images a predetermined object using the imaging means of the user terminal and transmits the newly imaged predetermined object to the virtual content storage server; and
     a step in which the virtual content storage server reads, based on the transmitted newly imaged predetermined object, the virtual content stored in the virtual content storage server in association with the imaged predetermined object, and transmits the read virtual content to the user terminal.
  3.  The virtual content storage method according to claim 1, wherein the user terminal includes position information acquisition means capable of acquiring position information, and
     when the imaged predetermined object is transmitted to the virtual content storage server, the position information acquired by the position information acquisition means is also transmitted.

PCT/JP2017/032113 2016-12-23 2017-09-06 Virtual content storage method WO2018116538A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2016-250184 2016-12-23
JP2016250184A JP2018106307A (en) 2016-12-23 2016-12-23 Virtual content storage method

Publications (1)

Publication Number Publication Date
WO2018116538A1 true WO2018116538A1 (en) 2018-06-28

Family

ID=62626241

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2017/032113 WO2018116538A1 (en) 2016-12-23 2017-09-06 Virtual content storage method

Country Status (2)

Country Link
JP (1) JP2018106307A (en)
WO (1) WO2018116538A1 (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2004272592A (en) * 2003-03-07 2004-09-30 Matsushita Electric Ind Co Ltd Information processing system, server device, terminal equipment, information processing program for server device, and information processing program for terminal equipment
WO2011004608A1 (en) * 2009-07-09 2011-01-13 頓智ドット株式会社 System capable of displaying visibility information to which virtual information is added
JP2011146005A (en) * 2010-01-18 2011-07-28 Denso It Laboratory Inc Apparatus, system and method for constructing subject spot database
WO2011096561A1 (en) * 2010-02-08 2011-08-11 株式会社ニコン Imaging device, information acquisition system, and program
JP2014081770A (en) * 2012-10-16 2014-05-08 Sony Corp Terminal device, terminal control method, information processing device, information processing method and program


Also Published As

Publication number Publication date
JP2018106307A (en) 2018-07-05

Similar Documents

Publication Publication Date Title
CN108600632B (en) Photographing prompting method, intelligent glasses and computer readable storage medium
JP2013162487A (en) Image display apparatus and imaging apparatus
JP7209474B2 (en) Information processing program, information processing method and information processing system
EP3190581A1 (en) Interior map establishment device and method using cloud point
CN108765581B (en) Method and device for displaying label in virtual three-dimensional space
JP2006293912A (en) Information display system, information display method and portable terminal device
JP2011233005A (en) Object displaying device, system, and method
JP2022000795A (en) Information management device
KR20120126529A (en) ANALYSIS METHOD AND SYSTEM OF CORRELATION BETWEEN USERS USING Exchangeable Image File Format
KR101197126B1 (en) Augmented reality system and method of a printed matter and video
WO2011096343A1 (en) Photographic location recommendation system, photographic location recommendation device, photographic location recommendation method, and program for photographic location recommendation
JP2005244494A (en) Mobile communication terminal, control method thereof, program and remote control system by mail
WO2018146959A1 (en) System, information processing device, information processing method, program, and recording medium
KR20190047922A (en) System for sharing information using mixed reality
WO2018116538A1 (en) Virtual content storage method
KR102168110B1 (en) Camera system
KR20200100274A (en) Image Pattern MARK and Image Recognition System Using Augmented Reality
KR20130049220A (en) Method and the apparatus for the history culture digital guide by virture reality
KR20140018341A (en) Method, device and terminal equipment for message generation and processing
CN108092950B (en) AR or MR social method based on position
KR20160138823A (en) Method for providing information of traditional market using augmented reality
JP2011010157A (en) Video display system and video display method
KR101908991B1 (en) Apparatus for implementing augmented reality
JP5432000B2 (en) Information presentation system and program
JP5682170B2 (en) Display control apparatus, image distribution server, display terminal, image distribution system, and program

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17882659

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17882659

Country of ref document: EP

Kind code of ref document: A1