WO2021256239A1 - Navigation device, navigation system, navigation method, program, and storage medium - Google Patents


Info

Publication number
WO2021256239A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
visitor
information
specific area
terminal
Prior art date
Application number
PCT/JP2021/020829
Other languages
French (fr)
Japanese (ja)
Inventor
Kohei Kanazawa (金澤 浩平)
Yuya Kimoto (木元 雄也)
Yohei Mizutani (水谷 陽平)
Original Assignee
NEC Solution Innovators, Ltd. (Necソリューションイノベータ株式会社)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by NEC Solution Innovators, Ltd.
Priority to JP2022532472A (patent JP7294735B2)
Publication of WO2021256239A1

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/26 Navigation; Navigational instruments specially adapted for navigation in a road network
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00 Systems or methods specially adapted for specific business sectors, e.g. utilities or tourism
    • G06Q50/10 Services
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00 2D [Two Dimensional] image generation
    • G06T11/60 Editing figures and text; Combining figures or text
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B29/00 Maps; Plans; Charts; Diagrams, e.g. route diagram
    • G09B29/10 Map spot or coordinate position indicators; Map reading aids

Definitions

  • the present invention relates to a navigation device, a navigation system, a navigation method, a program, and a recording medium.
  • a technique for promoting a product or the like using AR (Augmented Reality) technology has been proposed (for example, Patent Document 1).
  • an object of the present invention is to provide a navigation device and a navigation method capable of transmitting information using AR and easily confirming the current position in a large facility such as an exhibition hall.
  • the navigation device of the present invention includes an imaging information acquisition unit, a virtual information generation unit, an image information output unit, and a map information management unit.
  • the imaging information acquisition unit acquires an image captured by a terminal of a visitor in a specific area.
  • the virtual information generation unit generates a virtual image relating to the specific area.
  • the image information output unit outputs a composite image in which the virtual image is superimposed on the captured image.
  • the map information management unit manages the position code arranged in the specific area and the map information of the specific area in association with each other. When the position code is imaged by the terminal of the visitor, the captured image of the position code is acquired by the image pickup information acquisition unit.
  • the map information management unit identifies the position of the terminal of the visitor from the image of the position code based on the map information.
  • the navigation method of the present invention includes an imaging information acquisition step, a virtual information generation step, an image information output step, and a map information management step.
  • in the imaging information acquisition step, an image captured by a terminal of a visitor in a specific area is acquired.
  • in the virtual information generation step, a virtual image relating to the specific area is generated.
  • in the image information output step, a composite image in which the virtual image is superimposed on the captured image is output.
  • in the map information management step, the position code arranged in the specific area and the map information of the specific area are managed in association with each other.
  • when the position code is imaged by the terminal of the visitor, the captured image of the position code is acquired in the imaging information acquisition step.
  • in the map information management step, the position of the terminal of the visitor is specified from the image of the position code based on the map information.
  • information can be transmitted by AR, and the current position of a visitor in a large facility or the like can be easily confirmed.
  • the current position can be confirmed by, for example, GPS (Global Positioning System), but since GPS relies on radio waves, there is a problem in confirming the position indoors.
  • since the present invention uses the position code, the position can be confirmed without any problem even indoors.
  • the present invention may be applied to outdoor position confirmation.
  • FIG. 1A is a block diagram showing a configuration of an example of the apparatus of the first embodiment
  • FIG. 1B is a schematic diagram showing a configuration of an example of the system of the fourth embodiment
  • FIG. 2 is a block diagram showing an example of the hardware configuration of the apparatus of the first embodiment.
  • FIG. 3 is a flowchart showing an example of processing in the apparatus of the first embodiment.
  • FIG. 4 is a schematic diagram showing an example of the relationship between the map of the exhibition hall and the position code.
  • FIG. 5 is a schematic diagram showing an example in which a terminal captures a position code.
  • FIG. 6 is a schematic diagram showing an example of displaying the position of the terminal on the display of the terminal of the visitor.
  • FIG. 7 is a schematic diagram showing an example of a composite image in which a virtual image relating to a specific area is superimposed on a captured image.
  • FIG. 8 is a schematic diagram showing an example of a composite image in which a virtual image relating to a specific area is superimposed on a captured image.
  • FIG. 9 is a schematic diagram showing an example of the relationship between the map of the exhibition hall and the direction of the position code.
  • FIG. 10 is a schematic diagram showing an example of the relationship between the orientation of the map of the exhibition hall and the orientation of the terminal.
  • FIG. 11 is a schematic diagram showing an example of the relationship between the tilt of the terminal and the image displayed on the display of the terminal.
  • the "visitor" means a person who comes within a specific area, and is also referred to as a user. A visitor may also be referred to as a guest, an attendee, or the like.
  • FIG. 1A is a block diagram showing a configuration of an example of the navigation device 10 of the present embodiment
  • FIG. 1B is a schematic diagram showing a configuration of an example of a navigation system including the navigation device 10 of the present embodiment.
  • the apparatus 10 includes an imaging information acquisition unit 11, a virtual information generation unit 12, an image information output unit 13, a map information management unit 14, a specific area information acquisition unit 15, and a visitor information acquisition unit 16.
  • the specific area information acquisition unit 15 and the visitor information acquisition unit 16 are optional and may be omitted from the present device 10.
  • the navigation system device will be described in the fourth embodiment.
  • the device 10 may be, for example, one device including the above-mentioned parts, or may be a device in which the above-mentioned parts can be connected via the communication network 30. Further, the present device 10 can be connected to an external device described later via the communication line network 30.
  • the communication network 30 is not particularly limited, and a known network can be used; it may be, for example, wired or wireless. Examples of the communication network 30 include an Internet line, the WWW (World Wide Web), a telephone line, a LAN (Local Area Network), a SAN (Storage Area Network), and DTN (Delay Tolerant Networking). Examples of wireless communication include WiFi (Wireless Fidelity) and Bluetooth (registered trademark).
  • the wireless communication may be either a form in which each device directly communicates (Ad Hoc communication) or an indirect communication via an access point.
  • the apparatus 10 may be incorporated in a server as a system, for example. Further, the apparatus 10 may be, for example, a personal computer (PC, for example, a desktop type or a notebook type) or a terminal (for example, a smartphone, a mobile phone, or a wearable terminal such as a glasses type or a watch type) on which the program of the present invention is installed.
  • FIG. 2 illustrates a block diagram of the hardware configuration of the present device 10.
  • the apparatus 10 includes, for example, a central processing unit (CPU, GPU, etc.) 101, a memory 102, a bus 103, a storage device 104, an input device 105, a display device 106, a communication device 107, and the like. Each part of the apparatus 10 is connected to each other via a bus 103 by each interface (I / F).
  • the central processing unit (CPU) 101 is responsible for controlling the entire device 10.
  • the program of the present invention and other programs are executed by the central processing unit 101, and various information is read and written.
  • the central processing unit 101 functions as the imaging information acquisition unit 11, the virtual information generation unit 12, the image information output unit 13, the map information management unit 14, the specific area information acquisition unit 15, and the visitor information acquisition unit 16.
  • the bus 103 can also be connected to, for example, an external device.
  • examples of the external device include an external storage device (an external database, etc.), a printer, and the like.
  • the device 10 can be connected to an external network (communication network) by, for example, a communication device 107 connected to the bus 103, and can also be connected to another device via the external network.
  • the memory 102 may be, for example, a main memory (main storage device).
  • the memory 102 loads various operation programs, such as the program of the present invention stored in the storage device 104 described later, and the central processing unit 101 receives data from the memory 102 and executes the programs.
  • the main memory is, for example, a RAM (random access memory).
  • the memory 102 may be, for example, a ROM (read-only memory).
  • the storage device 104 is also referred to as a so-called auxiliary storage device with respect to the main memory (main storage device), for example.
  • the storage device 104 stores an operation program including the program of the present invention.
  • the storage device 104 may be, for example, a combination of a recording medium and a drive for reading and writing to the recording medium.
  • the recording medium is not particularly limited, and may be an internal type or an external type; examples thereof include an HD (hard disk), CD-ROM, CD-R, CD-RW, MO, DVD, flash memory, and a memory card.
  • the storage device 104 may be, for example, a hard disk drive (HDD) in which a recording medium and a drive are integrated, or a solid state drive (SSD).
  • the memory 102 and the storage device 104 can also store user log information and information acquired from an external database (not shown).
  • the device 10 further includes, for example, an input device 105 and a display device 106.
  • the input device 105 is, for example, a touch panel, a keyboard, a mouse, or the like.
  • Examples of the display device 106 include an LED display, a liquid crystal display, and the like.
  • the navigation method of the present embodiment is carried out as follows, for example, by using the navigation device 10 of FIG.
  • the navigation method of the present embodiment is not limited to the use of the navigation device 10 of FIG.
  • the image pickup information acquisition unit 11 acquires an image captured by a terminal of a visitor in a specific area (S1).
  • the specific area is not particularly limited and may be indoors or outdoors.
  • examples of indoor areas include exhibition halls, event halls, department stores, and schools such as universities.
  • the terminal of the visitor is not particularly limited, and is, for example, a smartphone, a mobile phone, or a wearable terminal such as a spectacle type or a wristwatch type.
  • the captured image may be, for example, a still image or a moving image.
  • the acquisition is performed, for example, via the communication network 30.
  • the virtual information generation unit 12 generates a virtual image related to the specific area (S2).
  • the virtual image relating to the specific area is, for example, a pin image showing a booth position or the like.
  • the image information output unit 13 outputs a composite image in which the virtual image is superimposed on the captured image (S3).
  • the output may be executed, for example, via the communication network 30.
  • the map information management unit 14 manages the position code arranged in the specific area and the map information of the specific area in association with each other (S4).
  • the position code includes, for example, a QR code (registered trademark), an AR marker, a barcode, a chameleon code (registered trademark), and the like.
  • FIG. 4 shows an example of the relationship between the map of the exhibition hall and the location code.
  • the exhibition hall 1 will be used as the specific area, but it is an example and is not limited to this as described above.
  • a plurality of exhibition booths 2 are arranged in the exhibition hall 1. Further, in the exhibition hall 1, for example, position codes (QR code (registered trademark)) 3 are arranged at regular intervals (for example, 15 m intervals).
  • the arrangement of the exhibition booth 2 and the position code 3 is not particularly limited.
  • the position code 3 is, for example, a marker for specifying a position in the exhibition hall 1.
  • the number of position codes 3 is not particularly limited, and may be one or two or more.
  • the installation location of the position code 3 is not particularly limited; it may be, for example, a floor, a wall, or a ceiling, or a panel bearing the position code 3 may be hung in the air from the ceiling.
  • the position of the position code 3 installed in the exhibition hall 1 is associated with the map information (map of the virtual space). Further, the location of the exhibition booth 2 is also linked to the map information by, for example, the map information management unit 14.
  • the map information is, for example, coordinate information.
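As a concrete illustration of the association described above, the position codes and exhibition booths laid out in the hall can be registered against shared map coordinates, so that decoding a captured code yields the terminal's current position. This is a minimal sketch under assumed names (`MapInfo`, `register_code`, `locate`), not an implementation taken from the patent.

```python
class MapInfo:
    """Associates position codes and booths with map (coordinate) information."""

    def __init__(self):
        self.codes = {}   # code ID -> (x, y) map coordinates
        self.booths = {}  # booth name -> (x, y) map coordinates

    def register_code(self, code_id, x, y):
        self.codes[code_id] = (x, y)

    def register_booth(self, name, x, y):
        self.booths[name] = (x, y)

    def locate(self, decoded_code_id):
        # The visitor's terminal is assumed to be at the code it imaged.
        return self.codes.get(decoded_code_id)

# Codes laid out on a 15 m grid, as in the example above.
hall = MapInfo()
for i, (x, y) in enumerate([(0, 0), (15, 0), (0, 15), (15, 15)]):
    hall.register_code(f"QR-{i}", x, y)
hall.register_booth("A Corp.", 10, 5)

print(hall.locate("QR-2"))  # (0, 15)
```

Keeping codes and booths in the same coordinate frame is what later allows pins and routes to be derived from a single decoded code.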
  • FIG. 5 shows an example in which the terminal of the visitor captures the position code.
  • the visitor's terminal 20 captures, for example, a position code 3 arranged on the floor of the exhibition hall 1.
  • FIG. 6 shows an example of displaying the position (current position) of the specified terminal on the display of the visitor's terminal.
  • the apparatus 10 may transmit the position (that is, the current position) of the specified terminal 20 to the terminal 20 via, for example, the communication network 30.
  • the terminal 20 displays the current position of the terminal 20 in the specific area superimposed on the map on the display of the terminal 20.
  • the map displayed on the terminal 20 is, for example, a map of the exhibition hall 1, and is preferably a schematic map based on the coordinate information.
  • the "current position" means the position of the terminal of the visitor at the time when the position code is imaged.
  • the virtual information generation unit 12 may generate a virtual image related to the specific area based on the position of the terminal of the visitor specified by the map information management unit 14, for example.
  • FIG. 7 shows an example of a composite image in which a virtual image relating to the specific area is superimposed on the captured image. As shown in FIG. 7, the terminal 20 to which the composite image is output displays a composite image in which the pins 4a and 4b of the exhibition booths, which are virtual images, are superimposed on the captured image of the specific area captured by the terminal 20. In FIG. 7, the pin 4a of the exhibition booth is displayed larger than the pin 4b of the exhibition booth.
  • the size of the pin 4 of an exhibition booth may be determined, for example, based on the position information of the specified terminal 20: the farther the exhibition booth 2 is from the position of the terminal 20, the smaller its pin 4 may be displayed relative to the other pins 4, and the closer the exhibition booth 2 is, the larger its pin 4 may be displayed.
  • the outer edge of the pin 4 of an exhibition booth may be colored, for example, according to the distance from the position of the terminal 20 to the position of the exhibition booth 2. Identification information may be assigned to each pin 4 of the exhibition booths by the virtual information generation unit 12.
  • the identification information is, for example, a number, a character, a symbol, or a combination of two or more thereof.
  • the present device 10 can generate a virtual image such as a pin 4 of the exhibition booth according to the position of the specified terminal 20, for example.
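The distance-dependent pin sizing described above can be sketched as follows; the inverse-distance scaling rule and the constants are assumptions for illustration, not values from the patent.

```python
import math

def pin_size(terminal_pos, booth_pos, base=48, min_size=12):
    """Return a pin size (in pixels) that shrinks as the booth gets farther.

    The formula and the base/min constants are illustrative assumptions.
    """
    d = math.dist(terminal_pos, booth_pos)  # straight-line distance in metres
    return max(min_size, base / (1 + d / 10))

# A booth 5 m away gets a larger pin than one 40 m away.
near = pin_size((0, 0), (5, 0))
far = pin_size((0, 0), (40, 0))
```

Clamping to `min_size` keeps distant booths selectable on screen rather than letting their pins vanish entirely.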
  • the specific area information acquisition unit 15 may acquire information regarding the specific area (S7).
  • the step (S7) may be processed in parallel with the step (S1), for example, as shown in FIG.
  • the acquisition may be executed, for example, via the communication network 30, or may be acquired by input by the input device 105.
  • the information regarding the specific area is, for example, information about an exhibition booth or the like, and more specifically, information about a company name, company introduction information, products, services, or the like.
  • the virtual information generation unit 12 may generate a virtual image regarding the specific area based on the information regarding the specific area.
  • the visitor information acquisition unit 16 may acquire information about the visitor (S8).
  • the step (S8) may be processed in parallel with the step (S1), for example, as shown in FIG.
  • the acquisition may be executed, for example, via the communication network 30, or may be acquired by input by the input device 105.
  • the information about the visitors is, for example, the attributes of the visitors (gender, age, occupation), the purpose of the visit (inspection, business negotiations, etc.), the exhibition booth (area) of interest, and the like.
  • the image information output unit 13 outputs a composite image in which the virtual image is superimposed on the captured image according to the information about the visitor.
  • the virtual image includes recommendation information, schedule information, exhibition booth information, and the like.
  • the recommendation information is, for example, information recommended to the visitor based on the information about the visitor (for example, a recommended event, a recommended exhibition booth, etc.).
  • the schedule information is, for example, information indicating the schedule of an event to be held in the specific area. Examples of the event include demonstrations and seminars.
  • the event may be an event promoted by the organizer or an event promoted by the exhibitor.
  • the exhibition booth information is, for example, the same as the above-mentioned information regarding the exhibition booth.
  • the apparatus 10 may further include, for example, a route calculation unit.
  • the map information further includes, for example, the position information of an obstacle existing in the specific area.
  • the obstacle is not particularly limited, and is, for example, an object that obstructs passage such as a wall.
  • the route calculation unit calculates a route from the position of the terminal of the visitor to the input position based on the map information.
  • the input of the position may be input by, for example, acquiring from the terminal, or may be input by the input device 105.
  • the position is not particularly limited, and is, for example, the position of an exhibition booth or the like.
  • FIG. 8 shows an example of a composite image in which a virtual image relating to the specific area is superimposed on the captured image.
  • the visitor 6 selects a pin 4 (pin 4b in FIG. 8) of an arbitrary exhibition booth by tapping the display of the terminal 20.
  • the apparatus 10 acquires, for example, information indicating that the pin 4b of the exhibition booth is selected.
  • the route calculation unit calculates a route from the position (Start) of the visitor 6 to the position (Goal) of the exhibition booth on the map information associated with the pin 4b of the selected exhibition booth. Then, using the composite image output from the present device 10, the visitor 6 is guided to the position (Goal) of the exhibition booth corresponding to the pin 4b of the selected exhibition booth.
  • the composite image for guiding the visitor 6 is, for example, an image in which a guidance virtual object 5 such as an arrow or text is superimposed on the captured image. As a result, for example, even a person who is not good at reading a map can reach the destination without stress.
  • the composite image may be, for example, an image in which a map (2D object) (not shown) of the specific area is superimposed on the captured image in place of or in addition to the guidance virtual object 5.
  • the current position may be superimposed on the map.
  • since the position of the terminal is specified by using the position code 3 rather than GPS (Global Positioning System), the route can be calculated without depending on the environmental conditions of the specific area (such as being indoors or having a poor communication state).
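A route calculation of the kind described above might, for example, run a breadth-first search over a grid derived from the map information, treating obstacle cells (walls, etc.) as impassable. The grid encoding and the function name below are assumptions for illustration, not part of the patent.

```python
from collections import deque

def find_route(grid, start, goal):
    """Return a list of cells from start to goal, avoiding obstacle cells (1s)."""
    rows, cols = len(grid), len(grid[0])
    prev = {start: None}  # visited set doubling as a backtracking map
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:
            path = []
            while cell is not None:  # walk back from goal to start
                path.append(cell)
                cell = prev[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] == 0 and (nr, nc) not in prev:
                prev[(nr, nc)] = cell
                queue.append((nr, nc))
    return None  # no route exists

hall = [
    [0, 0, 0],
    [1, 1, 0],  # a wall blocks the direct path
    [0, 0, 0],
]
route = find_route(hall, (0, 0), (2, 0))
```

Breadth-first search finds a shortest route in steps; a weighted variant (e.g. A*) could prefer wider aisles if the map information carried such costs.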
  • FIG. 9 is a schematic diagram showing the relationship between the map of the exhibition hall 1 and the direction of the position code 3.
  • in the specific area, a direction mark for specifying the orientation of the terminal 20 may be arranged.
  • the direction mark may be, for example, an image whose direction (direction) can be confirmed on the map information, and may be a mark (code) different from the position code 3.
  • the position code 3 may also serve as the direction mark.
  • the shape of the marker in the position code 3 can be used to function as the direction mark.
  • the direction is, for example, north, south, east, and west.
  • the map information management unit 14 further manages the direction mark and the map information of the specific area (exhibition hall 1) in association with each other. Specifically, for example, each direction of the direction mark and each direction of the map information of the specific area (exhibition hall 1) are matched and managed.
  • the direction mark is imaged by the terminal 20, for example, the captured image of the direction mark (also referred to as a direction mark image) is acquired by the image pickup information acquisition unit 11.
  • the map information management unit 14 further specifies the orientation of the visitor's terminal, that is, which direction the terminal is facing, from the image of the direction mark based on the map information.
  • FIG. 10 is a schematic diagram showing the relationship between the orientation of the map of the exhibition hall 1 and the orientation of the terminal 20.
  • the virtual information generation unit 12 may generate a virtual image relating to the specific area, for example, based on the orientation of the visitor's terminal specified by the map information management unit 14. Then, the image information output unit 13 outputs, for example, a composite image in which the virtual image is superimposed on the captured image.
  • in FIG. 10A, the display of the terminal 20 displays a composite image in which the company name (A Corp.), which is a virtual image, is superimposed on a captured image of the exhibition booth of company A taken from an oblique direction.
  • in FIG. 10B, the display of the terminal 20 displays a composite image in which the company name (a virtual image) is superimposed on a captured image of the exhibition booth of company A taken from the front direction.
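One way the orientation specification described above could work: if the direction mark's north-pointing edge appears rotated in the captured image, the terminal's heading follows from that rotation angle. The function name, the coordinate convention, and the sign convention are assumptions for illustration.

```python
import math

def terminal_heading(mark_top, mark_bottom):
    """Estimate which compass direction the terminal faces.

    mark_top / mark_bottom: image coordinates (x, y) of the direction
    mark's north-pointing edge, with y growing downward as in image space.
    The mapping from mark rotation to heading is an illustrative assumption.
    """
    dx = mark_top[0] - mark_bottom[0]
    dy = mark_bottom[1] - mark_top[1]  # flip y to mathematical convention
    angle = math.degrees(math.atan2(dx, dy)) % 360  # 0 = north, 90 = east
    dirs = ["north", "east", "south", "west"]
    return dirs[round(angle / 90) % 4]

# The mark's north edge points straight "up" in the image -> facing north.
print(terminal_heading((50, 10), (50, 40)))  # north
```

Snapping to the four cardinal directions matches the north/south/east/west granularity mentioned above; the raw angle could of course be kept for finer alignment of the virtual image.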
  • when the floor of the specific area is imaged by the terminal 20, the imaging information acquisition unit 11 acquires the captured image of the floor (also referred to as a floor image). Then, the map information management unit 14 identifies the inclination of the visitor's terminal from the floor image.
  • the floor in the specific area is preferably flat. Specifically, for example, a flat plane is extended over the real space shown in the floor image, and the tilt of the terminal is specified from the tilt of that plane (also referred to as a plane scan). That is, the inclination of the terminal 20 can be specified from the inclination of the floor image.
  • the floor image may include, for example, at least one of the position code and the direction mark.
  • the map information management unit 14 may specify the inclination of the terminal 20 from, for example, the inclination of at least one of the position code and the direction mark.
  • FIG. 11 is a schematic diagram showing an example of the relationship between the tilt of the terminal 20 and the image displayed on the display of the terminal 20.
  • a composite image as shown in FIG. 11A is displayed on the display of the terminal 20. However, depending on the tilt of the terminal 20, a tilted (distorted) composite image is displayed on the display of the terminal 20.
  • the image shown in FIG. 11A is a composite image in which the company name (A Corp.), which is a virtual image, is superimposed on the captured image obtained by capturing the exhibition booth of company A from the front direction.
  • the image shown in FIG. 11(b) is a composite image in which the company name (A Corp.), which is a virtual image, is superimposed on the captured image obtained by capturing the exhibition booth of company A from above.
  • the apparatus 10 may correct the captured image based on the specified inclination of the terminal 20, for example. Then, the image information output unit 13 outputs, for example, a composite image in which the virtual image is superimposed on the corrected captured image. That is, for example, even when the terminal 20 is specified to be tilted, the present device 10 corrects the image so that the composite image shown in FIG. 11(a), not the composite image shown in FIG. 11(b), is displayed.
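The tilt identification described above can be sketched with a very simple model: a position code printed as a square appears foreshortened when the terminal is tilted, so the tilt angle can be estimated from the ratio of the code's apparent height to its width. This pure-pitch, square-marker model is an assumption for illustration, not the patent's method.

```python
import math

def estimate_tilt(apparent_width, apparent_height):
    """Return the terminal's tilt from head-on, in degrees.

    Assumes a square marker foreshortened only along its height
    (pure pitch), so cos(tilt) = apparent_height / apparent_width.
    """
    ratio = min(apparent_height / apparent_width, 1.0)
    return math.degrees(math.acos(ratio))

# A square code imaged head-on keeps its aspect ratio: no tilt.
assert round(estimate_tilt(100, 100)) == 0
# Half the apparent height corresponds to about 60 degrees of tilt.
assert round(estimate_tilt(100, 50)) == 60
```

In a real plane scan the full marker quadrilateral would be used (a homography rather than a single ratio), which also lets the captured image be un-distorted before the virtual image is superimposed, as in the correction from FIG. 11(b) to FIG. 11(a).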
  • FIG. 1B is a schematic diagram showing a configuration of an example of the navigation system of the present embodiment.
  • the navigation system of the present embodiment includes the navigation device 10 according to any one of the first to third embodiments, and the terminal 20.
  • the terminal 20 is a terminal of a visitor in the specific area.
  • the navigation device 10 and the terminal 20 can communicate with each other via the communication network 30.
  • the navigation system is also referred to as a navigation system device.
  • the terminal 20 activates the image pickup device and prompts the user (for example, a visitor) to take a picture of the position code.
  • the user searches for the position code arranged in the specific area, and captures the position code on the image pickup device.
  • the position code is read by the navigation device 10, and the position of the user's terminal 20 is specified.
  • the orientation of the terminal 20 is also specified, for example. Specifically, for example, after the orientation of the terminal 20 is specified, the position of the terminal 20 is specified.
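The device-terminal exchange above can be summarized in a minimal sketch: the terminal captures a position code, the navigation device decodes it and returns the position, and the terminal records it as the current position. The class and method names are illustrative, and image transport/decoding over the network is elided.

```python
class NavigationDevice:
    """Server side: maps decoded position codes to map coordinates."""

    def __init__(self, code_positions):
        self.code_positions = code_positions  # code ID -> (x, y)

    def handle_capture(self, decoded_code_id):
        return self.code_positions.get(decoded_code_id)

class Terminal:
    """Visitor side: sends captured codes and keeps the current position."""

    def __init__(self, device):
        self.device = device
        self.current_position = None

    def capture_position_code(self, decoded_code_id):
        # In practice the captured image travels over the communication
        # network 30 and is decoded device-side; here the decoded ID is
        # passed directly for brevity.
        self.current_position = self.device.handle_capture(decoded_code_id)
        return self.current_position

device = NavigationDevice({"QR-0": (0, 0), "QR-1": (15, 0)})
terminal = Terminal(device)
print(terminal.capture_position_code("QR-1"))  # (15, 0)
```

Once the current position is held on the terminal, it can be superimposed on the schematic map and used as the start point for route guidance, as described in the earlier embodiments.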
  • the program of the present embodiment is a program for causing a computer to execute each step of the method of the present invention as a procedure.
  • "procedure” may be read as "processing”.
  • the program of the present embodiment may be recorded on a computer-readable recording medium, for example.
  • the recording medium is, for example, a non-transitory computer-readable storage medium.
  • the recording medium is not particularly limited, and examples thereof include a read-only memory (ROM), a hard disk (HD), and an optical disk.
  • (Appendix 1) A navigation device including an imaging information acquisition unit, a virtual information generation unit, an image information output unit, and a map information management unit.
  • the imaging information acquisition unit acquires an image captured by a terminal of a visitor in a specific area.
  • the virtual information generation unit generates a virtual image relating to the specific area.
  • the image information output unit outputs a composite image in which the virtual image is superimposed on the captured image.
  • the map information management unit manages the position code arranged in the specific area and the map information of the specific area in association with each other.
  • the map information management unit identifies the position of the visitor's terminal from the image of the position code based on the map information.
  • Navigation device (Appendix 2)
  • the virtual information generation unit generates a virtual image regarding the specific area based on the position of the terminal of the visitor specified by the map information management unit.
  • the navigation device according to Appendix 1.
  • (Appendix 3) In the specific area, a direction mark for specifying the orientation of the visitor's terminal is arranged.
  • The map information management unit manages the direction mark and the map information of the specific area in association with each other.
  • The map information management unit identifies the orientation of the visitor's terminal from the image of the direction mark based on the map information.
  • The navigation device according to Appendix 1 or 2.
  • (Appendix 4) The virtual information generation unit generates a virtual image relating to the specific area based on the orientation of the visitor's terminal specified by the map information management unit.
  • The navigation device according to Appendix 3.
  • (Appendix 5) The position code also serves as the direction mark.
  • The navigation device according to Appendix 3 or 4.
  • (Appendix 6) When the floor of the specific area is imaged by the visitor's terminal, the image of the imaged floor is acquired by the imaging information acquisition unit.
  • The map information management unit identifies the inclination of the visitor's terminal from the image of the floor.
  • The navigation device according to any one of Appendices 1 to 5.
  • (Appendix 7) The image of the floor is an image including at least one of the position code and the direction mark.
  • The navigation device according to Appendix 6.
  • (Appendix 8) Further includes a specific area information acquisition unit.
  • The specific area information acquisition unit acquires information about the specific area.
  • The virtual information generation unit generates a virtual image relating to the specific area based on the information about the specific area.
  • The navigation device according to any one of Appendices 1 to 7.
  • (Appendix 9) Further includes a visitor information acquisition unit.
  • The visitor information acquisition unit acquires information about the visitor.
  • The image information output unit outputs a composite image in which the virtual image is superimposed on the captured image according to the information about the visitor.
  • The navigation device according to any one of Appendices 1 to 8.
  • (Appendix 10) Includes a navigation device and a terminal.
  • The navigation device is the navigation device according to any one of Appendices 1 to 9.
  • The terminal is a terminal of a visitor in the specific area.
  • The navigation device and the terminal can communicate with each other via a communication network. Navigation system.
  • (Appendix 11) Includes an imaging information acquisition step, a virtual information generation step, an image information output step, and a map information management step.
  • In the imaging information acquisition step, an image captured by a terminal of a visitor in a specific area is acquired.
  • In the virtual information generation step, a virtual image relating to the specific area is generated.
  • In the image information output step, a composite image in which the virtual image is superimposed on the captured image is output.
  • In the map information management step, the position code arranged in the specific area and the map information of the specific area are managed in association with each other.
  • When the position code is imaged by the visitor's terminal, the image of the imaged position code is acquired in the imaging information acquisition step.
  • In the map information management step, the position of the visitor's terminal is identified from the image of the position code based on the map information. Navigation method.
  • (Appendix 12) In the virtual information generation step, a virtual image relating to the specific area is generated based on the position of the visitor's terminal specified in the map information management step.
  • The navigation method according to Appendix 11.
  • (Appendix 13) In the specific area, a direction mark for specifying the orientation of the visitor's terminal is arranged.
  • In the map information management step, the direction mark and the map information of the specific area are managed in association with each other.
  • When the direction mark is imaged by the visitor's terminal, the image of the imaged direction mark is acquired in the imaging information acquisition step.
  • In the map information management step, the orientation of the visitor's terminal is identified from the image of the direction mark based on the map information.
  • The navigation method according to Appendix 11 or 12.
  • (Appendix 14) In the virtual information generation step, a virtual image relating to the specific area is generated based on the orientation of the visitor's terminal specified in the map information management step.
  • The navigation method according to Appendix 13.
  • (Appendix 15) The position code also serves as the direction mark.
  • The navigation method according to Appendix 13 or 14.
  • (Appendix 16) When the floor of the specific area is imaged by the visitor's terminal, the image of the imaged floor is acquired in the imaging information acquisition step.
  • In the map information management step, the inclination of the visitor's terminal is identified from the image of the floor.
  • The navigation method according to any one of Appendices 11 to 15.
  • (Appendix 17) The image of the floor is an image including at least one of the position code and the direction mark.
  • The navigation method according to Appendix 16.
  • (Appendix 18) Further includes a specific area information acquisition step.
  • In the specific area information acquisition step, information about the specific area is acquired.
  • In the virtual information generation step, a virtual image relating to the specific area is generated based on the information about the specific area.
  • The navigation method according to any one of Appendices 11 to 17.
  • (Appendix 19) Further includes a visitor information acquisition step.
  • In the visitor information acquisition step, information about the visitor is acquired.
  • In the image information output step, a composite image in which the virtual image is superimposed on the captured image is output according to the information about the visitor.
  • The navigation method according to any one of Appendices 11 to 18.
  • (Appendix 20) A program for causing a computer to execute an imaging information acquisition procedure, a virtual information generation procedure, an image information output procedure, and a map information management procedure.
  • In the imaging information acquisition procedure, an image captured by a terminal of a visitor in a specific area is acquired.
  • In the virtual information generation procedure, a virtual image relating to the specific area is generated.
  • In the image information output procedure, a composite image in which the virtual image is superimposed on the captured image is output.
  • In the map information management procedure, the position code arranged in the specific area and the map information of the specific area are managed in association with each other. When the position code is imaged by the visitor's terminal, the image of the imaged position code is acquired in the imaging information acquisition procedure.
  • In the map information management procedure, the position of the visitor's terminal is identified from the image of the position code based on the map information.
  • (Appendix 21) In the virtual information generation procedure, a virtual image relating to the specific area is generated based on the position of the visitor's terminal specified in the map information management procedure.
  • The program according to Appendix 20.
  • (Appendix 22) In the specific area, a direction mark for specifying the orientation of the visitor's terminal is arranged.
  • In the map information management procedure, the direction mark and the map information of the specific area are managed in association with each other. When the direction mark is imaged by the visitor's terminal, the image of the imaged direction mark is acquired in the imaging information acquisition procedure.
  • In the map information management procedure, the orientation of the visitor's terminal is identified from the image of the direction mark based on the map information.
  • The program according to Appendix 20 or 21.
  • (Appendix 23) In the virtual information generation procedure, a virtual image relating to the specific area is generated based on the orientation of the visitor's terminal specified in the map information management procedure.
  • The program according to Appendix 22.
  • (Appendix 24) The position code also serves as the direction mark.
  • The program according to Appendix 22 or 23.
  • (Appendix 25) When the floor of the specific area is imaged by the visitor's terminal, the image of the imaged floor is acquired in the imaging information acquisition procedure.
  • In the map information management procedure, the inclination of the visitor's terminal is identified from the image of the floor.
  • The program according to any one of Appendices 20 to 24.
  • (Appendix 26) The image of the floor is an image including at least one of the position code and the direction mark.
  • The program according to Appendix 25.
  • (Appendix 27) Further causes the computer to execute a specific area information acquisition procedure.
  • In the specific area information acquisition procedure, information about the specific area is acquired.
  • In the virtual information generation procedure, a virtual image relating to the specific area is generated based on the information about the specific area.
  • The program according to any one of Appendices 20 to 26.
  • (Appendix 28) Further causes the computer to execute a visitor information acquisition procedure.
  • In the visitor information acquisition procedure, information about the visitor is acquired.
  • In the image information output procedure, a composite image in which the virtual image is superimposed on the captured image is output according to the information about the visitor.
  • The program according to any one of Appendices 20 to 27.
  • (Appendix 29) A computer-readable recording medium recording the program according to any one of Appendices 20 to 28.
  • According to the present invention, it is possible to transmit information using AR in a large facility such as an exhibition hall, and the current position can be easily confirmed. Therefore, the present invention is useful in various fields to which AR technology can be applied.

Abstract

Provided is a navigation device which can provide information using AR and which makes it possible to easily confirm one's present location in a large-scale facility such as an exhibition site. In a navigation device (10) according to the present invention, a captured image information acquisition unit (11) acquires an image captured by a terminal of a visitor in a specific area, a virtual information generation unit (12) generates a virtual image related to the specific area, an image information output unit (13) outputs a composite image in which the virtual image is superimposed on the captured image, and a map information management unit (14) manages a location code provided to the specific area and map information pertaining to the specific area while associating the location code with the map information. When an image of the location code is captured by the terminal, the captured location code image is acquired by the captured image information acquisition unit (11), and the map information management unit (14) identifies the location of the visitor's terminal from the location code image on the basis of the map information.

Description

Navigation device, navigation system, navigation method, program, and recording medium
The present invention relates to a navigation device, a navigation system, a navigation method, a program, and a recording medium.
In recent years, techniques for promoting products and the like using AR (Augmented Reality) technology have been proposed (for example, Patent Document 1).
Japanese Unexamined Patent Publication No. 2018-147137
Recently, exhibitions are frequently held, and products are often advertised at exhibition halls. For this reason, promoting products and disseminating information using AR at an exhibition hall is conceivable. On the other hand, since an exhibition hall is large and has many exhibition booths, visitors need to find the desired booth while checking their current location. At present, however, the current location must be confirmed using a map installed at the venue (a venue guide map or the like), and visitors must go to the place where the map is installed every time they want to confirm their location, which is inconvenient. This problem is not limited to exhibition halls; it also arises in large facilities such as event venues, department stores, schools such as universities, large hospitals, and shopping malls.
Therefore, an object of the present invention is to provide a navigation device and a navigation method capable of transmitting information using AR in a large facility such as an exhibition hall and of easily confirming the current position.
In order to achieve the above object, the navigation device of the present invention
includes an imaging information acquisition unit, a virtual information generation unit, an image information output unit, and a map information management unit, wherein
the imaging information acquisition unit acquires an image captured by a terminal of a visitor in a specific area,
the virtual information generation unit generates a virtual image relating to the specific area,
the image information output unit outputs a composite image in which the virtual image is superimposed on the captured image,
the map information management unit manages the position code arranged in the specific area and the map information of the specific area in association with each other,
when the position code is imaged by the visitor's terminal, the image of the imaged position code is acquired by the imaging information acquisition unit, and
the map information management unit identifies the position of the visitor's terminal from the image of the position code based on the map information.
The navigation method of the present invention
includes an imaging information acquisition step, a virtual information generation step, an image information output step, and a map information management step, wherein
in the imaging information acquisition step, an image captured by a terminal of a visitor in a specific area is acquired,
in the virtual information generation step, a virtual image relating to the specific area is generated,
in the image information output step, a composite image in which the virtual image is superimposed on the captured image is output,
in the map information management step, the position code arranged in the specific area and the map information of the specific area are managed in association with each other,
when the position code is imaged by the visitor's terminal, the image of the imaged position code is acquired in the imaging information acquisition step, and
in the map information management step, the position of the visitor's terminal is identified from the image of the position code based on the map information.
According to the present invention, information can be transmitted by AR, and the current position of a visitor in a large facility or the like can be easily confirmed. The current position can also be confirmed by, for example, GPS (Global Positioning System); however, since GPS uses radio waves, there is a problem with confirming the position indoors. Because the present invention uses the position code, the position can be confirmed without any problem even indoors. The present invention may also be applied to outdoor position confirmation.
FIG. 1A is a block diagram showing the configuration of an example of the device of Embodiment 1, and FIG. 1B is a schematic diagram showing the configuration of an example of the system of Embodiment 4.
FIG. 2 is a block diagram showing an example of the hardware configuration of the device of Embodiment 1.
FIG. 3 is a flowchart showing an example of processing in the device of Embodiment 1.
FIG. 4 is a schematic diagram showing an example of the relationship between the map of the exhibition hall and the position codes.
FIG. 5 is a schematic diagram showing an example in which a terminal images a position code.
FIG. 6 is a schematic diagram showing an example of displaying the position of the terminal on the display of a visitor's terminal.
FIG. 7 is a schematic diagram showing an example of a composite image in which a virtual image relating to a specific area is superimposed on a captured image.
FIG. 8 is a schematic diagram showing an example of a composite image in which a virtual image relating to a specific area is superimposed on a captured image.
FIG. 9 is a schematic diagram showing an example of the relationship between the map of the exhibition hall and the directions of the position codes.
FIG. 10 is a schematic diagram showing an example of the relationship between the orientation of the map of the exhibition hall and the orientation of the terminal.
FIG. 11 is a schematic diagram showing an example of the relationship between the tilt of the terminal and the image displayed on the display of the terminal.
In the present invention, a "visitor" means a person who comes into a specific area, and is also referred to as a user. A visitor may also be referred to as, for example, a museum visitor or a school visitor, depending on the facility.
Embodiments of the present invention will be described with reference to the drawings. The present invention is not limited to the following embodiments. In the following figures, identical parts are denoted by identical reference numerals. The descriptions of the embodiments may be applied to one another unless otherwise noted, and the configurations of the embodiments may be combined unless otherwise noted.
[Embodiment 1]
FIG. 1A is a block diagram showing the configuration of an example of the navigation device 10 of the present embodiment, and FIG. 1B is a schematic diagram showing the configuration of an example of a navigation system including the navigation device 10 of the present embodiment. As shown in FIG. 1A, the device 10 includes an imaging information acquisition unit 11, a virtual information generation unit 12, an image information output unit 13, a map information management unit 14, a specific area information acquisition unit 15, and a visitor information acquisition unit 16. The specific area information acquisition unit 15 and the visitor information acquisition unit 16 are optional components and need not be included in the device 10. The navigation system is described in Embodiment 4.
The device 10 may be, for example, a single device including the above units, or a device whose units are connectable via the communication network 30. The device 10 can also be connected to an external device, described later, via the communication network 30. The communication network 30 is not particularly limited; a known network can be used, and it may be wired or wireless. Examples of the communication network 30 include an Internet line, the WWW (World Wide Web), a telephone line, a LAN (Local Area Network), a SAN (Storage Area Network), and DTN (Delay Tolerant Networking). Examples of wireless communication include WiFi (Wireless Fidelity) and Bluetooth (registered trademark). The wireless communication may be either direct communication between devices (ad hoc communication) or indirect communication via an access point. The device 10 may be incorporated into a server as a system, for example. The device 10 may also be, for example, a personal computer (PC, e.g., desktop or notebook) or a terminal (e.g., a smartphone, a mobile phone, or a wearable terminal such as a glasses type or wristwatch type) on which the program of the present invention is installed.
FIG. 2 illustrates a block diagram of the hardware configuration of the device 10. The device 10 includes, for example, a central processing unit (CPU, GPU, etc.) 101, a memory 102, a bus 103, a storage device 104, an input device 105, a display device 106, and a communication device 107. The units of the device 10 are connected to one another via the bus 103 through their respective interfaces (I/F).
The central processing unit 101 is responsible for controlling the entire device 10. In the device 10, the central processing unit 101 executes, for example, the program of the present invention and other programs, and reads and writes various information. Specifically, the central processing unit 101 functions as the imaging information acquisition unit 11, the virtual information generation unit 12, the image information output unit 13, the map information management unit 14, the specific area information acquisition unit 15, and the visitor information acquisition unit 16.
The bus 103 can also be connected to an external device, for example. Examples of the external device include an external storage device (such as an external database) and a printer. The device 10 can be connected to an external network (communication network) by, for example, the communication device 107 connected to the bus 103, and can also be connected to other devices via the external network.
Examples of the memory 102 include a main memory (main storage device). When the central processing unit 101 performs processing, the memory 102 reads various operation programs, such as the program of the present invention stored in the storage device 104 described later, and the central processing unit 101 receives data from the memory 102 and executes the program. The main memory is, for example, a RAM (random access memory). The memory 102 may also be, for example, a ROM (read-only memory).
The storage device 104 is also referred to as a so-called auxiliary storage device relative to the main memory (main storage device). As described above, the storage device 104 stores operation programs including the program of the present invention. The storage device 104 may be, for example, a combination of a recording medium and a drive that reads from and writes to the recording medium. The recording medium is not particularly limited; it may be internal or external, and examples include an HD (hard disk), CD-ROM, CD-R, CD-RW, MO, DVD, flash memory, and memory card. The storage device 104 may also be, for example, a hard disk drive (HDD) in which a recording medium and a drive are integrated, or a solid state drive (SSD).
In the device 10, the memory 102 and the storage device 104 can also store the user's log information and information acquired from an external database (not shown).
The device 10 further includes, for example, the input device 105 and the display device 106. The input device 105 is, for example, a touch panel, a keyboard, or a mouse. Examples of the display device 106 include an LED display and a liquid crystal display.
Next, an example of the navigation method of the present embodiment will be described based on the flowchart of FIG. 3. The navigation method of the present embodiment is carried out, for example, using the navigation device 10 of FIG. 1 as follows. Note that the navigation method of the present embodiment is not limited to the use of the navigation device 10 of FIG. 1.
First, the imaging information acquisition unit 11 acquires an image captured by a terminal of a visitor in a specific area (S1). The specific area is not particularly limited and may be indoors or outdoors. Examples of indoor areas include an exhibition hall, an event venue, a department store, and a school such as a university. The visitor's terminal is not particularly limited, and is, for example, a smartphone, a mobile phone, or a wearable terminal such as a glasses type or wristwatch type. The captured image may be, for example, a still image or a moving image. The acquisition is performed, for example, via the communication network 30.
Next, the virtual information generation unit 12 generates a virtual image relating to the specific area (S2). The virtual image relating to the specific area is, for example, a pin image indicating a booth position.
Next, the image information output unit 13 outputs a composite image in which the virtual image is superimposed on the captured image (S3). The output may be executed, for example, via the communication network 30.
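The superimposition in step S3 can be illustrated with a small sketch. The following is a hypothetical alpha-blending example, not the patent's actual implementation; the image representation (nested lists of RGB tuples) and all names are invented for illustration.

```python
# Hypothetical sketch of step S3: blend a virtual overlay (e.g., a booth
# pin) onto a captured frame using a per-pixel alpha mask.
# Images are represented as 2D lists of (R, G, B) tuples for illustration.

def composite(captured, overlay, alpha_mask):
    """Return a composite image: overlay blended onto captured via alpha_mask.

    alpha_mask holds values in [0.0, 1.0]; 0 keeps the captured pixel,
    1 replaces it with the overlay pixel.
    """
    out = []
    for y, row in enumerate(captured):
        out_row = []
        for x, (r, g, b) in enumerate(row):
            a = alpha_mask[y][x]
            orr, og, ob = overlay[y][x]
            out_row.append((
                round(r * (1 - a) + orr * a),
                round(g * (1 - a) + og * a),
                round(b * (1 - a) + ob * a),
            ))
        out.append(out_row)
    return out

# A 2x2 gray captured frame and an overlay with one opaque red "pin" pixel.
captured = [[(100, 100, 100), (100, 100, 100)],
            [(100, 100, 100), (100, 100, 100)]]
overlay  = [[(255, 0, 0), (0, 0, 0)],
            [(0, 0, 0), (0, 0, 0)]]
mask     = [[1.0, 0.0],
            [0.0, 0.0]]

result = composite(captured, overlay, mask)
print(result[0][0])  # the pin pixel replaces the captured pixel: (255, 0, 0)
print(result[1][1])  # untouched captured pixel: (100, 100, 100)
```

A production system would of course composite with an imaging library or a GPU rather than pixel-by-pixel Python loops; the sketch only shows the blending rule.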
Next, the map information management unit 14 manages the position code arranged in the specific area and the map information of the specific area in association with each other (S4). Examples of the position code include a QR code (registered trademark), an AR marker, a barcode, and a Chameleon Code (registered trademark). FIG. 4 shows an example of the relationship between the map of the exhibition hall and the position codes. Hereinafter, the exhibition hall 1 is used as the specific area, but this is an example and, as described above, is not a limitation. A plurality of exhibition booths 2 are arranged in the exhibition hall 1. In the exhibition hall 1, position codes (QR codes (registered trademark)) 3 are arranged, for example, at regular intervals (e.g., 15 m intervals). The arrangement of the exhibition booths 2 and the position codes 3 is not particularly limited. The position code 3 is, for example, a marker for specifying a position in the exhibition hall 1. The number of position codes 3 is not particularly limited, and may be one, or two or more. The installation location of the position code 3 is not particularly limited; it may be, for example, a floor, a wall, or a ceiling, and a panel bearing the position code 3 may be installed by hanging it from the ceiling. As shown in FIG. 4, the positions of the position codes 3 installed in the exhibition hall 1 are associated with the map information (a map of the virtual space). The positions of the exhibition booths 2 are also associated with the map information, for example, by the map information management unit 14. The map information is, for example, coordinate information.
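The association managed in step S4 can be sketched as a simple lookup table from position-code identifiers to map coordinates. All identifiers, coordinates, and the 15 m grid below are illustrative assumptions, not values from the patent.

```python
# Hypothetical sketch of step S4: associating position codes and booth
# positions with the venue's map information (coordinate information).

venue_map = {
    "position_codes": {},  # code payload -> (x, y) in map coordinates [m]
    "booths": {},          # booth name   -> (x, y) in map coordinates [m]
}

def register_position_code(code_id, x, y):
    venue_map["position_codes"][code_id] = (x, y)

def register_booth(name, x, y):
    venue_map["booths"][name] = (x, y)

# Position codes laid out on a regular grid (e.g., 15 m intervals).
for i in range(3):
    for j in range(3):
        register_position_code(f"PC-{i}-{j}", i * 15.0, j * 15.0)

register_booth("Booth A", 7.5, 22.5)

print(venue_map["position_codes"]["PC-1-2"])  # (15.0, 30.0)
```

Once such a table exists, identifying the terminal's position reduces to a dictionary lookup keyed by the decoded code payload.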
 次に、撮像情報取得部11により、前記来場者の端末で前記位置コードが撮像されると、撮像された前記位置コードの画像(位置コード画像ともいう)が取得される(S5)。図5に、前記来場者の端末が前記位置コードを撮像する一例を示す。図5に示すように、来場者の端末20は、例えば、展示会場1の床に配置された位置コード3を撮像する。 Next, when the position code is imaged by the visitor's terminal, the image pickup information acquisition unit 11 acquires the captured image of the position code (also referred to as a position code image) (S5). FIG. 5 shows an example in which the visitor's terminal captures the position code. As shown in FIG. 5, the visitor's terminal 20 captures, for example, the position code 3 arranged on the floor of the exhibition hall 1.
 次に、マップ情報管理部14により、前記マップ情報に基づき前記位置コードの画像から前記来場者の端末の位置を特定し(S6)、終了する(END)。来場者の端末のディスプレイに、特定した端末の位置(現在位置)を表示する一例を図6に示す。本装置10は、例えば、通信回線網30を介して、特定した端末20の位置(すなわち、現在位置)を端末20に送信してもよい。そして、端末20は、例えば、図6に示すように、端末20のディスプレイに特定エリア内における端末20の現在位置がマップ上に重畳して表示される。図6に示すように、端末20上に表示されるマップは、例えば、展示会場1のマップであり、座標情報に基づいて模式化されていることが好ましい。なお、ここで、「現在位置」とは、前記位置コードを撮像した時点における前記来場者の端末の位置をいう。 Next, the map information management unit 14 identifies the position of the visitor's terminal from the image of the position code based on the map information (S6), and the process ends (END). FIG. 6 shows an example of displaying the identified terminal position (current position) on the display of the visitor's terminal. The apparatus 10 may transmit the identified position (that is, the current position) of the terminal 20 to the terminal 20 via, for example, the communication network 30. Then, as shown in FIG. 6, for example, the current position of the terminal 20 within the specific area is displayed on the display of the terminal 20, superimposed on the map. As shown in FIG. 6, the map displayed on the terminal 20 is, for example, a map of the exhibition hall 1, and is preferably schematized based on the coordinate information. Here, the "current position" refers to the position of the visitor's terminal at the time when the position code was imaged.
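 Although the specification does not disclose an implementation, the association of steps S4 to S6, in which position codes are linked to map coordinates and a scanned code is resolved to the terminal's current position, can be illustrated by the following minimal Python sketch (the class and method names are hypothetical, not taken from the specification):

```python
# Hypothetical sketch of steps S4-S6: a map information manager that links
# position-code IDs to coordinates on the exhibition-hall map (S4) and
# resolves a decoded code image to the terminal's current position (S6).

class MapInfoManager:
    def __init__(self):
        self.code_positions = {}   # position-code ID -> (x, y) map coordinate
        self.booth_positions = {}  # booth name -> (x, y) map coordinate

    def register_code(self, code_id, xy):
        """S4: associate a position code placed in the hall with the map info."""
        self.code_positions[code_id] = xy

    def register_booth(self, name, xy):
        """Associate an exhibition booth with the map info."""
        self.booth_positions[name] = xy

    def locate_terminal(self, decoded_code_id):
        """S6: the terminal's position is the map position of the scanned code."""
        return self.code_positions[decoded_code_id]

mgr = MapInfoManager()
mgr.register_code("QR-001", (0.0, 0.0))
mgr.register_code("QR-002", (15.0, 0.0))   # codes placed at 15 m intervals
mgr.register_booth("A Corp.", (20.0, 5.0))
print(mgr.locate_terminal("QR-002"))  # -> (15.0, 0.0)
```

 In this model the positioning accuracy is limited by the spacing of the codes; the decoded code ID is assumed to come from an ordinary QR-code reader on the terminal.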
 仮想情報生成部12は、例えば、マップ情報管理部14が特定した前記来場者の端末の位置に基づき、前記特定エリアに関する仮想画像を生成してもよい。図7に、前記特定エリアに関する仮想画像を前記撮像画像に重畳した合成画像の一例を示す。図7に示すように、前記合成画像が出力された端末20には、端末20が撮像した特定エリアの撮像画像上に前記仮想画像である出展ブースのピン4a及び4bが重畳された合成画像が表示される。図7において、出展ブースのピン4aは、出展ブースのピン4bよりも大きく表示されている。このように、出展ブースのピン4は、例えば、特定した端末20の位置情報に基づき、端末20の位置情報から展示ブース2の位置が離れていれば離れているほど、出展ブースのピン4の大きさを他の出展ブースのピン4よりも小さくしてもよく、端末20の位置情報から展示ブース2の位置が近ければ近いほど、出展ブースのピン4の大きさを他の出展ブースのピン4よりも大きくしてもよい。一方で、出展ブースのピン4は、例えば、端末20の位置情報から展示ブース2の位置までの距離に応じて、出展ブースのピン4の外縁を色付けてもよい。出展ブースのピン4には、仮想情報生成部12により、それぞれ識別情報が付与されてもよい。前記識別情報は、例えば、数字、文字、記号及びこれらの2つ以上の組み合わせ等である。これにより、本装置10は、例えば、特定した端末20の位置に応じて、出展ブースのピン4等の仮想画像を生成できる。 The virtual information generation unit 12 may generate a virtual image relating to the specific area based on, for example, the position of the visitor's terminal identified by the map information management unit 14. FIG. 7 shows an example of a composite image in which a virtual image relating to the specific area is superimposed on the captured image. As shown in FIG. 7, the terminal 20 to which the composite image is output displays a composite image in which the pins 4a and 4b of exhibition booths, which are virtual images, are superimposed on the image of the specific area captured by the terminal 20. In FIG. 7, the booth pin 4a is displayed larger than the booth pin 4b. In this way, based on the identified position information of the terminal 20, a booth pin 4 may be displayed smaller than the other booth pins 4 the farther the corresponding exhibition booth 2 is from the position of the terminal 20, and larger than the other booth pins 4 the closer the corresponding exhibition booth 2 is to the position of the terminal 20. On the other hand, the outer edge of a booth pin 4 may be colored according to the distance from the position of the terminal 20 to the corresponding exhibition booth 2, for example. Identification information may be assigned to each booth pin 4 by the virtual information generation unit 12. The identification information is, for example, a number, a character, a symbol, or a combination of two or more thereof. In this way, the apparatus 10 can generate virtual images, such as the booth pins 4, according to the identified position of the terminal 20, for example.
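 The distance-dependent pin sizing described above can be illustrated with a small sketch; the decay formula and the falloff constant are assumptions for illustration only and are not specified in the disclosure:

```python
import math

def pin_scale(terminal_xy, booth_xy, base=1.0, falloff=20.0):
    """Hypothetical sizing rule: nearer booths get larger pins.
    The scale decays with distance; 'falloff' is an assumed tuning
    constant in meters, not a value from the specification."""
    d = math.dist(terminal_xy, booth_xy)
    return base * falloff / (falloff + d)

near = pin_scale((0, 0), (5, 0))    # booth 5 m away
far = pin_scale((0, 0), (40, 0))    # booth 40 m away
assert near > far  # nearer booth -> larger pin (pin 4a vs. pin 4b in FIG. 7)
```

 The same distance value could drive the outer-edge coloring of the pin mentioned above, for example by mapping distance bands to colors.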
 また、特定エリア情報取得部15により、前記特定エリアに関する情報を取得してもよい(S7)。前記工程(S7)は、例えば、図3に示すように、前記工程(S1)と並行して処理してもよい。前記取得は、例えば、通信回線網30を介して、実行されてもよいし、入力装置105による入力により取得してもよい。前記特定エリアに関する情報は、例えば、出展ブース等に関する情報であり、より具体的には、会社名、会社紹介情報、商品及びサービス等の情報等である。そして、仮想情報生成部12は、例えば、前記特定エリアに関する情報に基づき前記特定エリアに関する仮想画像を生成してもよい。これにより、端末20のディスプレイには、図7に示すように、出展ブースのピン4に加えて、出展ブースに関する情報(会社名(A Corp.)、会社紹介情報、商品及びサービス等の情報等)を前記撮像画像に重畳した合成画像が表示される。図中において、会社紹介情報、商品及びサービス等の情報等を表示する欄を「Window」として示すが、前記欄の位置は、例示であって、これに限定されない。 Further, the specific area information acquisition unit 15 may acquire information regarding the specific area (S7). The step (S7) may be processed in parallel with the step (S1), for example, as shown in FIG. 3. The acquisition may be executed, for example, via the communication network 30, or the information may be acquired through input by the input device 105. The information regarding the specific area is, for example, information about exhibition booths and the like, and more specifically, company names, company introduction information, information on products and services, and the like. The virtual information generation unit 12 may then generate a virtual image relating to the specific area based on, for example, the information regarding the specific area. As a result, as shown in FIG. 7, the display of the terminal 20 shows a composite image in which information about the exhibition booth (the company name (A Corp.), company introduction information, information on products and services, and the like), in addition to the booth pin 4, is superimposed on the captured image. In the figure, the field displaying the company introduction information, the information on products and services, and the like is shown as "Window", but the position of the field is merely an example and is not a limitation.
 また、来場者情報取得部16により、前記来場者に関する情報を取得してもよい(S8)。前記工程(S8)は、例えば、図3に示すように、前記工程(S1)と並行して処理してもよい。前記取得は、例えば、通信回線網30を介して、実行されてもよいし、入力装置105による入力により取得してもよい。前記来場者に関する情報とは、例えば、来場者の属性(性別、年齢、職業)、来場の目的(視察、商談等)、興味ある出展ブース(エリア)等である。そして、画像情報出力部13は、前記来場者に関する情報に応じて、前記撮像画像に前記仮想画像を重畳した合成画像を出力する。具体的には、例えば、前記仮想画像として、リコメンド情報、スケジュール情報、出展ブース情報等がある。リコメンド情報とは、例えば、前記来場者に関する情報に基づいて、前記来場者に推奨する情報(例えば、おすすめのイベント、おすすめの出展ブース等である)。スケジュール情報は、例えば、前記特定エリア内で行われるイベントのスケジュールを示す情報である。前記イベントとしては、例えば、デモンストレーション、セミナー等が挙げられる。前記イベントは、主催者が進行するイベントでもよいし、出展者が進行するイベントでもよい。出展ブース情報は、例えば、前述の出展ブースに関する情報と同様である。 Further, the visitor information acquisition unit 16 may acquire information about the visitor (S8). The step (S8) may be processed in parallel with the step (S1), for example, as shown in FIG. The acquisition may be executed, for example, via the communication network 30, or may be acquired by input by the input device 105. The information about the visitors is, for example, the attributes of the visitors (gender, age, occupation), the purpose of the visit (inspection, business negotiations, etc.), the exhibition booth (area) of interest, and the like. Then, the image information output unit 13 outputs a composite image in which the virtual image is superimposed on the captured image according to the information about the visitor. Specifically, for example, the virtual image includes recommendation information, schedule information, exhibition booth information, and the like. The recommendation information is, for example, information recommended to the visitor based on the information about the visitor (for example, a recommended event, a recommended exhibition booth, etc.). The schedule information is, for example, information indicating the schedule of an event to be held in the specific area. Examples of the event include demonstrations and seminars. The event may be an event promoted by the organizer or an event promoted by the exhibitor. 
The exhibition booth information is, for example, the same as the above-mentioned information regarding the exhibition booth.
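 The selection of recommendation information according to the visitor information can be sketched as a simple filter; the attribute names and the matching rule below are hypothetical illustrations, not part of the specification:

```python
def recommend(visitor, booths):
    """Hypothetical recommendation rule: pick the booths whose category
    matches one of the visitor's declared interests."""
    return [b["name"] for b in booths if b["category"] in visitor["interests"]]

# Example visitor information (attributes, purpose, interests).
visitor = {"age": 34, "purpose": "business negotiations", "interests": {"AI", "IoT"}}
booths = [
    {"name": "A Corp.", "category": "AI"},
    {"name": "B Corp.", "category": "Printing"},
]
print(recommend(visitor, booths))  # -> ['A Corp.']
```

 The returned booth names would then be rendered by the virtual information generation unit as recommendation information superimposed on the captured image.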
 本装置10は、例えば、さらに、経路算出部を含んでもよい。この場合において、前記マップ情報は、例えば、さらに、前記特定エリア内に存在する障害物の位置情報を含む。前記障害物は、特に制限されず、例えば、壁等の通行を阻む物体である。前記経路算出部は、例えば、図3に示す前記工程(S6)の後に、前記マップ情報に基づき、前記来場者の端末の位置から入力された位置までの経路を算出する。前記位置の入力は、例えば、前記端末から取得することで、入力されてもよいし、入力装置105により入力されてもよい。前記位置は、特に制限されず、例えば、出展ブース等の位置である。そして、仮想情報生成部12は、前記算出した経路に基づき、前記特定エリアに関する仮想画像を生成し、画像情報出力部13により、前記撮像画像に前記仮想画像を重畳した合成画像を出力し、終了する(END)。図8に、前記特定エリアに関する仮想画像を前記撮像画像に重畳した合成画像の一例を示す。図8において、例えば、来場者6は、端末20のディスプレイをタップすることで、任意の展示ブースのピン4(図8において、ピン4b)を選択する。その後、本装置10は、例えば、展示ブースのピン4bが選択されたことを示す情報を取得する。前記経路算出部は、来場者6の位置(Start)から前記選択された展示ブースのピン4bに紐づけられたマップ情報上の展示ブースの位置(Goal)までの経路を算出する。そして、本装置10から出力された前記合成画像を用いて、来場者6は、選択された展示ブースのピン4bに対応した展示ブースの位置(Goal)まで誘導される。来場者6を誘導するための前記合成画像は、例えば、矢印及び文字等の誘導用仮想オブジェクト5を前記撮像画像上に重畳した画像である。これにより、例えば、マップを読むことが苦手な人であっても、ストレスなく、目的地まで行くことが可能である。また、前記合成画像は、例えば、誘導用仮想オブジェクト5に代えて、又は加えて前記特定エリアのマップ(2Dオブジェクト)(図示せず)を前記撮像画像上に重畳した画像であってもよい。前記マップには、例えば、前述のように、前記現在位置が重畳されていてもよい。一般的に、前記特定エリアが室内である場合や通信状態がよくない場合は、例えば、GPS(Global Positioning System)等を使用できないため、自己位置の特定や経路算出をすることができない。しかしながら、本装置10によれば、位置コード3を用いることにより、端末の位置を特定することができるため、前記特定エリア等の環境条件(室内である場合や通信状態がよくない場合)に左右されず、経路を算出できる。 The apparatus 10 may further include, for example, a route calculation unit. In this case, the map information further includes, for example, the position information of an obstacle existing in the specific area. The obstacle is not particularly limited, and is, for example, an object that obstructs passage such as a wall. For example, after the step (S6) shown in FIG. 3, the route calculation unit calculates a route from the position of the terminal of the visitor to the input position based on the map information. The input of the position may be input by, for example, acquiring from the terminal, or may be input by the input device 105. The position is not particularly limited, and is, for example, the position of an exhibition booth or the like. 
Then, the virtual information generation unit 12 generates a virtual image relating to the specific area based on the calculated route, the image information output unit 13 outputs a composite image in which the virtual image is superimposed on the captured image, and the process ends (END). FIG. 8 shows an example of a composite image in which a virtual image relating to the specific area is superimposed on the captured image. In FIG. 8, for example, the visitor 6 selects the pin 4 of a desired exhibition booth (the pin 4b in FIG. 8) by tapping the display of the terminal 20. The apparatus 10 then acquires, for example, information indicating that the booth pin 4b has been selected. The route calculation unit calculates a route from the position of the visitor 6 (Start) to the position of the exhibition booth (Goal) on the map information associated with the selected booth pin 4b. Then, using the composite image output from the apparatus 10, the visitor 6 is guided to the position (Goal) of the exhibition booth corresponding to the selected pin 4b. The composite image for guiding the visitor 6 is, for example, an image in which guidance virtual objects 5, such as arrows and characters, are superimposed on the captured image. As a result, even a person who has difficulty reading maps can reach the destination without stress, for example. Further, the composite image may be, for example, an image in which a map (a 2D object, not shown) of the specific area is superimposed on the captured image in place of, or in addition to, the guidance virtual objects 5. As described above, the current position may be superimposed on the map, for example. In general, when the specific area is indoors or the communication state is poor, GPS (Global Positioning System) or the like cannot be used, so neither the self-position can be identified nor a route calculated. However, according to the apparatus 10, the position of the terminal can be identified by using the position codes 3, so a route can be calculated regardless of the environmental conditions of the specific area (for example, being indoors or having a poor communication state).
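 The route calculation over map information containing obstacles can be illustrated with a breadth-first search on a grid; the specification does not name an algorithm, so the following is an assumed stand-in for the route calculation unit:

```python
from collections import deque

def find_route(grid, start, goal):
    """Breadth-first search on a grid map: 1 = obstacle (e.g. a wall),
    0 = free. Hypothetical stand-in for the route calculation unit."""
    rows, cols = len(grid), len(grid[0])
    prev = {start: None}          # visited set + back-pointers
    q = deque([start])
    while q:
        cur = q.popleft()
        if cur == goal:           # reconstruct the path from Goal to Start
            path = []
            while cur is not None:
                path.append(cur)
                cur = prev[cur]
            return path[::-1]
        r, c = cur
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in prev):
                prev[(nr, nc)] = cur
                q.append((nr, nc))
    return None                   # goal unreachable

grid = [[0, 0, 0],
        [1, 1, 0],   # a wall blocking the direct path
        [0, 0, 0]]
route = find_route(grid, (0, 0), (2, 0))
print(route)  # the route detours around the wall
```

 The resulting sequence of cells could be converted into the arrow-shaped guidance virtual objects 5 drawn over the captured image.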
[実施形態2]
 図9及び図10を用いて、来場者の端末の方向を特定する形態について説明する。
[Embodiment 2]
A mode for specifying the direction of the terminal of the visitor will be described with reference to FIGS. 9 and 10.
 図9は、展示会場1のマップと位置コード3の方向との関係を示す模式図である。展示会場1には、例えば、図9下部に示すように、端末20の向きを特定するための方向マークが配置されてもよい。前記方向マークは、例えば、マップ情報上で向き(方向)を確認できる画像であればよく、位置コード3とは別のマーク(コード)であってもよい。また、位置コード3が、前記方向マークを兼ねてもよい。具体的には、例えば、位置コード3内のマーカーの形を利用して、前記方向マークとして機能することができる。前記方向は、例えば、東西南北等である。マップ情報管理部14は、例えば、さらに、前記方向マーク及び前記特定エリア(展示会場1)のマップ情報を紐づけて管理する。具体的には、例えば、前記方向マークの各方向と、前記特定エリア(展示会場1)のマップ情報の各方向とを一致させて、管理する。端末20で前記方向マークが撮像されると、例えば、さらに、撮像された前記方向マークの画像(方向マーク画像ともいう)が撮像情報取得部11によって取得される。そして、マップ情報管理部14は、例えば、さらに、前記マップ情報に基づき前記方向マークの画像から、前記来場者の前記端末がどの方向を向いているかという前記来場者の端末の向きを特定する。 FIG. 9 is a schematic diagram showing the relationship between the map of the exhibition hall 1 and the directions of the position codes 3. In the exhibition hall 1, for example, as shown in the lower part of FIG. 9, a direction mark for identifying the orientation of the terminal 20 may be arranged. The direction mark may be any image whose orientation (direction) can be confirmed on the map information, and may be a mark (code) separate from the position code 3. Alternatively, the position code 3 may also serve as the direction mark. Specifically, for example, the shape of the marker within the position code 3 can be used so that the code functions as the direction mark. The directions are, for example, north, south, east, and west. The map information management unit 14, for example, further manages the direction mark and the map information of the specific area (exhibition hall 1) in association with each other. Specifically, for example, each direction of the direction mark is matched with the corresponding direction of the map information of the specific area (exhibition hall 1) and managed. When the direction mark is imaged by the terminal 20, for example, the captured image of the direction mark (also referred to as a direction mark image) is further acquired by the image pickup information acquisition unit 11. Then, for example, the map information management unit 14 further identifies the orientation of the visitor's terminal, that is, which direction the terminal is facing, from the image of the direction mark based on the map information.
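 One way a direction mark could yield the terminal's orientation is to measure the imaged angle of the mark's known north axis; the pixel convention and the function below are illustrative assumptions, not the specification's method:

```python
import math

def terminal_heading(mark_tip_px, mark_tail_px):
    """Hypothetical: estimate the terminal's heading from an imaged direction
    mark whose arrow points north on the map. Inputs are the pixel
    coordinates of the arrow's tip and tail in the captured image
    (x increases rightward, y increases downward)."""
    dx = mark_tip_px[0] - mark_tail_px[0]
    dy = mark_tip_px[1] - mark_tail_px[1]
    # Angle of the mark's north axis relative to the image's "up" direction.
    return math.degrees(math.atan2(dx, -dy)) % 360

# Arrow pointing straight "up" in the image: the terminal faces map north.
print(terminal_heading((100, 50), (100, 150)))  # -> 0.0
```

 If the arrow instead appears rotated in the image, the returned angle gives the terminal's rotation relative to map north under this sign convention.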
 図10は、展示会場1のマップの方向と端末20の向きとの関係を示す模式図である。仮想情報生成部12は、例えば、マップ情報管理部14が特定した前記来場者の端末の向きに基づき、前記特定エリアに関する仮想画像を生成してもよい。そして、画像情報出力部13は、例えば、前記撮像画像に前記仮想画像を重畳した合成画像を出力する。図10(a)において、端末20のディスプレイには、会社Aの展示ブースを斜め方向から撮像した撮像画像に仮想画像である会社名(A Corp.)を重畳した合成画像を表示している。図10(b)において、端末20のディスプレイには、会社Aの展示ブースを正面方向から撮像した撮像画像に会社名(仮想画像)を重畳した合成画像を表示している。 FIG. 10 is a schematic diagram showing the relationship between the directions of the map of the exhibition hall 1 and the orientation of the terminal 20. The virtual information generation unit 12 may generate a virtual image relating to the specific area based on, for example, the orientation of the visitor's terminal identified by the map information management unit 14. The image information output unit 13 then outputs, for example, a composite image in which the virtual image is superimposed on the captured image. In FIG. 10(a), the display of the terminal 20 shows a composite image in which the company name (A Corp.), which is a virtual image, is superimposed on an image of the exhibition booth of company A captured from an oblique direction. In FIG. 10(b), the display of the terminal 20 shows a composite image in which the company name (a virtual image) is superimposed on an image of the exhibition booth of company A captured from the front.
[実施形態3]
 図11を用いて、来場者の端末の傾きを特定する形態について説明する。
[Embodiment 3]
A mode for specifying the inclination of the terminal of the visitor will be described with reference to FIG.
 撮像情報取得部11は、さらに、前記来場者の端末で前記特定エリアの床が撮像されると、撮像された前記床の画像(床画像ともいう)を取得する。そして、マップ情報管理部14により、前記床画像から前記来場者の端末の傾きを特定する。特定エリアの前記床は、平面であることが好ましい。具体的には、例えば、前記床画像に写っている現実空間に対して、平面のパネルが広がり、前記パネルの傾きから、前記端末の傾きを特定する(平面スキャンともいう)。すなわち、端末20の傾きは、床画像の傾斜状態から特定可能である。前記床画像は、例えば、前記位置コード及び前記方向マークの少なくとも一方を含んでもよい。マップ情報管理部14は、例えば、前記位置コード及び前記方向マークの少なくとも一方の傾きから、端末20の傾きを特定してもよい。図11は、端末20の傾きと端末20のディスプレイに表示される画像との関係の一例を示す模式図である。 Further, when the floor of the specific area is imaged by the visitor's terminal, the image pickup information acquisition unit 11 acquires the captured image of the floor (also referred to as a floor image). The map information management unit 14 then identifies the tilt of the visitor's terminal from the floor image. The floor of the specific area is preferably flat. Specifically, for example, a virtual flat panel is extended over the real space shown in the floor image, and the tilt of the terminal is identified from the tilt of the panel (also referred to as a plane scan). That is, the tilt of the terminal 20 can be identified from the inclination of the floor image. The floor image may include, for example, at least one of the position code and the direction mark. The map information management unit 14 may identify the tilt of the terminal 20 from, for example, the tilt of at least one of the position code and the direction mark. FIG. 11 is a schematic diagram showing an example of the relationship between the tilt of the terminal 20 and the image displayed on the display of the terminal 20.
When the map information management unit 14 determines that the terminal 20 is not tilted, a composite image as shown in FIG. 11(a) is displayed on the display of the terminal 20. On the other hand, when the map information management unit 14 determines that the terminal 20 is tilted, for example, a composite image in which the composite image of FIG. 11(a) is tilted (distorted) according to the tilt of the terminal 20 is displayed on the display of the terminal 20, as shown in FIG. 11(b). Specifically, the image shown in FIG. 11(a) is a composite image in which the company name (A Corp.), which is a virtual image, is superimposed on an image of the exhibition booth of company A captured from the front, and the image shown in FIG. 11(b) is a composite image in which the company name (A Corp.), which is a virtual image, is superimposed on an image of the exhibition booth of company A captured from above.
 また、本装置10は、例えば、特定した端末20の傾きから、前記撮像画像を補正してもよい。そして、画像情報出力部13は、例えば、補正した前記撮像画像に前記仮想画像を重畳した合成画像を出力する。すなわち、本装置10は、例えば、端末20が傾いていると特定された場合であっても、図11(b)に示す合成画像でなく、図11(a)に示す合成画像となるように補正する。 Further, the apparatus 10 may correct the captured image based on, for example, the identified tilt of the terminal 20. The image information output unit 13 then outputs, for example, a composite image in which the virtual image is superimposed on the corrected captured image. That is, even when the terminal 20 is determined to be tilted, the apparatus 10 corrects the image so that, for example, the composite image shown in FIG. 11(a), rather than that shown in FIG. 11(b), is displayed.
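 The correction based on the identified tilt can be illustrated, in a simplified two-dimensional form, as rotating image points by the negative of the detected tilt; a real correction would warp the full image, so this sketch is an assumption for illustration only:

```python
import math

def correct_tilt(points, tilt_deg):
    """Hypothetical correction: rotate image points by the negative of the
    detected terminal tilt so that the composite appears level (FIG. 11(a))."""
    t = math.radians(-tilt_deg)
    cos_t, sin_t = math.cos(t), math.sin(t)
    return [(x * cos_t - y * sin_t, x * sin_t + y * cos_t) for x, y in points]

# A label edge tilted by 30 degrees is restored to horizontal.
tilted = [(0.0, 0.0), (math.cos(math.radians(30)), math.sin(math.radians(30)))]
level = correct_tilt(tilted, 30)
# level[1] is approximately (1.0, 0.0): the edge is horizontal again.
```

 The same rotation applied to every pixel (or, more generally, a homography) would turn the distorted composite of FIG. 11(b) back into the level composite of FIG. 11(a).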
[実施形態4]
 図1(B)は、本実施形態のナビゲーションシステムの一例の構成を示す模式図である。図1(B)に示すように、本実施形態のナビゲーションシステムは、実施形態1~3のいずれかに記載のナビゲーション装置10と、端末20と、を含む。端末20は、前記特定エリアの来場者の端末である。ナビゲーション装置10及び端末20は、通信回線網30を介して通信可能である。ナビゲーションシステムは、ナビゲーションシステム装置ともいう。
[Embodiment 4]
FIG. 1B is a schematic diagram showing a configuration of an example of the navigation system of the present embodiment. As shown in FIG. 1 (B), the navigation system of the present embodiment includes the navigation device 10 according to any one of the first to third embodiments, and the terminal 20. The terminal 20 is a terminal of a visitor in the specific area. The navigation device 10 and the terminal 20 can communicate with each other via the communication network 30. The navigation system is also referred to as a navigation system device.
 具体的に、本実施形態のナビゲーションシステムにおける処理の流れについて説明する。まず、端末20において、撮像装置を起動させ、前記位置コードの撮像をユーザ(例えば、来場者)に促す。前記ユーザは、特定エリア内に配置されている前記位置コードを探し、撮像装置に前記位置コードを撮像する。そして、ナビゲーション装置10により、前記位置コードが読み取られ、前記ユーザの端末20の位置が特定される。また、前記位置コードが前記方向マークを兼ねる場合は、例えば、端末20の向きも特定される。具体的には、例えば、端末20の向きが特定されてから、端末20の位置が特定される。 Specifically, the flow of processing in the navigation system of the present embodiment will be described. First, the terminal 20 activates the image pickup device and prompts the user (for example, a visitor) to image the position code. The user searches for a position code arranged in the specific area and captures the position code with the image pickup device. Then, the position code is read by the navigation device 10, and the position of the user's terminal 20 is identified. Further, when the position code also serves as the direction mark, the orientation of the terminal 20 is also identified, for example. Specifically, for example, the orientation of the terminal 20 is identified first, and the position of the terminal 20 is then identified.
[実施形態5]
 本実施形態のプログラムは、本発明の方法の各工程を、手順として、コンピュータに実行させるためのプログラムである。本発明において、「手順」は、「処理」と読み替えてもよい。また、本実施形態のプログラムは、例えば、コンピュータ読み取り可能な記録媒体に記録されていてもよい。前記記録媒体は、例えば、非一時的なコンピュータ可読記録媒体(non-transitory computer-readable storage medium)である。前記記録媒体としては、特に限定されず、例えば、読み出し専用メモリ(ROM)、ハードディスク(HD)、光ディスク等が挙げられる。
[Embodiment 5]
The program of the present embodiment is a program for causing a computer to execute each step of the method of the present invention as a procedure. In the present invention, "procedure" may be read as "processing". Further, the program of the present embodiment may be recorded on a computer-readable recording medium, for example. The recording medium is, for example, a non-transitory computer-readable storage medium. The recording medium is not particularly limited, and examples thereof include a read-only memory (ROM), a hard disk (HD), and an optical disk.
 以上、実施形態を参照して本発明を説明したが、本発明は、上記実施形態に限定されるものではない。本発明の構成や詳細には、本発明のスコープ内で当業者が理解しうる様々な変更をできる。 Although the present invention has been described above with reference to the embodiments, the present invention is not limited to the above embodiments. Various changes that can be understood by those skilled in the art can be made to the structure and details of the present invention within the scope of the present invention.
 この出願は、2020年6月15日に出願された日本出願特願2020-102758を基礎とする優先権を主張し、その開示の全てをここに取り込む。 This application claims priority on the basis of Japanese application Japanese Patent Application No. 2020-102758 filed on June 15, 2020, and incorporates all of its disclosures herein.
<付記>
 上記の実施形態の一部または全部は、以下の付記のように記載されうるが、以下には限られない。
(付記1)
撮像情報取得部、仮想情報生成部、画像情報出力部、及び、マップ情報管理部を含み、
前記撮像情報取得部は、特定エリアの来場者の端末で撮像された撮像画像を取得し、
前記仮想情報生成部は、前記特定エリアに関する仮想画像を生成し、
前記画像情報出力部は、前記撮像画像に前記仮想画像を重畳した合成画像を出力し、
前記マップ情報管理部は、前記特定エリアに配置された位置コード及び前記特定エリアのマップ情報を紐づけて管理し、
前記来場者の端末で前記位置コードが撮像されると、撮像された前記位置コードの画像が前記撮像情報取得部によって取得され、
前記マップ情報管理部は、前記マップ情報に基づき前記位置コードの画像から前記来場者の端末の位置を特定する、
ナビゲーション装置。
(付記2)
前記仮想情報生成部は、前記マップ情報管理部が特定した前記来場者の端末の位置に基づき、前記特定エリアに関する仮想画像を生成する、
付記1記載のナビゲーション装置。
(付記3)
前記特定エリアには、前記来場者の端末の向きを特定するための方向マークが配置され、
前記マップ情報管理部は、前記方向マーク及び前記特定エリアのマップ情報を紐づけて管理し、
前記来場者の端末で前記方向マークが撮像されると、撮像された前記方向マークの画像が前記撮像情報取得部によって取得され、
前記マップ情報管理部は、前記マップ情報に基づき前記方向マークの画像から、前記来場者の端末の向きを特定する、
付記1又は2記載のナビゲーション装置。
(付記4)
前記仮想情報生成部は、前記マップ情報管理部が特定した前記来場者の端末の向きに基づき、前記特定エリアに関する仮想画像を生成する、
付記3記載のナビゲーション装置。
(付記5)
前記位置コードが、前記方向マークを兼ねる、
付記3又は4記載のナビゲーション装置。
(付記6)
前記来場者の端末で前記特定エリアの床が撮像されると、撮像された床の画像が前記撮像情報取得部によって取得され、
前記マップ情報管理部は、前記床の画像から前記来場者の端末の傾きを特定する、
付記1から5のいずれかに記載のナビゲーション装置。
(付記7)
前記床の画像は、前記位置コード及び前記方向マークの少なくとも一方を含む画像である、
付記6記載のナビゲーション装置。
(付記8)
さらに、特定エリア情報取得部を含み、
前記特定エリア情報取得部は、前記特定エリアに関する情報を取得し、
前記仮想情報生成部は、前記特定エリアに関する情報に基づき前記特定エリアに関する仮想画像を生成する、
付記1から7のいずれかに記載のナビゲーション装置。
(付記9)
さらに、来場者情報取得部を含み、
前記来場者情報取得部は、前記来場者に関する情報を取得し、
前記画像情報出力部は、前記来場者に関する情報に応じて、前記撮像画像に前記仮想画像を重畳した合成画像を出力する、
付記1から8のいずれかに記載のナビゲーション装置。
(付記10)
ナビゲーション装置、及び、端末を含み、
前記ナビゲーション装置は、付記1から9のいずれかに記載のナビゲーション装置であり、
前記端末は、前記特定エリアの来場者の端末であり、
前記ナビゲーション装置及び前記端末は、通信回線網を介して通信可能である、
ナビゲーションシステム。
(付記11)
撮像情報取得工程、仮想情報生成工程、画像情報出力工程、及び、マップ情報管理工程を含み、
前記撮像情報取得工程は、特定エリアの来場者の端末で撮像された撮像画像を取得し、
前記仮想情報生成工程は、前記特定エリアに関する仮想画像を生成し、
前記画像情報出力工程は、前記撮像画像に前記仮想画像を重畳した合成画像を出力し、
前記マップ情報管理工程は、前記特定エリアに配置された位置コード及び前記特定エリアのマップ情報を紐づけて管理し、
前記来場者の端末で前記位置コードが撮像されると、前記撮像情報取得工程は、撮像された位置コードの画像を取得し、
前記マップ情報管理工程は、前記マップ情報に基づき前記位置コードの画像から前記来場者の端末の位置を特定する、
ナビゲーション方法。
(付記12)
前記仮想情報生成工程は、前記マップ情報管理工程により特定した前記来場者の端末の位置に基づき、前記特定エリアに関する仮想画像を生成する、
付記11記載のナビゲーション方法。
(付記13)
前記特定エリアには、前記来場者の端末の向きを特定するための方向マークが配置され、
前記マップ情報管理工程は、前記方向マーク及び前記特定エリアのマップ情報を紐づけて管理し、
前記来場者の端末で前記方向マークが撮像されると、前記撮像情報取得工程は、撮像された前記方向マークの画像を取得し、
前記マップ情報管理工程は、前記マップ情報に基づき前記方向マークの画像から、前記来場者の端末がどの方向を向いているかという前記来場者の端末の向きを特定する、
付記11又は12記載のナビゲーション方法。
(付記14)
前記仮想情報生成工程は、前記マップ情報管理工程により特定した前記来場者の端末の向きに基づき、前記特定エリアに関する仮想画像を生成する、
付記13記載のナビゲーション方法。
(付記15)
前記位置コードが、前記方向マークを兼ねる、
付記13又は14記載のナビゲーション方法。
(付記16)
前記来場者の端末で前記特定エリアの床が撮像されると、前記撮像情報取得工程は、撮像された床の画像を取得し、
前記マップ情報管理工程は、前記床の画像から前記来場者の端末の傾きを特定する、
付記11から15のいずれかに記載のナビゲーション方法。
(付記17)
前記床の画像は、前記位置コード及び前記方向マークの少なくとも一方を含む画像である、
付記16記載のナビゲーション方法。
(付記18)
さらに、特定エリア情報取得工程を含み、
前記特定エリア情報取得工程は、前記特定エリアに関する情報を取得し、
前記仮想情報生成工程は、前記特定エリアに関する情報に基づき前記特定エリアに関する仮想画像を生成する、
付記11から17のいずれかに記載のナビゲーション方法。
(付記19)
さらに、来場者情報取得工程を含み、
前記来場者情報取得工程は、前記来場者に関する情報を取得し、
前記画像情報出力工程は、前記来場者に関する情報に応じて、前記撮像画像に前記仮想画像を重畳した合成画像を出力する、
付記11から18のいずれかに記載のナビゲーション方法。
(付記20)
コンピュータに、撮像情報取得手順、仮想情報生成手順、画像情報出力手順、及び、マップ情報管理手順を含む手順を実行させるためのプログラム;
前記撮像情報取得手順は、特定エリアの来場者の端末で撮像された撮像画像を取得し、
前記仮想情報生成手順は、前記特定エリアに関する仮想画像を生成し、
前記画像情報出力手順は、前記撮像画像に前記仮想画像を重畳した合成画像を出力し、
前記マップ情報管理手順は、前記特定エリアに配置された位置コード及び前記特定エリアのマップ情報を紐づけて管理し、
前記来場者の端末で前記位置コードが撮像されると、前記撮像情報取得手順は、撮像された前記位置コードの画像を取得し、
前記マップ情報管理手順は、前記マップ情報に基づき前記位置コードの画像から前記来場者の端末の位置を特定する。
(付記21)
前記仮想情報生成手順は、前記マップ情報管理手順により特定した前記来場者の端末の位置に基づき、前記特定エリアに関する仮想画像を生成する、
付記20記載のプログラム。
(付記22)
前記特定エリアには、前記来場者の端末の向きを特定するための方向マークが配置され、
前記マップ情報管理手順は、前記方向マーク及び前記特定エリアのマップ情報を紐づけて管理し、
前記来場者の端末で前記方向マークが撮像されると、撮像された前記方向マーク画像が前記撮像情報取得手順によって取得され、
前記マップ情報管理手順は、前記マップ情報に基づき前記方向マークの画像から、前記来場者の端末がどの方向を向いているかという前記来場者の端末の向きを特定する、
付記20又は21記載のプログラム。
(付記23)
前記仮想情報生成手順は、前記マップ情報管理手順により特定した前記来場者の端末の向きに基づき、前記特定エリアに関する仮想画像を生成する、
付記22記載のプログラム。
(付記24)
前記位置コードが、前記方向マークを兼ねる、
付記22又は23記載のプログラム。
(付記25)
前記来場者の端末で前記特定エリアの床が撮像されると、撮像された床の画像が前記撮像情報取得手順によって取得され、
前記マップ情報管理手順は、前記床の画像から前記来場者の端末の傾きを特定する、
付記20から24のいずれかに記載のプログラム。
(付記26)
前記床の画像は、前記位置コード及び前記方向マークの少なくとも一方を含む画像である、
付記25記載のプログラム。
(付記27)
さらに、特定エリア情報取得手順を含み、
前記特定エリア情報取得手順は、前記特定エリアに関する情報を取得し、
前記仮想情報生成手順は、前記特定エリアに関する情報に基づき前記特定エリアに関する仮想画像を生成する、
付記20から26のいずれかに記載のプログラム。
(付記28)
さらに、来場者情報取得手順を含み、
前記来場者情報取得手順は、前記来場者に関する情報を取得し、
前記画像情報出力手順は、前記来場者に関する情報に応じて、前記撮像画像に前記仮想画像を重畳した合成画像を出力する、
付記20から27のいずれかに記載のプログラム。
(付記29)
付記20から28のいずれかに記載のプログラムを記録しているコンピュータ読み取り可能な記録媒体。
<Additional Notes>
Some or all of the above embodiments may be described as, but not limited to, the following appendixes.
(Appendix 1)
Including an image pickup information acquisition unit, a virtual information generation unit, an image information output unit, and a map information management unit.
The image pickup information acquisition unit acquires a captured image captured by a terminal of a visitor in a specific area.
The virtual information generation unit generates a virtual image relating to the specific area.
The image information output unit outputs a composite image in which the virtual image is superimposed on the captured image.
The map information management unit manages the position code arranged in the specific area and the map information of the specific area in association with each other.
When the position code is imaged by the terminal of the visitor, the captured image of the position code is acquired by the image pickup information acquisition unit.
The map information management unit identifies the position of the visitor's terminal from the image of the position code based on the map information.
Navigation device.
(Appendix 2)
The virtual information generation unit generates a virtual image regarding the specific area based on the position of the terminal of the visitor specified by the map information management unit.
The navigation device according to Appendix 1.
(Appendix 3)
In the specific area, a direction mark for specifying the orientation of the visitor's terminal is arranged.
The map information management unit manages the direction mark and the map information of the specific area in association with each other.
When the direction mark is imaged by the terminal of the visitor, the captured image of the direction mark is acquired by the image pickup information acquisition unit.
The map information management unit identifies the direction of the visitor's terminal from the image of the direction mark based on the map information.
The navigation device according to Appendix 1 or 2.
(Appendix 4)
The virtual information generation unit generates a virtual image relating to the specific area based on the orientation of the visitor's terminal specified by the map information management unit.
The navigation device according to Appendix 3.
(Appendix 5)
The position code also serves as the direction mark.
The navigation device according to Appendix 3 or 4.
(Appendix 6)
When the floor of the specific area is imaged by the terminal of the visitor, the image of the imaged floor is acquired by the image pickup information acquisition unit.
The map information management unit identifies the inclination of the visitor's terminal from the image of the floor.
The navigation device according to any one of Supplementary Notes 1 to 5.
(Appendix 7)
The image of the floor is an image including at least one of the position code and the direction mark.
The navigation device according to Appendix 6.
(Appendix 8)
Further including a specific area information acquisition unit.
The specific area information acquisition unit acquires information about the specific area.
The virtual information generation unit generates a virtual image related to the specific area based on the information about the specific area.
The navigation device according to any one of Supplementary Provisions 1 to 7.
(Appendix 9)
Further including a visitor information acquisition unit.
The visitor information acquisition unit acquires information about the visitor.
The image information output unit outputs a composite image in which the virtual image is superimposed on the captured image according to the information about the visitor.
The navigation device according to any one of Supplementary Provisions 1 to 8.
(Appendix 10)
Including a navigation device and a terminal.
The navigation device is the navigation device according to any one of Supplementary Provisions 1 to 9.
The terminal is a terminal of a visitor in the specific area.
The navigation device and the terminal can communicate with each other via a communication network.
Navigation system.
(Appendix 11)
Including an image pickup information acquisition step, a virtual information generation step, an image information output step, and a map information management step.
In the image pickup information acquisition step, an image captured by a terminal of a visitor in a specific area is acquired.
The virtual information generation step generates a virtual image relating to the specific area.
In the image information output step, a composite image in which the virtual image is superimposed on the captured image is output.
The map information management process manages the position code arranged in the specific area and the map information of the specific area in association with each other.
When the position code is imaged by the terminal of the visitor, the image pickup information acquisition step acquires the image of the imaged position code.
The map information management step identifies the position of the visitor's terminal from the image of the position code based on the map information.
Navigation method.
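For illustration only, the method of Appendix 11 can be sketched as a table lookup plus image composition: map information associates each position code with venue coordinates, so decoding a code from the terminal's camera frame yields the terminal's position, from which a virtual image is generated and superimposed. Every name below (`MAP_INFO`, `locate_terminal`, the dict-based image stand-ins) is a hypothetical sketch, not the disclosed implementation.

```python
# Hypothetical sketch of the claimed navigation method.

# Map information management: position code -> venue coordinates (meters).
MAP_INFO = {
    "CODE-A1": (2.0, 3.0),
    "CODE-B4": (10.0, 7.5),
}

def locate_terminal(decoded_code: str) -> tuple[float, float]:
    """Map information management step: identify the terminal's
    position from the decoded position code."""
    return MAP_INFO[decoded_code]

def generate_virtual_image(position: tuple[float, float]) -> dict:
    """Virtual information generation step: build a virtual object
    (e.g. a guidance arrow) anchored at the identified position."""
    return {"type": "guidance_arrow", "anchor": position}

def output_composite(captured_image: dict, virtual_image: dict) -> dict:
    """Image information output step: superimpose the virtual image
    on the captured image."""
    return {**captured_image, "overlay": virtual_image}

# Imaging information acquisition step: frame with a decoded position code.
frame = {"pixels": "...", "decoded_code": "CODE-B4"}
pos = locate_terminal(frame["decoded_code"])
composite = output_composite(frame, generate_virtual_image(pos))
```

In this sketch the terminal only needs to upload frames; all lookup and composition can run on the navigation device side.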
(Appendix 12)
The virtual information generation step generates a virtual image relating to the specific area based on the position of the terminal of the visitor specified by the map information management step.
The navigation method according to Appendix 11.
(Appendix 13)
In the specific area, a direction mark for specifying the orientation of the visitor's terminal is arranged.
The map information management step manages the direction mark and the map information of the specific area in association with each other.
When the direction mark is imaged by the terminal of the visitor, the imaging information acquisition step acquires the image of the captured direction mark.
The map information management step identifies the orientation of the visitor's terminal, that is, which direction the terminal is facing, from the image of the direction mark based on the map information.
The navigation method according to Appendix 11 or 12.
(Appendix 14)
The virtual information generation step generates a virtual image relating to the specific area based on the orientation of the visitor's terminal specified by the map information management step.
The navigation method according to Appendix 13.
(Appendix 15)
The position code also serves as the direction mark.
The navigation method according to Appendix 13 or 14.
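As a hedged sketch of Appendices 13 to 15: if the map information stores the bearing at which each direction mark (or dual-purpose position code) is laid out in the venue, then the rotation at which the mark appears in the camera frame yields the terminal's heading. The names and the clockwise-degrees angle convention below are illustrative assumptions, not part of the disclosure.

```python
# Hypothetical sketch: orientation from a direction mark.
# Map information stores the bearing (degrees clockwise from north)
# of each mark's "up" edge as laid out on the venue floor.
MARK_WORLD_BEARING = {"CODE-B4": 90.0}  # this mark's up edge points east

def terminal_heading(code: str, observed_angle_deg: float) -> float:
    """Identify the terminal's heading: the mark's known world bearing
    minus the clockwise rotation at which it is observed in the frame."""
    return (MARK_WORLD_BEARING[code] - observed_angle_deg) % 360.0

# Mark appears rotated 30 degrees clockwise -> terminal faces 60 degrees.
heading = terminal_heading("CODE-B4", 30.0)
```

Because the position code can double as the direction mark (Appendix 15), a single captured marker can yield both position and heading in one lookup.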
(Appendix 16)
When the floor of the specific area is imaged by the terminal of the visitor, the imaging information acquisition step acquires the image of the captured floor.
The map information management step identifies the inclination of the visitor's terminal from the image of the floor.
The navigation method according to any one of Appendices 11 to 15.
(Appendix 17)
The image of the floor is an image including at least one of the position code and the direction mark.
The navigation method according to Appendix 16.
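One plausible (not disclosed) way to identify the terminal's inclination from a floor image, per Appendices 16 and 17, is foreshortening: a square marker on the floor appears compressed along one axis roughly in proportion to the cosine of the camera's tilt away from vertical. A minimal sketch under that assumption, with illustrative names throughout:

```python
import math

# Hypothetical sketch: a square floor marker (e.g. the position code)
# appears foreshortened when the terminal is tilted; the ratio of its
# apparent short axis to its long axis approximates cos(tilt).

def terminal_tilt_deg(apparent_short: float, apparent_long: float) -> float:
    """Identify the camera's tilt from vertical, in degrees, from the
    marker's apparent axis lengths in the floor image (in pixels)."""
    ratio = min(apparent_short / apparent_long, 1.0)  # clamp noise above 1.0
    return math.degrees(math.acos(ratio))

# Marker measuring 80 px by 160 px in the frame -> 60 degrees from vertical.
tilt = terminal_tilt_deg(80.0, 160.0)
```

A production system would more likely recover the full camera pose from the marker's corner correspondences, but the ratio model above captures the idea that the floor image alone suffices to identify inclination.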
(Appendix 18)
The navigation method further includes a specific area information acquisition step.
The specific area information acquisition step acquires information about the specific area.
The virtual information generation step generates a virtual image relating to the specific area based on the information about the specific area.
The navigation method according to any one of Appendices 11 to 17.
(Appendix 19)
The navigation method further includes a visitor information acquisition step.
The visitor information acquisition step acquires information about the visitor.
The image information output step outputs a composite image in which the virtual image is superimposed on the captured image according to the information about the visitor.
The navigation method according to any one of Appendices 11 to 18.
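A minimal sketch of the visitor-conditioned output of Appendix 19: the composite shown to a visitor includes only the virtual objects matching that visitor's information (here, registered interest categories, which is an illustrative assumption about what "information about the visitor" contains).

```python
# Hypothetical sketch: condition the composite image on visitor information.

VIRTUAL_OBJECTS = [
    {"booth": "Booth-1", "category": "robotics"},
    {"booth": "Booth-2", "category": "biotech"},
    {"booth": "Booth-3", "category": "robotics"},
]

def output_composite(captured_image: dict, visitor: dict) -> dict:
    """Image information output step: superimpose only the virtual
    objects relevant to this visitor's registered interests."""
    overlay = [obj for obj in VIRTUAL_OBJECTS
               if obj["category"] in visitor["interests"]]
    return {**captured_image, "overlay": overlay}

composite = output_composite({"pixels": "..."},
                             {"interests": {"robotics"}})
```

The same filtering could equally key on visitor attributes such as language or accessibility needs; the claim only requires that the output depend on the acquired visitor information.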
(Appendix 20)
A program for causing a computer to perform a procedure including an imaging information acquisition procedure, a virtual information generation procedure, an image information output procedure, and a map information management procedure;
The imaging information acquisition procedure acquires an image captured by a terminal of a visitor in a specific area.
The virtual information generation procedure generates a virtual image relating to the specific area.
The image information output procedure outputs a composite image in which the virtual image is superimposed on the captured image.
The map information management procedure manages a position code arranged in the specific area and map information of the specific area in association with each other.
When the position code is imaged by the terminal of the visitor, the imaging information acquisition procedure acquires the image of the captured position code.
The map information management procedure identifies the position of the visitor's terminal from the image of the position code based on the map information.
(Appendix 21)
The virtual information generation procedure generates a virtual image relating to the specific area based on the position of the visitor's terminal specified by the map information management procedure.
The program according to Appendix 20.
(Appendix 22)
In the specific area, a direction mark for specifying the orientation of the visitor's terminal is arranged.
The map information management procedure manages the direction mark and the map information of the specific area in association with each other.
When the direction mark is imaged by the terminal of the visitor, the imaging information acquisition procedure acquires the image of the captured direction mark.
The map information management procedure identifies the orientation of the visitor's terminal, that is, which direction the terminal is facing, from the image of the direction mark based on the map information.
The program according to Appendix 20 or 21.
(Appendix 23)
The virtual information generation procedure generates a virtual image relating to the specific area based on the orientation of the visitor's terminal specified by the map information management procedure.
The program according to Appendix 22.
(Appendix 24)
The position code also serves as the direction mark.
The program according to Appendix 22 or 23.
(Appendix 25)
When the floor of the specific area is imaged by the terminal of the visitor, the imaging information acquisition procedure acquires the image of the captured floor.
The map information management procedure identifies the inclination of the visitor's terminal from the image of the floor.
The program according to any one of Appendices 20 to 24.
(Appendix 26)
The image of the floor is an image including at least one of the position code and the direction mark.
The program according to Appendix 25.
(Appendix 27)
The program further includes a specific area information acquisition procedure.
The specific area information acquisition procedure acquires information about the specific area.
The virtual information generation procedure generates a virtual image relating to the specific area based on the information about the specific area.
The program according to any one of Appendices 20 to 26.
(Appendix 28)
The program further includes a visitor information acquisition procedure.
The visitor information acquisition procedure acquires information about the visitor.
The image information output procedure outputs a composite image in which the virtual image is superimposed on the captured image according to the information about the visitor.
The program according to any one of Appendices 20 to 27.
(Appendix 29)
A computer-readable recording medium recording the program according to any one of Appendices 20 to 28.
According to the present invention, information can be transmitted using AR in large facilities such as exhibition halls, and the current position can be easily confirmed. The present invention is therefore useful in a variety of fields to which AR technology is applicable.
1 Exhibition hall
2 Exhibition booth
3 Position code
4 Exhibitor booth pin
5 Guidance virtual object
6 Visitor
10 Navigation device
11 Imaging information acquisition unit
12 Virtual information generation unit
13 Image information output unit
14 Map information management unit
15 Specific area information acquisition unit
16 Visitor information acquisition unit
20 Terminal
30 Communication network
101 Central processing unit (CPU)
102 Memory
103 Bus
104 Storage device
105 Input device
106 Display device
107 Communication device

Claims (29)

  1. A navigation device comprising an imaging information acquisition unit, a virtual information generation unit, an image information output unit, and a map information management unit, wherein
    the imaging information acquisition unit acquires an image captured by a terminal of a visitor in a specific area,
    the virtual information generation unit generates a virtual image relating to the specific area,
    the image information output unit outputs a composite image in which the virtual image is superimposed on the captured image,
    the map information management unit manages a position code arranged in the specific area and map information of the specific area in association with each other,
    when the position code is imaged by the terminal of the visitor, the imaging information acquisition unit acquires the image of the captured position code, and
    the map information management unit identifies the position of the visitor's terminal from the image of the position code based on the map information.
  2. The navigation device according to claim 1, wherein the virtual information generation unit generates the virtual image relating to the specific area based on the position of the visitor's terminal identified by the map information management unit.
  3. The navigation device according to claim 1 or 2, wherein
    a direction mark for identifying the orientation of the visitor's terminal is arranged in the specific area,
    the map information management unit manages the direction mark and the map information of the specific area in association with each other,
    when the direction mark is imaged by the terminal of the visitor, the imaging information acquisition unit acquires the image of the captured direction mark, and
    the map information management unit identifies the orientation of the visitor's terminal from the image of the direction mark based on the map information.
  4. The navigation device according to claim 3, wherein the virtual information generation unit generates the virtual image relating to the specific area based on the orientation of the visitor's terminal identified by the map information management unit.
  5. The navigation device according to claim 3 or 4, wherein the position code also serves as the direction mark.
  6. The navigation device according to any one of claims 1 to 5, wherein
    when the floor of the specific area is imaged by the terminal of the visitor, the imaging information acquisition unit acquires the image of the captured floor, and
    the map information management unit identifies the inclination of the visitor's terminal from the image of the floor.
  7. The navigation device according to claim 6, wherein the image of the floor includes at least one of the position code and the direction mark.
  8. The navigation device according to any one of claims 1 to 7, further comprising a specific area information acquisition unit, wherein
    the specific area information acquisition unit acquires information about the specific area, and
    the virtual information generation unit generates the virtual image relating to the specific area based on the information about the specific area.
  9. The navigation device according to any one of claims 1 to 8, further comprising a visitor information acquisition unit, wherein
    the visitor information acquisition unit acquires information about the visitor, and
    the image information output unit outputs the composite image in which the virtual image is superimposed on the captured image according to the information about the visitor.
  10. A navigation system comprising a navigation device and a terminal, wherein
    the navigation device is the navigation device according to any one of claims 1 to 9,
    the terminal is a terminal of a visitor in the specific area, and
    the navigation device and the terminal can communicate with each other via a communication network.
  11. A navigation method comprising an imaging information acquisition step, a virtual information generation step, an image information output step, and a map information management step, wherein
    the imaging information acquisition step acquires an image captured by a terminal of a visitor in a specific area,
    the virtual information generation step generates a virtual image relating to the specific area,
    the image information output step outputs a composite image in which the virtual image is superimposed on the captured image,
    the map information management step manages a position code arranged in the specific area and map information of the specific area in association with each other,
    when the position code is imaged by the terminal of the visitor, the imaging information acquisition step acquires the image of the captured position code, and
    the map information management step identifies the position of the visitor's terminal from the image of the position code based on the map information.
  12. The navigation method according to claim 11, wherein the virtual information generation step generates the virtual image relating to the specific area based on the position of the visitor's terminal identified in the map information management step.
  13. The navigation method according to claim 11 or 12, wherein
    a direction mark for identifying the orientation of the visitor's terminal is arranged in the specific area,
    the map information management step manages the direction mark and the map information of the specific area in association with each other,
    when the direction mark is imaged by the terminal of the visitor, the imaging information acquisition step acquires the image of the captured direction mark, and
    the map information management step identifies the orientation of the visitor's terminal from the image of the direction mark based on the map information.
  14. The navigation method according to claim 13, wherein the virtual information generation step generates the virtual image relating to the specific area based on the orientation of the visitor's terminal identified in the map information management step.
  15. The navigation method according to claim 13 or 14, wherein the position code also serves as the direction mark.
  16. The navigation method according to any one of claims 11 to 15, wherein
    when the floor of the specific area is imaged by the terminal of the visitor, the imaging information acquisition step acquires the image of the captured floor, and
    the map information management step identifies the inclination of the visitor's terminal from the image of the floor.
  17. The navigation method according to claim 16, wherein the image of the floor includes at least one of the position code and the direction mark.
  18. The navigation method according to any one of claims 11 to 17, further comprising a specific area information acquisition step, wherein
    the specific area information acquisition step acquires information about the specific area, and
    the virtual information generation step generates the virtual image relating to the specific area based on the information about the specific area.
  19. The navigation method according to any one of claims 11 to 18, further comprising a visitor information acquisition step, wherein
    the visitor information acquisition step acquires information about the visitor, and
    the image information output step outputs the composite image in which the virtual image is superimposed on the captured image according to the information about the visitor.
  20. A program for causing a computer to execute a procedure including an imaging information acquisition procedure, a virtual information generation procedure, an image information output procedure, and a map information management procedure, wherein
    the imaging information acquisition procedure acquires an image captured by a terminal of a visitor in a specific area,
    the virtual information generation procedure generates a virtual image relating to the specific area,
    the image information output procedure outputs a composite image in which the virtual image is superimposed on the captured image,
    the map information management procedure manages a position code arranged in the specific area and map information of the specific area in association with each other,
    when the position code is imaged by the terminal of the visitor, the imaging information acquisition procedure acquires the image of the captured position code, and
    the map information management procedure identifies the position of the visitor's terminal from the image of the position code based on the map information.
  21. The program according to claim 20, wherein the virtual information generation procedure generates the virtual image relating to the specific area based on the position of the visitor's terminal identified by the map information management procedure.
  22. The program according to claim 20 or 21, wherein
    a direction mark for identifying the orientation of the visitor's terminal is arranged in the specific area,
    the map information management procedure manages the direction mark and the map information of the specific area in association with each other,
    when the direction mark is imaged by the terminal of the visitor, the imaging information acquisition procedure acquires the image of the captured direction mark, and
    the map information management procedure identifies the orientation of the visitor's terminal from the image of the direction mark based on the map information.
  23. The program according to claim 22, wherein the virtual information generation procedure generates the virtual image relating to the specific area based on the orientation of the visitor's terminal identified by the map information management procedure.
  24. The program according to claim 22 or 23, wherein the position code also serves as the direction mark.
  25. The program according to any one of claims 20 to 24, wherein
    when the floor of the specific area is imaged by the terminal of the visitor, the imaging information acquisition procedure acquires the image of the captured floor, and
    the map information management procedure identifies the inclination of the visitor's terminal from the image of the floor.
  26. The program according to claim 25, wherein the image of the floor includes at least one of the position code and the direction mark.
  27. The program according to any one of claims 20 to 26, further comprising a specific area information acquisition procedure, wherein
    the specific area information acquisition procedure acquires information about the specific area, and
    the virtual information generation procedure generates the virtual image relating to the specific area based on the information about the specific area.
  28. The program according to any one of claims 20 to 27, further comprising a visitor information acquisition procedure, wherein
    the visitor information acquisition procedure acquires information about the visitor, and
    the image information output procedure outputs the composite image in which the virtual image is superimposed on the captured image according to the information about the visitor.
  29. A computer-readable recording medium recording the program according to any one of claims 20 to 28.
PCT/JP2021/020829 2020-06-15 2021-06-01 Navigation device, navigation system, navigation method, program, and storage medium WO2021256239A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2022532472A JP7294735B2 (en) 2020-06-15 2021-06-01 Navigation device, navigation system, navigation method, program, and recording medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2020102758 2020-06-15
JP2020-102758 2020-06-15

Publications (1)

Publication Number Publication Date
WO2021256239A1 true WO2021256239A1 (en) 2021-12-23

Family

ID=79267901

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2021/020829 WO2021256239A1 (en) 2020-06-15 2021-06-01 Navigation device, navigation system, navigation method, program, and storage medium

Country Status (2)

Country Link
JP (1) JP7294735B2 (en)
WO (1) WO2021256239A1 (en)



Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2009033366A (en) * 2007-07-25 2009-02-12 Advanced Telecommunication Research Institute International Optical marker system
JP2019114078A (en) * 2017-12-25 2019-07-11 ソニー株式会社 Information processing device, information processing method and program
WO2019234936A1 (en) * 2018-06-08 2019-12-12 マクセル株式会社 Mobile terminal, camera position estimation system, camera position estimation method, and signboard

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116242339A (en) * 2023-05-11 2023-06-09 天津市安定医院 5G-based hospital outpatient navigation system
CN116242339B (en) * 2023-05-11 2023-10-03 天津市安定医院 5G-based hospital outpatient navigation system

Also Published As

Publication number Publication date
JPWO2021256239A1 (en) 2021-12-23
JP7294735B2 (en) 2023-06-20

Similar Documents

Publication Publication Date Title
US7840343B2 (en) Mobile terminal having map display function, map display system, information distribution server and program
US20190086214A1 (en) Image processing device, image processing method, and program
KR101510623B1 (en) An exhibition guide system in an exhibition center and the method thereof
US20110148922A1 (en) Apparatus and method for mixed reality content operation based on indoor and outdoor context awareness
US11830249B2 (en) Augmented reality, computer vision, and digital ticketing systems
KR20160090198A (en) Exhibition guide apparatus, exhibition display apparatus, mobile terminal and method for guiding exhibition
US9607094B2 (en) Information communication method and information communication apparatus
KR101896236B1 (en) Method for providing commercial service based on digital signage using wireless communication
US20140278097A1 (en) Systems and methods for guidance
WO2021256239A1 (en) Navigation device, navigation system, navigation method, program, and storage medium
Stähli et al. Evaluation of pedestrian navigation in smart cities
KR102451012B1 (en) System and method for station information service using augmented reality
KR20120087269A (en) Method for serving route map information and system therefor
WO2021256241A1 (en) Guide device, guide system, guide method, program, and recording medium
US20220374941A1 (en) Information sharing apparatus, event support system, information sharing method, and event support system production method
JP2014109539A (en) Guidance information providing device and guidance information providing method
US20240127270A1 (en) Demand level calculation apparatus, event support system, demand level calculation method, and event support system production method
JP2014164572A (en) Information processing device and program
WO2021256242A1 (en) Guide device, guide system, guide method, program, and storage medium
WO2021256240A1 (en) Guide device, guide system, guide method, program, and recording medium
WO2021079821A1 (en) Advertisement information generation device, event assistance system, advertisement information generation method, and event assistance system production method
JP7368048B2 (en) Display device, event support system, display method, and production method of event support system
JP5221580B2 (en) Image display system, portable information terminal, and image display program
CN112565165A (en) Interaction method and system based on optical communication device
KR101785030B1 (en) Online system for providing gallery information

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21825826

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2022532472

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21825826

Country of ref document: EP

Kind code of ref document: A1