WO2020230262A1 - Image display device, image display system, and image display method - Google Patents

Image display device, image display system, and image display method

Info

Publication number
WO2020230262A1
Authority
WO
WIPO (PCT)
Prior art keywords
course
image
space
user
display device
Prior art date
Application number
PCT/JP2019/019112
Other languages
French (fr)
Japanese (ja)
Inventor
奥山 宣隆 (Nobutaka Okuyama)
吉澤 和彦 (Kazuhiko Yoshizawa)
橋本 康宣 (Yasunobu Hashimoto)
眞弓 中出 (Mayumi Nakade)
Original Assignee
マクセル株式会社 (Maxell, Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Application filed by マクセル株式会社 (Maxell, Ltd.)
Priority to PCT/JP2019/019112 (WO2020230262A1)
Priority to JP2021519096A (patent JP7295224B2)
Publication of WO2020230262A1
Priority to JP2023094479A (patent JP2023130347A)

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer

Definitions

  • The present invention relates to an image display device, an image display system, and an image display method for displaying images of another space whose position differs from the real space in which the user is actually located.
  • VR: Virtual Reality
  • HMD: head-mounted display
  • Patent Document 1 discloses an entertainment system in which a player actually moves in the real world and acquires actual information from an existing store or facility by using a mobile terminal.
  • First, a map of the area around the user is displayed on the user's mobile terminal, and the user selects an event spot.
  • When an event spot is selected, a quiz or a mini game is presented to the user. Relying on the map, the user goes to the actual event spot, obtains hints, plays the quizzes and mini games, and earns points by clearing them.
  • In Patent Document 1, the user plays a game in a virtual space using a mobile terminal; since the user moves in the real space and obtains quiz hints by relying on the map displayed on the mobile terminal, it can be regarded as a game that combines virtual space and real space.
  • However, the configuration described in Patent Document 1 is not a fusion of the real space and another space, that is, of the space in which the user is actually located and a space at a different position. Therefore, the user does not experience images of another space with a sense of reality, and the feeling of boredom when moving in the real space is not eliminated.
  • An object of the present invention is to provide an image display device, an image display system, and an image display method that allow a user to view images of another space with a more realistic feeling, or that eliminate the feeling of boredom when moving in the real space.
  • The present invention is an image display device that is worn by a user and displays images of another space whose position differs from the real space in which the user is actually located.
  • The device includes a position detection unit that detects the position of the user,
  • an orientation/posture detection unit that detects the user's frontal orientation and posture (the azimuth angle and elevation angle of the user's line-of-sight direction), a communication interface that acquires, from the outside, course information including image data and shooting positions in the other space, and a control unit.
  • Based on the course information of the other space, the control unit associates a course consisting of multiple points in the other space with a course consisting of multiple points in the real space, and is characterized in that, when the user moves along the course in the real space, the image of the associated point in the other space is displayed according to the user's position.
  • As a result, the user can view the landscape at the corresponding position in the other space as a still image or a moving image. This has the effect of letting the user experience the feeling of actually moving through the other space, or of eliminating the feeling of boredom when moving in the real space.
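The course association just summarized can be sketched as a nearest-point lookup: from the user's detected position, pick the image tied to the closest associated real-space point. A minimal sketch in Python using local planar coordinates in metres; the 20 m threshold and all names are illustrative assumptions, not from the patent.

```python
import math

def nearest_point_image(user_pos, mapping, threshold_m=20.0):
    """Return the other-space image associated with the real-space point
    nearest to user_pos, or None if no point is within threshold_m.

    user_pos: (x, y) in metres; mapping: list of ((x, y), image) pairs.
    """
    best = None
    best_d = threshold_m
    for (px, py), image in mapping:
        d = math.hypot(user_pos[0] - px, user_pos[1] - py)
        if d <= best_d:
            best_d, best = d, image
    return best
```

A display loop would call this each time the position detection unit reports a new fix and show the returned image, if any.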
  • FIG. 1 is a diagram showing the configuration of the image display system 1 according to Example 1.
  • FIG. 2 is a diagram showing the appearance of a user wearing the image display device 2 (head-mounted display 19).
  • FIG. 5A is a diagram showing the processing for registering course information of another space.
  • In Example 1, an image display system 1 that displays images taken in another space as the user moves to the corresponding positions in the real space will be described.
  • FIG. 1 is a diagram showing a configuration of an image display system 1 according to a first embodiment.
  • In FIG. 1, the image display device 2, the image pickup device 3, and the Web server 4 are connected via a communication network 5.
  • the communication network 5 includes a wireless communication network and the Internet.
  • The image display device 2 is, for example, a head-mounted display (HMD).
  • The image display device 2 includes an image sensor 6a, a position detection unit 7a, a memory 8a, a communication interface 9a, a control unit 10a, an orientation/posture detection unit 11a, a display unit 12a, and an audio output unit 13.
  • The image sensor 6a includes an optical lens (not shown) and captures still images or moving images; the captured image data is stored in the memory 8a.
  • the position detection unit 7a measures latitude, longitude, and altitude as position information using, for example, GPS (Global Positioning System).
  • The orientation/posture detection unit 11a includes a geomagnetic sensor and a gyro sensor (not shown), and measures the azimuth angle (the angle measured clockwise from north) and the elevation angle (the angle measured upward from the horizontal plane).
  • The display unit 12a displays images taken in the other space, composite images combining them with other images, the course, and the like.
  • the communication interface 9a is connected to the communication network 5 and transmits / receives data to / from the Web server 4 and the image pickup device 3.
  • the voice output unit 13 includes a speaker (not shown), generates a voice signal by synthesis or the like, and outputs the voice.
  • the control unit 10a includes a CPU and controls all the functional blocks of the image display device 2.
  • a display control application, a Web search application, a map application, and the like are stored in the memory 8a.
  • The image display device 2 is provided with the image sensor 6a so that it can also serve the shooting function of the image pickup device 3 described below; when the device is used only for the image display function, the image sensor 6a may be omitted.
  • The image pickup device 3 includes an image sensor 6b, a position detection unit 7b, a memory 8b, a communication interface 9b, a control unit 10b, an orientation/posture detection unit 11b, and a display unit 12b.
  • Descriptions of functional blocks having the same names as those of the image display device 2 described above are omitted, since their configurations and functions are the same.
  • The image pickup device 3 is for photographing the other space; the image sensor 6b takes still images or moving images and stores the image data in the memory 8b.
  • The position detection unit 7b and the orientation/posture detection unit 11b acquire information at the time of shooting (position, orientation, posture, etc.) and store it in the memory 8b as attached information (metadata) of the image data.
  • the saved image file is transmitted to the Web server 4 and the image display device 2 via the communication interface 9b and the communication network 5.
  • Alternatively, the image file can be passed to the Web server 4 or the image display device 2 via a removable recording medium such as an SD card.
  • The display unit 12b displays the image at the time of shooting and reproduces the data stored in the memory 8b.
  • the Web server 4 has a communication interface 9c, a control unit 10c, and a memory 8c, and an image file and a Web page document are stored in the memory 8c.
  • An image file including the image data captured by the image pickup device 3 and its attached information is received via the communication interface 9c and stored in the memory 8c.
  • The control unit 10c generates course information based on the acquired data and publishes it on the network as a Web page document.
  • As a method of disclosing the course information, the latitude, longitude, and altitude may be indicated directly as point information of the course, a shooting location taken from the attached information of an image file may be designated as a point on the course, or these methods may be combined.
  • The above course is referred to as the course in the other space in this example. By publishing the course in the other space as a Web page document in this way, the image data of the other space can be used not only by the photographer but also by other interested users.
  • the outline of the operation of the image display system 1 of this embodiment is as follows.
  • The image pickup device 3 captures an image at each point on the course in the other space, and the image data and the attached information of each shooting point are transmitted to the Web server 4 and registered.
  • This registration process may be performed by the user who owns the image display device 2, or may be performed by another user.
  • For shooting, the image sensor 6a in the image display device 2 can also be used; in that case, the image pickup device 3 becomes unnecessary.
  • When the user moves in the real space, the images of the other space registered in the Web server 4 are displayed on the image display device 2 according to the moving position.
  • For this, a course in the real space whose distances and directions approximate, in a predetermined geometric relationship, the course in the other space is determined.
  • Then, the registered image of the other space is displayed at the moving point in the real space corresponding to the shooting point in the other space.
  • FIG. 2A shows a state in which the user 20 wears the transmissive HMD 19 on the head.
  • The display unit 12a of the transmissive HMD 19 has a configuration in which a semitransparent prism 22 is arranged in front of one eye (the right eye) 21a, and the prism 22 is irradiated with the reproduced image of a display element 23 placed at its side.
  • an image sensor 6a is arranged on the front surface of the transmissive HMD 19, and a control unit (drive unit) 10a is arranged on the upper surface.
  • FIG. 2B shows an example of a display screen (display unit 12a) when the user 20 looks at the prism 22.
  • The landscape image 25 (indicated by broken lines) in front of the user in the real space is seen through the prism 22, with the reproduced image 24 (indicated by solid lines) of the other space from the display element 23 superimposed on it.
  • the display position (display direction) is, for example, the center of the display screen 12a.
  • It is also possible to display the corresponding reproduced image 24 only when the user's line-of-sight direction in the real space matches the direction in which the image pickup device 3 photographed the image in the other space.
  • In this example, the prism 22 and the display element 23 are arranged for one eye (the right eye) 21a, but they can also be arranged for the other eye (the left eye) 21b. Further, the transmittance of the prism 22 is set appropriately, according to whether the device is monocular or binocular, so that both the landscape image 25 and the reproduced image 24 can be viewed in a well-balanced manner.
  • FIG. 3 is a diagram showing an example of acquiring an image of each point on a course in another space.
  • The photographer (the user of the image display device 2 or another user) moves along the course 26a (indicated by broken-line arrows) in the other space and, with the image pickup device 3, photographs the surrounding scenery to be recommended at each point on the course (scenery that will be memorable for the user).
  • The route that can be traveled without duplication while shooting each recommended object is called the recommended course.
  • At the start point P111 of the course, the gate 28 serving as a landmark is photographed, and at the point P112, the building 29 is photographed.
  • At the point P113, a landscape including a branching road is photographed.
  • At the point P114, the tower 30 is photographed, and at the end point P115 of the course, the torii 31 is photographed.
  • Off the recommended course, the stone monument 32 is photographed at the point P116.
  • The image data taken at these points and the attached information are stored in the memory 8b as image files and transmitted to the Web server 4. The data for each point is saved with a distinction as to whether or not the point is on the recommended course 26a.
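For illustration, an image file with its attached information (as stored in the memory 8b and sent to the Web server 4) might be modelled as a small record like the following; every field name here is an assumption, not taken from the patent.

```python
def make_image_record(image_id, lat, lon, alt, azimuth_deg, elevation_deg,
                      on_recommended_course, related_symbol=None):
    """Bundle image data with its attached information (metadata):
    shooting position, shooting orientation, a flag telling whether the
    point is on the recommended course, and an optional related symbol
    such as a branch-guide arrow."""
    return {
        "image_id": image_id,
        "position": {"lat": lat, "lon": lon, "alt": alt},
        "azimuth_deg": azimuth_deg,      # clockwise from north
        "elevation_deg": elevation_deg,  # upward from the horizontal
        "on_recommended_course": on_recommended_course,
        "related_symbol": related_symbol,
    }
```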
  • FIG. 4 is a diagram showing an example of setting a course in the real space and displaying the images taken in the other space.
  • In FIG. 4, a course 27a (indicated by solid arrows) is set in the real space, and the points P211 to P215 on the course are shown.
  • Routes off the course are indicated by the broken lines 39a and 39b.
  • The procedure for setting the course will now be explained.
  • A user who intends to move in the real space displays a map of the real space on the display unit 12a of the image display device 2 using a map application or the like, and designates the start point P211 and the end point P215 of the course.
  • The start point P211 is associated with the start point P111 in the other space shown in FIG. 3.
  • Then, a course that can be traveled in the real space is determined so as to approximately have a predetermined geometric relationship (a difference in orientation and scale) with the course in the other space.
  • Here, since the conditions of roads and buildings differ between the real space and the other space, the shape of the course that the user can travel in the real space does not exactly match the shape of the course in the other space. Therefore, directions and intermediate points having a similar geometric relationship are set. Details of the course setting will be described later with a PAD.
  • The intermediate points P212 to P214 in the real space are determined so as to correspond to the intermediate points P112 to P114 of the course in the other space shown in FIG. 3, and the course 27a in the real space connecting them is set.
  • When the user reaches the points on the course 27a, the captured images 24a to 24e of the corresponding recommended objects (structures and the scenery of the branch point) taken in the other space are displayed on the display unit 12a of the image display device 2.
  • At this time, the orientation/posture detection unit 11a detects the user's line-of-sight direction, and by displaying a recommended object when its registered direction lies in the user's line-of-sight direction, the sense of presence can be further enhanced.
  • point P213 corresponds to point P113, which is a branch point in another space.
  • The intermediate points P112 and P114 on the other-space course 26a are not branch points, but at the corresponding points P212 and P214 of the real-space course 27a, the course must be changed (turn right) at the immediately preceding intersection. Therefore, the position detection unit 7a of the image display device 2 detects that the intersection has been reached, and the voice output unit 13 instructs the user by voice to turn right. As a result, the user can move without departing from the course 27a set in the real space. When the user takes the routes of the broken lines 39a and 39b that deviate from the recommended course, the image 24f of the stone monument is displayed at the position of the point P216.
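The intersection guidance described above amounts to a proximity check against registered turn points. A minimal sketch; the 15 m trigger distance and all names are illustrative assumptions.

```python
import math

def guidance_for_position(user_pos, turn_points, trigger_m=15.0):
    """Return a spoken instruction (e.g. "turn right") when the user
    comes within trigger_m of a registered turn point, else None.

    turn_points: list of ((x, y) in metres, instruction) pairs.
    """
    for (px, py), instruction in turn_points:
        if math.hypot(user_pos[0] - px, user_pos[1] - py) <= trigger_m:
            return instruction
    return None
```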
  • In this way, the landscape images of the other space, a place different from the real space, can be viewed according to the moving position, and the other space can be experienced with a sense of reality. Furthermore, at a branch point, a branch guide map combined with the landscape image can be seen, which has the effect of allowing the user to prepare for the course change at the branch point.
  • FIG. 5A is a diagram showing the processing for registering course information of another space, expressed as a PAD (Problem Analysis Diagram) in which the processing structure is layered.
  • The control unit 10b of the image pickup device 3 reads the captured image data and its attached information from the memory 8b and transmits them to the Web server 4 via the communication interface 9b. Since a plurality of courses are registered in the Web server 4, a course number is assigned to each of them.
  • In step S11, the image data of the other space taken by the image pickup device 3 and its attached information (position information, etc.) are registered as image files.
  • Image files not on the recommended course may also be included (for example, the data of the stone monument 32 in FIG. 3).
  • In step S12, the following processing is repeated for each point (P111 to P115) on the recommended course.
  • In step S12a, the image files to be registered for the designated point (the images of the objects 28 to 31 at P111 to P115 in FIG. 3) are selected.
  • In step S12b, point information is set from the attached information of the selected image files. At this time, if shooting position information exists, it is used as the point information; if not, the position is specified from a map application or the like and registered as the point information.
  • Related figures/symbols to be displayed superimposed on an image, such as the arrow of a branch guide, are optionally registered as part of the attached information. In the examples of FIGS. 3 and 4, the arrow symbol 24c', which serves as a route guide at the branch point P213 (P113), is registered as attached information.
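Steps S11 to S12b could be sketched as below, with a plain dict standing in for the Web server's memory. All identifiers are illustrative; the manual map-application fallback of step S12b is represented simply by leaving the position as None.

```python
def register_course(server, course_number, shots, recommended_points):
    """Step S11: register all image files; step S12: for each point on
    the recommended course, select its image files (S12a) and set the
    point information from the attached shooting position (S12b)."""
    server.setdefault("images", {}).update(shots)            # step S11
    course = {"number": course_number, "points": []}
    for pid in recommended_points:                           # step S12
        files = [k for k, v in shots.items() if v.get("point") == pid]
        pos = next((shots[f]["position"] for f in files
                    if shots[f].get("position") is not None), None)
        course["points"].append(
            {"point": pid, "position": pos, "files": files})
    server.setdefault("courses", {})[course_number] = course
    return course
```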
  • FIG. 5B is a diagram showing the contents of the course information registered by the process of FIG. 5A.
  • the course information 44 includes a course number, course point information, and image file information.
  • the course number is a number for identifying the course.
  • The course point information is the position information (latitude, longitude, altitude) of the points from the start point to the end point of the course. The altitude can be omitted if the position detection unit does not support it.
  • The image file information is a list of (or links to) the image files 45 of the images taken on or around the course. Each image file 45 referred to by the image file information includes image data 16 and its attached information 17 (generally also called metadata).
  • the attached information 17 includes a shooting position, related figures / symbols, and the like.
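The course information 44 just described (course number, course point information, image file information) might be bundled, for illustration, as follows; the field names are assumptions, not taken from the patent.

```python
def make_course_info(course_number, course_points, image_files):
    """Course information: a course number, point positions
    (latitude, longitude, altitude) from start to end, and a list of
    image files, each holding image data plus attached information."""
    return {
        "course_number": course_number,
        "course_points": course_points,  # [(lat, lon, alt), ...]
        "image_files": image_files,      # [{"data": ..., "attached": {...}}]
    }
```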
  • FIG. 6 is a diagram showing the process of moving along the course in the real space and displaying the registered images, with the processing structure shown as a layered PAD.
  • The control unit 10a of the image display device 2 takes in the image data and its attached information from the Web server 4 via the communication interface 9a, sets a course in the real space, and displays the corresponding images according to the moving position.
  • Step S21 specifies the course in the other space whose registered images are to be displayed.
  • The course number (for example, No. 1) is specified, and the corresponding course information and image files are fetched from the Web server 4 in step S21b.
  • In step S22, the user specifies a start point and a goal point in the real space.
  • The position can be designated from the map application, or the user's current position obtained from the position detection unit 7a of the image display device 2 can be designated.
  • In step S23, the following processing is repeated for each section defined between adjacent points on the course. That is, at the start point of a section, the end point (target point) of the section is set; when the user reaches the end point of the section, the registered image is displayed and the end point of the next section is set.
  • In step S23a, in order to set a course in the real space that approximates the course in the other space, the end point of the section is determined based on the geometric relationship between the two courses. For this, the straight-line direction and distance from the section start point Ps (at first, the course start point) to the goal point Pg of the course are compared between the other space and the real space, and the orientation difference and scale between the two are obtained.
  • For example, suppose the straight-line direction from the section start point Ps to the goal point Pg in the other space is east and the distance is 5 km,
  • while the straight-line direction from the section start point Ps' to the goal point Pg' in the real space is north and the distance is 2 km.
  • In this case, the orientation of the real space relative to the other space is rotated 90 degrees counterclockwise, and the scale is 0.4 times.
  • By applying this rotation and scale, the section end point Pe' in the real space corresponding to the section end point Pe in the other space is obtained.
  • The position of the section end point Pe' obtained in this way is not necessarily a travelable place in the real space; it may, for example, overlap an area where a building exists. Therefore, referring to a map application for the real space, a travelable position (for example, on a road) close to the position Pe' is obtained and set as the target section end point Pe''. A route from the section start point Ps' to the section end point Pe'' is then determined with reference to the map application, and the course in the real space is set.
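The orientation difference and scale used in step S23a follow directly from the two start-to-goal vectors. A minimal sketch in local planar coordinates (x = east, y = north, metres); it reproduces the example in the text, where east 5 km maps to north 2 km (a 90-degree counterclockwise rotation and a scale of 0.4). Names are illustrative, and snapping Pe' to a travelable road position Pe'' is left to a map application.

```python
import math

def map_section_end(ps, pg, ps_r, pg_r, pe):
    """Map a section end point Pe in the other space to Pe' in the real
    space, applying the rotation and scale that carry the other-space
    line Ps->Pg onto the real-space line Ps'->Pg'."""
    vx, vy = pg[0] - ps[0], pg[1] - ps[1]          # other space: start -> goal
    wx, wy = pg_r[0] - ps_r[0], pg_r[1] - ps_r[1]  # real space: start -> goal
    scale = math.hypot(wx, wy) / math.hypot(vx, vy)
    rot = math.atan2(wy, wx) - math.atan2(vy, vx)  # orientation difference
    ex, ey = pe[0] - ps[0], pe[1] - ps[1]          # other space: start -> Pe
    c, s = math.cos(rot), math.sin(rot)
    return (ps_r[0] + scale * (c * ex - s * ey),
            ps_r[1] + scale * (s * ex + c * ey))
```

With the text's numbers, the other-space midpoint (2500, 0) maps to (0, 1000) in the real space.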
  • In step S23b, it is determined whether or not the user has approached the section end point Pe''.
  • The user's moving position in the real space is detected by the position detection unit 7a. When the moving position comes near the section end point Pe'', for example within 10% of the section distance from the end point Pe'', it is determined that the section end point has been approached.
  • If the determination in step S23b finds that the section end point has been approached, the registered image of the corresponding point in the other space is displayed on the display unit 12a in step S23c. After that, in step S23d, it is repeatedly determined whether or not the section end point Pe'' on the course in the real space has been reached. When the section is determined to have ended, that position is set as the start point of the next section in the real space, and the processing returns to step S23.
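The within-10%-of-the-section-distance criterion of step S23b is a single distance comparison; a sketch under the same planar-coordinate assumption (names illustrative):

```python
import math

def near_section_end(user_pos, section_start, section_end, fraction=0.10):
    """True when the user is within fraction (10% in the text's example)
    of the section distance from the section end point."""
    section_d = math.hypot(section_end[0] - section_start[0],
                           section_end[1] - section_start[1])
    d = math.hypot(user_pos[0] - section_end[0],
                   user_pos[1] - section_end[1])
    return d <= fraction * section_d
```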
  • When there is a branch point on the course in the other space, that branch point is regarded as the end point of a section.
  • Point P213 in FIG. 4 corresponds to this case.
  • Other conditions for determining the end of a section include the case where the section distance in the real space reaches a predetermined distance even at a non-branch point, and the case where the point is one specified by the user. The points P212 and P214 in FIG. 4 correspond to this case.
  • A simulated run of the course may also be performed on the map application.
  • The real-space course automatically created by the process of FIG. 6 may also be corrected manually, before or during the movement. For example, when arriving at the point P213, a round-trip course 39a to the point P216 outside the automatically generated course 27a may be added.
  • When the deviation between the course in the other space and the course in the real space becomes large, the voice output unit 13 may notify the user by voice, or a message may be displayed on the display unit 12a. As a result, the user can move along the course while being aware that the difference between the two courses is large due to the geographical restrictions of the real space.
  • When the user is about to deviate from the course, the voice output unit 13 may emit an alarm sound, or a warning message may be displayed on the display unit 12a. As a result, the user can move along the course 27a in the real space without going astray.
  • In the above description, the images captured by the image pickup device 3 are registered in the Web server 4, and the image display device 2 takes in and displays the images registered in the Web server 4.
  • However, when the user of the image display device 2 photographs the other space himself or herself, the image display device 2 can shoot and save the images, so the image pickup device 3 and the Web server 4 can be omitted.
  • According to Example 1, the scenery at the corresponding position in the other space can be viewed as a still image or a moving image, so the user feels as if actually moving through the other space.
  • Moreover, the course in the real space along which the user moves is not merely similar in shape to the course in the other space, but is set flexibly according to geographical conditions such as the roads of the real space, which makes the system highly practical.
  • In Example 2, the setting of the course in the real space will be described for the case where the course in the other space has altitude differences while the course in the real space is flat.
  • FIG. 7 is a diagram showing an example of a course in another space with different altitudes.
  • The other space has undulations; the altitude differences are represented by contour lines 34 (indicated by dotted lines), and the altitude [m] is written on each contour line.
  • The course 26b (indicated by solid arrows) in the other space has altitude differences, running from the start point P121 to the end point P126.
  • The point P122 has the highest altitude and the point P123 the lowest.
  • The points P124 and P125 are on high ground. Accordingly, the walking speed decreases in the uphill sections (points P121 to P122 and points P123 to P124) and, conversely, increases in the downhill sections (points P122 to P123 and points P125 to P126).
  • There is a hut 33 at the point P124, and its photographed image is registered. The same applies to the objects photographed at the other points, but their description is omitted here.
  • The image data taken at each point is registered in the Web server 4 as in Example 1, except that, as shown in the attached information 17 of FIG. 5B, the altitude is added to the shooting position information obtained from the position detection unit 7b at the time of shooting.
  • The altitude information can also be measured with an altitude sensor, or the altitude value at the position can be read from map information.
  • FIG. 8 shows an example of a flat course 27b with no altitude difference in the real space, associated with the course 26b in the other space of FIG. 7. Since there is no altitude difference in the real space, contour lines are omitted.
  • The course 27b in the real space runs from the start point P221 to the end point P226, and the orientation of each section between adjacent points corresponds, with a predetermined orientation difference, to the orientation of the corresponding section of the course 26b in the other space.
  • However, the distance of each section in the real space is set in consideration of the fact that the walking speed changes with the altitude differences in the other space.
  • Specifically, the position information and altitude information of each point of the course 26b in the other space are acquired from the Web server 4, and the walking time of each section is estimated. Intermediate points are then set on the course 27b so that the walking time of each section of the course 27b in the real space corresponds, at a predetermined scale, to the walking time of the corresponding section of the course 26b in the other space.
  • As a result, a section corresponding to an uphill is made longer,
  • and a section corresponding to a downhill is made shorter.
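The walking-time matching above can be sketched by assuming a slope-dependent walking-speed model and converting each other-space section's walking time back into a flat-ground distance. The specific model below (speed falling linearly with grade, floored at 20% of flat speed) is an illustrative assumption; the patent does not prescribe one.

```python
def flat_section_distances(sections, flat_speed_mps=1.4, time_scale=1.0):
    """For each other-space section (distance_m, altitude_gain_m),
    return the flat real-space distance whose walking time matches the
    section's estimated walking time at time_scale."""
    out = []
    for dist, gain in sections:
        grade = gain / dist if dist else 0.0
        # assumed model: a +-10% grade changes speed by about -+50%
        speed = flat_speed_mps * max(0.2, 1.0 - 5.0 * grade)
        walk_time_s = dist / speed
        out.append(time_scale * walk_time_s * flat_speed_mps)
    return out
```

Uphill sections (positive gain) come out longer than the original distance and downhill sections shorter, matching the rule stated above.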
  • When the user reaches each point on the course 27b in the real space, the position detection unit 7a detects this and, as in Example 1, the registered image of the other space (for example, the image 24g) is displayed on the display unit 12a.
  • As described above, in Example 2, taking the altitude differences in the other space into consideration, the distances between the points in the real space are corrected so as to cancel the differences in the user's travel time caused by the altitude differences.
  • As a result, the real-space course can be set in consideration of the walking time.
  • In Example 3, an image display device 2 that allows the user to run while viewing the images of a course in another space when running a lap course in the real space will be described.
  • Here, a case where a smartphone 40 is used as the image display device 2 will be described.
  • FIG. 9 is a diagram showing a course 26c in another space.
  • The course starts at the point P131 and finishes at the goal point P135.
  • A building 35 exists at the intermediate point P132, and its photographed image is registered in the Web server 4.
  • FIG. 10 is a diagram showing a course 27c in real space.
  • The shape of the course 27c in the real space is unrelated to the shape of the course 26c in the other space; points are set at positions corresponding to the running distance.
  • The start is the point P231 on the start line 38, and the goal is the point P235, at the same position after two laps of the track.
  • The scale is calculated from the ratio of the total lengths of the course 26c in the other space and the course 27c in the real space, and each intermediate passing point on the course 27c in the real space is automatically determined as the position whose distance from the start is the distance from the start of the course 26c in the other space to the corresponding passing point multiplied by the scale.
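That proportion can be written directly; a sketch with illustrative names:

```python
def passing_distances(other_course_len_m, real_course_len_m,
                      other_passing_distances_m):
    """Place each other-space passing point on the real-space lap
    course: scale = ratio of total lengths; each point sits at
    (distance from start in the other space) * scale along the track."""
    scale = real_course_len_m / other_course_len_m
    return [d * scale for d in other_passing_distances_m]
```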
  • In this example, beacons 36a to 36d are placed at the start/goal and at each passing point on the course 27c in the real space so that the position information can be measured accurately.
  • The receivable range, within which the signal from a beacon can be received above a certain threshold, is indicated by the broken lines 37.
  • The user starts from the point P231 (beacon 36a), proceeds counterclockwise, passes the point P232 (beacon 36c), and then completes one lap at the same point as the start (beacon 36a). The user further passes the point P233 (beacon 36b) and the point P234 (beacon 36d), and completes the second lap at the point P235 (beacon 36a) on the goal line 38 to finish.
  • FIG. 11 is a diagram showing a state of a user wearing an image display device.
  • the smartphone 40 which is the image display device 2
  • the smartphone 40 can detect the signals emitted by the beacons 36a to 36d and know the current position by using the Bluetooth (registered trademark) function as the position detection unit 7a.
  • the Bluetooth registered trademark
  • the smartphone 40 install an application that associates points P231 to P235 on the course 27c in the real space with points P131 to P135 on the course 26c in another space. Further, the image data on the course 26c in another space is fetched in advance from the Web server 4 via the communication network 5, or is in a state where it can be fetched from the Web server 4 at any time in response to a request from the application.
• after launching the application on the smartphone 40, the first group of athletes waits in front of the starting point P231, inside the receivable range 37 of the signal from beacon 36a, and the race starts at 00 seconds of a given minute. After this start, the second group of athletes likewise stands by in front of point P231. With this method, a wave start is possible by dividing the athletes into groups of several runners each. The starter may simply signal the start at 00 seconds of each minute according to the posted time, or merely watch over the athletes to guard against flying (false) starts.
• the smartphone 40 records, as the goal time, the time at which the signal received from beacon 36a reaches its maximum.
• the goal time of each athlete is transmitted, for example, to a race organizer's tallying device (not shown) via the wireless communication network 5 and is used to tally the race ranking.
  • FIGS. 12A to 12C are diagrams showing an example of a display screen of the smartphone 40.
  • the display when the user approaches the point P232 in the real space corresponding to the point P132 in another space is shown.
• on the course 27c in the real space, the user goes straight near point P232, but on the course 26c in the other space, the route turns right at point P132.
• the display unit 12a therefore changes from the image 24h of FIG. 12A, which turns right in the other space, to the image 24i of FIG. 12B.
• these images include buildings 35a, 35b and roads 42a, 42b photographed in the other space.
• in the other space the route turns right there, but in the real space the direction of travel is always displayed as the image seen straight ahead.
• a fixed correspondence in the traveling direction means, for example, a state in which the correspondence is always shifted clockwise by 30 degrees.
  • FIG. 12C shows a display example reflecting the user's frontal orientation and posture (azimuth and elevation in the user's line-of-sight direction).
• FIG. 12C shows a display image 24j when the user twists his/her upper body to the left and turns his/her line of sight downward in order to see the image 35b of a building in the other space.
• when the image has a wide viewing angle, such as a 360° angle of view, only the portion within the angle of view in the smartphone direction that matches the traveling direction is displayed.
• the orientation/posture detection unit 11a of the smartphone 40 can detect the movement of the user's upper body and display the image within the angle of view in the line-of-sight direction.
• in the display image 24j, the image 35c of the building is displayed in front, and the image 42c of the road is displayed below it.
  • the present invention can be applied to a competition or the like that goes around the same course while looking at the scenery of another space.
  • long-distance running with many laps makes it easy for runners to get bored, but since they can run while looking at different scenery from time to time, it is effective in eliminating the feeling of boredom.
  • FIG. 11 shows an example in which the smartphone 40 is attached to the body using a fixing device 41 so that it can be seen hands-free.
• alternatively, the smartphone 40 may be put in a pocket of a rucksack or the like and set to emit an alarm sound when approaching each point; on hearing the alarm sound, the user knows the next point is near and can take out the smartphone 40 and look at its screen.
• beacons are used here, but at a minimum it is sufficient if the video can be played only within the range where a beacon signal is received.
  • the running speed of the user between adjacent beacons may be calculated, and the playback speed of the recorded moving image may be changed accordingly.
• the moving image may be aligned with the course based on the position coordinates obtained from the position detection unit and played back accordingly. In that case, position information may be added to the captured moving image at regular time intervals or when passing through the points.
• the case where the user moves by walking or running is shown, but the user may instead move by bicycle, by a car that the user does not drive, or by public transportation.
  • the user may change the intermediate points and goals of the course during the movement.
• an image such as the view from a car window, photographed in the other space in advance and registered in a state accessible from the image display device, may also be displayed on the image display device while moving through each section.
  • the present invention is not limited to the above-mentioned examples, and includes various modifications.
• the above-described embodiments have been described in detail in order to explain the present invention in an easy-to-understand manner, and the invention is not necessarily limited to embodiments including all the described configurations. The configuration of one embodiment can be supplemented with the configuration of another embodiment. Further, for part of the configuration of each embodiment, other configurations can be added, deleted, or replaced.
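The scale-based placement of passing points described in the list above (using the ratio of the total lengths of course 26c and course 27c) can be sketched as follows. This is a minimal illustration, not code from the patent; the function name and the example lengths are assumed.

```python
# Minimal sketch of the scale mapping described above: each passing point
# on the real-space course is placed at a distance from the start equal to
# its distance along the other-space course multiplied by the scale,
# where scale = (real course length) / (other-space course length).
# Function name and example values are illustrative, not from the patent.

def map_passing_points(other_course_len, real_course_len, other_distances):
    """Map distances along the other-space course to the real-space course."""
    scale = real_course_len / other_course_len
    return [d * scale for d in other_distances]

# Example: a 4000 m other-space course mapped onto two laps of a 400 m
# track (800 m total), with passing points every 1000 m on the original.
real_positions = map_passing_points(4000.0, 800.0, [0.0, 1000.0, 2000.0, 3000.0, 4000.0])
print(real_positions)  # [0.0, 200.0, 400.0, 600.0, 800.0]
```

Each mapped distance would then be matched to a beacon position (36a to 36d) on the real-space course.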

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Navigation (AREA)
  • Processing Or Creating Images (AREA)

Abstract

An image display device 2 that is worn by a user and displays an image of a separate space differing from a real space in which the user is actually located, comprising: a position detection unit 7a for detecting the position of the user; a communication interface 9a for acquiring course information from the outside that includes the image data of the separate space and an imaged position; a memory 8a for preserving the image data and course data acquired by the communication interface; a display unit 12a for displaying the image of the separate space preserved in the memory; and a control unit 10a for controlling the display of an image of the separate space on the display unit. The control unit 10a maps a course 26a composed of a plurality of separate space points to a course 27a composed of a plurality of real space points on the basis of separate space course information and displays images 24a-24f of mapped separate space points in accordance with the user's position when the user moves along the real space course 27a.

Description

Image display device, image display system, and image display method
The present invention relates to an image display device, an image display system, and an image display method for displaying an image of another space whose position differs from the real space in which the user is actually located.
VR (Virtual Reality) technology, which allows a user to experience the sensation of being in a virtual world by creating and displaying image content of that world on a computer or the like, and AR (Augmented Reality) technology, which superimposes computer-generated image content on real scenery, are known and have been put into practical use in terminals such as head-mounted displays (hereinafter, HMDs) and smartphones.
As a game applying the above technology, Patent Document 1 discloses an entertainment system in which a player, using a mobile terminal, actually moves through the real world and acquires real information from existing stores and facilities. In this game, "a map of the area around the user is displayed on the user's mobile terminal, and the user selects an event spot. When an event spot is selected, a quiz or mini game is presented to the user. Relying on the map, the user goes to the actual event spot, obtains hints, plays the quizzes and mini games, and earns points by clearing them."
JP-A-2002-49681
According to VR and AR technology, a user can, while remaining in the space where he or she is actually located (hereinafter, the "real space"), view images of a space at a different position (hereinafter, the "other space") and thereby experience being in that other space.
As for images of another space, when viewing, for example, the scenery of a memorable place the user visited in the past, or of an interesting place not yet visited, simply playing back images of that place does not let the user experience it with a sense of presence. To heighten the sense of presence, the real space and the other space should be fused: for example, by viewing images of the other place while actually moving through the current place, the user can experience the other place as if actually moving through it. Alternatively, when a user moves along (walks or runs) a fixed course in the real space, merely seeing the same familiar scenery can become boring. In such a case, if images of another space can be viewed as the user moves, walking or running gains freshness and the user's boredom is relieved.
In Patent Document 1, the user plays a game in a virtual space using a mobile terminal, but the configuration is one in which the user moves through the real space, relying on a map displayed on the mobile terminal, to obtain quiz hints; it can be regarded as a game combining a virtual space and the real space. However, the configuration described in Patent Document 1 does not fuse the real space with another space, that is, it does not combine the space where the user is actually located with a space at a different position. Therefore, the user neither experiences images of another space with a sense of presence nor has boredom relieved when moving through the real space.
An object of the present invention is to provide an image display device, an image display system, and an image display method that enable a user to view images of another space with a greater sense of presence, or that relieve boredom when moving through the real space.
The present invention is an image display device that is worn by a user and displays images of another space whose position differs from the real space in which the user is actually located, comprising: a position detection unit that detects the position of the user; an orientation/posture detection unit that detects the orientation and posture of the user's front direction (the azimuth and elevation angles of the user's line of sight); a communication interface that acquires, from outside, course information including image data of the other space and the shooting positions; a memory that stores the image data and course information acquired by the communication interface; a display unit that displays the images of the other space stored in the memory; an audio output unit that outputs audio; and a control unit that controls the display of the images of the other space on the display unit. Based on the course information of the other space, the control unit associates a course consisting of a plurality of points in the other space with a course consisting of a plurality of points in the real space, and, when the user moves along the course in the real space, displays the image of the associated point in the other space according to the user's position.
According to the present invention, when a user moves along a course in the real space, the scenery at the corresponding position in another space can be viewed as still images or moving images. This gives the user the sense of presence of actually moving through the other space, or relieves boredom when moving through the real space.
[Brief description of the drawings]
• A diagram showing the configuration of an image display system 1 according to a first embodiment.
• A diagram showing the appearance of a user wearing the image display device 2 (head-mounted display 19).
• A diagram showing an example of the display screen of the image display device 2.
• A diagram showing an example of acquiring an image of each point on a course in another space.
• A diagram showing an example of setting a course in the real space and displaying images taken in another space.
• A diagram showing the processing steps for registering course information of another space.
• A diagram showing the contents of the course information registered by the processing of FIG. 5A.
• A diagram showing the processing steps for moving along the course in the real space and displaying the registered images.
• A diagram showing an example of a course in another space with altitude differences according to a second embodiment.
• A diagram showing an example of a flat course with no altitude difference in the real space.
• A diagram showing a course in another space according to a third embodiment.
• A diagram showing a course that goes around a track as the course in the real space.
• A diagram showing a user wearing the image display device 2 (smartphone 40).
• A diagram showing an example of the display screen of the smartphone 40.
• A diagram showing an example of the display screen of the smartphone 40.
• A diagram showing an example of the display screen of the smartphone 40.
Hereinafter, embodiments of the present invention will be described in detail with reference to the drawings. In principle, the same parts in the drawings are denoted by the same reference numerals, and repeated description thereof is omitted.
In the first embodiment, an image display system 1 that displays images taken in another space when the user moves to the corresponding position in the real space will be described.
[Image display system]
FIG. 1 is a diagram showing the configuration of the image display system 1 according to the first embodiment. In the image display system 1, an image display device 2, an image pickup device 3, and a Web server 4 are connected via a communication network 5. The communication network 5 includes a wireless communication network and the Internet. Here, a case where a head-mounted display (HMD) is used as the image display device 2 will be described. The internal configuration of each device is described below.
The image display device 2 includes an image sensor 6a, a position detection unit 7a, a memory 8a, a communication interface 9a, a control unit 10a, an orientation/posture detection unit 11a, a display unit 12a, and an audio output unit 13. The image sensor 6a includes an optical lens (not shown) and captures still images or moving images; the captured image data is stored in the memory 8a. The position detection unit 7a measures latitude, longitude, and altitude as position information using, for example, GPS (Global Positioning System). The orientation/posture detection unit 11a includes a geomagnetic sensor and a gyro sensor (not shown) and measures the azimuth angle (measured clockwise from north) and the elevation angle (measured upward from the horizontal plane). The display unit 12a takes in or generates, and displays, images taken in the other space, composites with them, courses, and the like. The communication interface 9a is connected to the communication network 5 and transmits and receives data to and from the Web server 4 and the image pickup device 3. The audio output unit 13 includes a speaker (not shown), generates audio signals by synthesis or the like, and outputs audio. The control unit 10a consists of a CPU and controls all the functional blocks of the image display device 2. The memory 8a stores a display control application, a Web search application, a map application, and the like (not shown).
Here, the image display device 2 is provided with the image sensor 6a so that it can also serve the shooting function of the image pickup device 3 described next; however, when used only for the image display function, the image sensor 6a may be omitted.
The image pickup device 3 includes an image sensor 6b, a position detection unit 7b, a memory 8b, a communication interface 9b, a control unit 10b, an orientation/posture detection unit 11b, and a display unit 12b. Functional blocks with the same names as those of the image display device 2 described above have the same configurations and functions, and their description is omitted.
The image pickup device 3 is for photographing the other space; it captures still images and moving images with the image sensor 6b and stores the image data in the memory 8b. In addition, the position detection unit 7b and the orientation/posture detection unit 11b acquire information at the time of shooting (position, orientation, posture, etc.), which is stored in the memory 8b as attached information (metadata) of the image data. At that time, pairing the image data with the attached information and saving them as one image file makes the association easy. The saved image file is transmitted to the Web server 4 and the image display device 2 via the communication interface 9b and the communication network 5. Alternatively, although not shown, it can be passed to the Web server 4 or the image display device 2 via a removable recording medium such as an SD card. The display unit 12b reproduces and displays images at the time of shooting and data stored in the memory 8b.
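The pairing of image data with its shooting-time metadata into one file, as described above, might look like the following sketch; the field names and values are assumptions for illustration, since the patent does not specify a file format.

```python
import json

# Sketch of saving image data and its attached information (position from
# unit 7b, orientation/posture from unit 11b) as one record, which makes
# the later association easy. Field names and example values are
# illustrative only, not specified by the patent.

def make_image_record(image_path, lat, lon, alt, azimuth_deg, elevation_deg):
    """Bundle an image reference with its shooting-time metadata."""
    return {
        "image": image_path,
        "position": {"lat": lat, "lon": lon, "alt": alt},
        "orientation": {"azimuth": azimuth_deg, "elevation": elevation_deg},
    }

record = make_image_record("P112_building29.jpg", 35.6586, 139.7454, 25.0, 90.0, 10.0)
print(json.dumps(record, indent=2))
```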
The Web server 4 has a communication interface 9c, a control unit 10c, and a memory 8c; image files and Web page documents are stored in the memory 8c. An image file including the image data captured by the image pickup device 3 and its attached information is received via the communication interface 9c and taken into the memory 8c. The control unit 10c generates course information based on the acquired data and publishes it on the net as a Web page document. The course information may be disclosed by directly indicating latitude, longitude, and altitude as point information on the course, by designating shooting locations as points on the course from the attached information of the image files, or by a combination of both. In this embodiment, this course is called the course in the other space. By publishing the course in the other space as a Web page document in this way, the image data of the other space can be used not only by the photographer but also by other interested users.
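A course document assembled by the control unit 10c could, under the disclosure methods above, look like the following sketch: each point carries coordinates given either directly or taken from an image file's attached information, plus a flag for whether the point lies on the recommended course. The record layout and all names are hypothetical, not specified by the patent.

```python
# Sketch of course information generated by the Web server: an ordered
# list of points, each with coordinates, an optional image reference, and
# an on-course flag (cf. the on-course / off-course distinction for
# points such as P116 in the embodiment). Layout is an assumption.

def build_course_info(course_number, records):
    """Assemble a course document from (id, lat, lon, alt, image, on_course) tuples."""
    return {
        "course": course_number,
        "points": [
            {"id": pid, "lat": lat, "lon": lon, "alt": alt,
             "image": image, "on_course": on_course}
            for pid, lat, lon, alt, image, on_course in records
        ],
    }

info = build_course_info(1, [
    ("P111", 35.0000, 135.0000, 10.0, "gate28.jpg", True),
    ("P116", 35.0010, 135.0020, 12.0, "monument32.jpg", False),
])
print([p["id"] for p in info["points"]])
```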
The outline of the operation of the image display system 1 of this embodiment is as follows. First, the image pickup device 3 captures an image at each point on the course in the other space, and the image data and the attached information of each shooting point are transmitted to the Web server 4 and registered. This registration may be performed by the user of the image display device 2 or by another user. When the user performs the registration, the image sensor 6a in the image display device 2 can also be used, in which case the image pickup device 3 is unnecessary. Next, when the user wears the image display device 2 and moves along the course in the real space, the images of the other space registered in the Web server 4 are displayed on the image display device 2 according to the user's position. At that time, when the user designates on a map the desired positions (start point and end point) of the movement in the real space, a course in the real space is determined whose distances and directions approximate those of the course in the other space under a predetermined geometric relationship, and the registered images of the other space are displayed at the moving points in the real space corresponding to the shooting points in the other space.
[Image display device]
FIGS. 2A and 2B are diagrams showing the appearance of the image display device 2 and an example of its display screen. Here the image display device 2 is a transmissive HMD 19. FIG. 2A shows the user 20 wearing the transmissive HMD 19 on the head. In the display unit 12a of the transmissive HMD 19, a semi-transmissive prism 22 is arranged in front of one eye (right eye) 21a, and a reproduced image from a display element 23 at the side is projected onto the prism 22. An image sensor 6a is arranged on the front of the transmissive HMD 19, and a control unit (drive unit) 10a is arranged on its upper part.
FIG. 2B shows an example of the display screen (display unit 12a) when the user 20 looks at the prism 22. The landscape image 25 of the real space ahead (shown by broken lines), seen through the prism 22, is overlaid with the reproduced image 24 of the other space (shown by solid lines) from the display element 23. When the reproduced image 24 is displayed, its display position (display direction) is, for example, the center of the display screen 12a. Furthermore, by comparing the orientation information from the orientation/posture detection units 11a and 11b of the image display device 2 and the image pickup device 3, the corresponding reproduced image 24 can be displayed when the user's line-of-sight direction in the real space matches the shooting direction of the image pickup device 3 in the other space.
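The azimuth comparison just described, showing the reproduced image 24 when the user's line of sight matches the shooting direction, can be sketched as below; the 20° tolerance is an assumed value, not taken from the patent.

```python
# Sketch of the azimuth-match condition: the reproduced image is shown
# when the user's line-of-sight azimuth (orientation/posture unit 11a)
# agrees, within a tolerance, with the azimuth at which the image was
# shot (unit 11b). The tolerance value is an assumption.

def azimuth_matches(user_azimuth_deg, shot_azimuth_deg, tolerance_deg=20.0):
    """True if the two azimuths agree within tolerance, handling 360-degree wraparound."""
    diff = abs(user_azimuth_deg - shot_azimuth_deg) % 360.0
    diff = min(diff, 360.0 - diff)
    return diff <= tolerance_deg

print(azimuth_matches(350.0, 10.0))   # wraps around north: 20 degrees apart
print(azimuth_matches(90.0, 180.0))   # 90 degrees apart: no match
```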
In the transmissive HMD 19 of this example, the prism 22 and the display element 23 are arranged for one eye (right eye) 21a in a monocular configuration, but a binocular configuration in which they are also arranged for the other eye (left eye) 21b is possible. The transmittance of the prism 22 is set appropriately, depending on whether the configuration is monocular or binocular, so that both the landscape image 25 and the reproduced image 24 can be viewed in good balance.
[Course in another space]
FIG. 3 is a diagram showing an example of acquiring an image at each point on a course in the other space. The photographer (the user of the image display device 2 or another user), while moving along the course 26a in the other space (indicated by broken arrows), photographs with the image pickup device 3 the recommended surroundings at each point on the course (scenery that would be memorable if the photographer is the user). The route that allows each recommended object to be photographed while moving without retracing is named the recommended course.
Specifically, at the start point P111 of the recommended course 26a, the landmark gate 28 is photographed, and at point P112, the building 29 is photographed. At the branch point P113, the scenery including the branching roads is photographed. At point P114, the tower 30 is photographed, and at the end point P115 of the course, the torii 31 is photographed. If there is another recommended object at a position off the recommended course 26a (points P111 to P115), for example, the stone monument 32 is photographed at point P116. The image data taken at these points and their attached information (position information, etc.) are stored in the memory 8b as image files and transmitted to the Web server 4. The data at each point is stored with a distinction as to whether or not the point lies on the recommended course 26a.
[Course in the real space]
FIG. 4 is a diagram showing an example of setting a course in the real space and displaying images taken in the other space. A course 27a (indicated by solid arrows) is set in the real space, with points P211 to P215 on the course. Off-course routes are indicated by broken lines 39a and 39b. The course-setting procedure is as follows. A user who intends to move through the real space displays a map of the real space on the display unit 12a of the image display device 2 using a map application or the like, and designates the start point P211 and the end point P215 of the course. As a result, the start point P211 is associated with the start point P111 of the other space shown in FIG. 3, and the end point P215 with the point P115 of the other space. Next, a course that can be traveled in the real space is determined so as to approximately satisfy a predetermined geometric relationship (bearing offset and scale) with the course in the other space. Naturally, since the conditions of roads and buildings differ between the real space and the other space, the shape of the traversable course in the real space does not exactly match the shape of the course in the other space; therefore, a route and intermediate points with a similar geometric relationship are set. The details of the course setting are described later with the PAD. As a result, the intermediate points P212 to P214 in the real space are determined so as to correspond to the intermediate points P112 to P114 of the course in the other space shown in FIG. 3, and the course 27a in the real space connecting them is set.
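One way to realize the "predetermined geometric relationship (bearing offset and scale)" described above is a similarity transform anchored on the start/end point pairs, sketched below on local planar coordinates. This is an illustrative reading, not the patent's specified algorithm; the coordinates and values are assumed.

```python
# Sketch: map other-space course points into the real space with the
# similarity transform (rotation = bearing offset, plus uniform scale)
# that carries the other-space start/end onto the user-designated
# real-space start/end. Intermediate points mapped this way would then
# be snapped to the nearest traversable road to obtain P212 to P214.
# Planar (x, y) coordinates and all values are illustrative.

def fit_similarity(src_start, src_end, dst_start, dst_end):
    """Return a function mapping src-space (x, y) points to dst-space points."""
    s0, s1 = complex(*src_start), complex(*src_end)
    d0, d1 = complex(*dst_start), complex(*dst_end)
    t = (d1 - d0) / (s1 - s0)  # encodes both the bearing offset and the scale
    def transform(point):
        q = d0 + (complex(*point) - s0) * t
        return (q.real, q.imag)
    return transform

# A 1000 m eastward course mapped onto a 500 m northward one:
to_real = fit_similarity((0.0, 0.0), (1000.0, 0.0), (0.0, 0.0), (0.0, 500.0))
print(to_real((500.0, 0.0)))  # the midpoint maps to the real-space midpoint
```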
When the user moves along the course 27a in the real space and approaches each of the set points P211 to P215, the captured images 24a to 24e of the corresponding recommended objects (structures and branch-point scenery) taken in the other space are displayed on the display unit 12a of the image display device 2. By detecting the user's line-of-sight direction with the orientation/posture detection unit 11a and displaying the registered recommended objects when their direction lies in the user's line of sight, the sense of presence can be further enhanced. Among these points, point P213 corresponds to point P113, a branch point in the other space. When the user approaches the branch point P213, not only the corresponding captured image 24c but also a branch guide image, composited with a pre-registered arrow 24c' indicating the branch direction, is displayed.
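The position-triggered display at points P211 to P215 can be sketched as a proximity check against the registered shooting points. The 30 m threshold and the use of the haversine distance are assumptions for illustration; the patent only states that images appear when the user approaches a point.

```python
import math

# Sketch of the proximity trigger: when the user's GPS position (from
# position detection unit 7a) comes within a threshold distance of a
# registered point, the image for that point is selected for display.
# The threshold and the sample coordinates are assumed values.

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two lat/lon points."""
    r = 6371000.0  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def image_to_show(user_lat, user_lon, points, threshold_m=30.0):
    """Return the image of the nearest registered point within the threshold."""
    best = None
    for name, lat, lon, image in points:
        d = haversine_m(user_lat, user_lon, lat, lon)
        if d <= threshold_m and (best is None or d < best[0]):
            best = (d, image)
    return best[1] if best else None

points = [("P212", 35.6800, 139.7600, "24b.jpg"), ("P213", 35.6810, 139.7610, "24c.jpg")]
print(image_to_show(35.68001, 139.76001, points))
```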
The intermediate points P112 and P114 on the other-space course 26a are not branch points, but at the corresponding points P212 and P214 of the real-space course 27a, the user must change course (turn right) at the immediately preceding intersection. Therefore, the position detection unit 7a of the image display device 2 detects that the user has reached that intersection, and the audio output unit 13 instructs the user by voice to turn right. This allows the user to move without departing from the course 27a set in the real space. If the user leaves the recommended course and follows the broken-line courses 39a and 39b, the image 24f of the stone monument is displayed at the position of point P216.
As a result, when the user walks along the course 27a in the real space, landscape images of the other space, a place different from the real space, can be viewed according to the user's position, and the other space can be experienced with a sense of presence. Furthermore, at a branch point the branch guide view composited into the landscape image can be seen, which has the effect of letting the user preview the route at the branch point.
[Processing PADs and course information]
FIG. 5A is a diagram showing the processing steps for registering course information of the other space, expressed as a PAD (Problem Analysis Diagram) in which the processing structure is layered. The control unit 10b of the imaging device 3 reads the captured image data and its accessory information from the memory 8b and transmits them to the Web server 4 via the communication interface 9b. Since a plurality of courses are registered in the Web server 4, a course number is assigned to each of them.
In step S11, the image data of the other space captured by the imaging device 3 and its accessory information (position information and the like) are registered as image files. Image files that are not on the recommended course may be included at this time (for example, the data of the stone monument 32 in FIG. 3).
In step S12, the following processing is repeated for each point (P111 to P115) on the recommended course. In step S12a, the image files to be registered for the designated point (images 28 to 31 at P111 to P115 in FIG. 3) are selected. In step S12b, point information is set from the accessory information of the selected image files. If shooting-position information is available, it is used as the point information; if not, the position is designated from a map application or the like and registered as the point information. In addition, related figures and symbols to be displayed superimposed on the image, such as a branch-guide arrow, are optionally registered as part of the accessory information. In the examples of FIGS. 3 and 4, the arrow symbol 24c', which serves as a route guide at the branch point P213 (P113), is registered as accessory information.
FIG. 5B is a diagram showing the contents of the course information registered by the process of FIG. 5A. The course information 44 includes a course number, course point information, and image file information. The course number identifies the course. The course point information is the position information (latitude, longitude, altitude) of the points from the start point to the end point of the course; the altitude may be omitted if it is not covered by the position detection function. The image file information is a list of (or links to) the image files 45 of the images captured at points on or around the course. Each image file 45 referred to by the image file information includes image data 16 and its accessory information 17 (generally also called metadata). The accessory information 17 includes the shooting position, related figures and symbols, and the like.
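As one way to make the structure of FIG. 5B concrete, the course information can be sketched as nested records. The following Python rendering is purely illustrative; the field names and example values are assumptions, not taken from the specification:

```python
# Hypothetical sketch of course information 44 (FIG. 5B); names are illustrative.
course_info = {
    "course_number": 1,                  # identifies the course
    "course_points": [                   # positions from start point to end point
        {"lat": 35.6812, "lon": 139.7671, "alt": 12.0},
        {"lat": 35.6840, "lon": 139.7700, "alt": None},  # altitude may be omitted
    ],
    "image_files": ["P111.jpg", "P115.jpg"],  # list of (or links to) image files 45
}

# Each image file 45 holds image data 16 plus accessory information 17 (metadata).
image_file = {
    "image_data": b"\xff\xd8",           # captured still image or moving image (bytes)
    "metadata": {
        "shoot_position": {"lat": 35.6812, "lon": 139.7671},
        "overlay_symbols": ["turn_right_arrow"],  # e.g. the branch-guide arrow 24c'
    },
}
```

A record of this shape is what the image display device 2 would fetch from the Web server 4 when a course number is selected.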
FIG. 6 is a diagram showing the processing steps for moving along a course in the real space and displaying the registered images, likewise expressed as a layered PAD. The control unit 10a of the image display device 2 takes in the image data and its accessory information from the Web server 4 via the communication interface 9a, sets a course in the real space, and displays the corresponding images according to the user's position.
Step S21 designates the course in the other space whose images are to be displayed. When the user inputs a course number (for example, No. 1) in step S21a, the corresponding course information and image files are fetched from the Web server 4 in step S21b.
In step S22, the user designates a start point and a goal point in the real space. The positions may be designated from a map application, or the user's current position obtained from the position detection unit 7a of the image display device 2 may be used.
In step S23, the following processing is repeated for each section defined between adjacent points on the course. That is, at the start of a section its end point (target point) is set; when the user reaches the section end point, the registered image is displayed and the end point of the next section is set.
In step S23a, in order to set a real-space course that approximates the course in the other space, the section end point is determined based on the geometric relationship between the two courses. To this end, the straight-line azimuth and distance from the section start point Ps (initially the start point of the course) to the course end point Pg (the goal point) are compared between the other space and the real space, and the azimuth difference and scale between the two are obtained. For example, suppose that in the other space the straight-line azimuth from the section start point Ps to the goal point Pg is east and the distance is 5 km, while in the real space the straight-line azimuth from the section start point Ps' to the goal point Pg' is north and the distance is 2 km. In this case, the real space is rotated 90 degrees counterclockwise with respect to the other space, and the scale is 0.4. Using this geometric relationship (azimuth difference and scale), the section end point Pe' in the real space corresponding to the section end point Pe in the other space is obtained. However, the position of the section end point Pe' obtained in this way is not necessarily a place through which the user can move in the real space; it may, for example, fall within an area occupied by a building. Therefore, a movable position (for example, on a road) close to the position Pe' is obtained with reference to a map application of the real space and set as the target section end point Pe". The route from the section start point Ps' to the section end point Pe" is then determined with reference to the map application, and the real-space course is set.
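The azimuth-difference-and-scale computation of step S23a amounts to a similarity transform. A minimal sketch on a local flat plane (a simplification; actual latitude/longitude would first need projecting to local metric coordinates, and the function name is ours, not the specification's):

```python
import math

def map_section_end(ps, pg, ps_r, pg_r, pe):
    """Map section end Pe (other space) to Pe' (real space) using the
    rotation and scale that carry the baseline Ps->Pg onto Ps'->Pg'.
    All points are (x, y) tuples in meters on a local flat plane."""
    vx, vy = pg[0] - ps[0], pg[1] - ps[1]             # other-space baseline
    wx, wy = pg_r[0] - ps_r[0], pg_r[1] - ps_r[1]     # real-space baseline
    scale = math.hypot(wx, wy) / math.hypot(vx, vy)   # e.g. 2 km / 5 km = 0.4
    dtheta = math.atan2(wy, wx) - math.atan2(vy, vx)  # azimuth difference
    ex, ey = pe[0] - ps[0], pe[1] - ps[1]             # Pe relative to Ps
    c, s = math.cos(dtheta), math.sin(dtheta)         # rotate, scale, then shift
    return (ps_r[0] + scale * (c * ex - s * ey),
            ps_r[1] + scale * (s * ex + c * ey))

# The worked example from the text: east / 5 km versus north / 2 km gives a
# 90-degree counterclockwise rotation and a scale of 0.4, so a point 2.5 km
# due east of Ps maps to 1 km due north of Ps'.
pe_real = map_section_end((0, 0), (5000, 0), (0, 0), (0, 2000), (2500, 0))
```

The result Pe' would then be snapped to the nearest movable position Pe" using the map application, as described above.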
In step S23b, whether the user has approached the section end point Pe" is determined. The user's position in the real space is detected by the position detection unit 7a. When the detected position comes within a neighborhood of the section end point Pe", for example within 10% of the section distance from Pe", the user is judged to have approached the section end point.
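The 10% proximity test of step S23b reduces to a distance comparison; a minimal sketch (flat-plane coordinates, function name ours):

```python
import math

def approached_section_end(pos, pe, section_distance, fraction=0.10):
    """True once the detected position is within the given fraction
    (10% by default) of the section distance from the section end Pe."""
    return math.dist(pos, pe) <= fraction * section_distance
```

For a 1 km section, the registered image would thus be shown once the user comes within 100 m of the section end point.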
When the determination in step S23b finds that the user has approached the section end point, the registered image of the corresponding point in the other space is displayed on the display unit 12a in step S23c. Thereafter, in step S23d, whether the section end point Pe" on the real-space course has been reached is determined repeatedly. When the section is judged to have ended, that position is taken as the start point of the next section in the real space, and the process returns to step S23.
As a condition for the section-end determination in step S23d, if there is a branch point on the map application and the road beyond that branch point is not a dead end, the branch point is regarded as the end point of the section; point P213 in FIG. 4 corresponds to this case. Other section-end conditions include the case where the section distance in the real space reaches a predetermined distance even without a branch point, and the case where the point is one designated by the user; points P212 and P214 in FIG. 4 correspond to these cases.
In automatically creating the real-space course 27a, a simulated run may be performed on the map application. Furthermore, in order to create a real-space course 27a that suits the user's preference from the other-space course 26a, the real-space course automatically created by the process of FIG. 6 may be modified manually before or during the movement. For example, upon arriving at point P213, a round-trip course 39a to point P216 outside the automatically created course 27a may be added.
Furthermore, when the azimuth difference between the traveling direction on the other-space course 26a and the traveling direction on the real-space course 27a becomes equal to or greater than a predetermined value, the audio output unit 13 may notify the user by voice, or a message may be displayed on the display unit 12a. This allows the user to move along the course while being aware that the difference between the other-space course and the real-space course has become large because of geographical constraints of the real space.
Furthermore, when it is detected that the user's position has deviated from the real-space course 27a by a predetermined value or more, the audio output unit 13 may emit an alarm sound, or a warning message may be displayed on the display unit 12a. This allows the user to travel the real-space course 27a without mistakes.
In the above description, the images captured by the imaging device 3 are registered in the Web server 4, and the image display device 2 takes in the registered images from the Web server 4 and displays them. When the user performs both capture and display, however, the images can be captured and stored by the image display device 2 itself, so the imaging device 3 and the Web server 4 can be omitted.
According to this embodiment, when the user moves along a course in the real space, the scenery at the corresponding positions in the other space can be viewed as still images or moving images, so the user can experience the other space with a sense of presence, as if actually moving through it. In addition, the real-space course along which the user moves is not only similar to the course in the other space but is also set flexibly in accordance with geographical conditions such as the roads of the real space, making it highly practical.
The second embodiment describes the setting of a real-space course in a case where the other-space course has differences in altitude while the real-space course is flat.
FIG. 7 is a diagram showing an example of an other-space course with differences in altitude. The other space is undulating; the differences in altitude are represented by contour lines 34 (shown as dotted lines), each labeled with its altitude [m]. The other-space course 26b (indicated by a solid arrow) runs from the start point P121 to the end point P126 with differences in altitude. Point P122 has the highest altitude and point P123 the lowest, while points P124 and P125 lie in a highland area. Accordingly, the walking speed decreases in the uphill sections (points P121 to P122 and points P123 to P124) and increases in the downhill sections (points P122 to P123 and points P125 to P126). There is a hut 33 at point P124, and its captured image is registered. The same applies to the objects captured at the other points, but their description is omitted here.
The image data captured at each point are registered in the Web server 4 as in the first embodiment, but as shown in the accessory information 17 of FIG. 5B, the altitude is added to the shooting-position information obtained from the position detection unit 7b at the time of shooting. The altitude can also be measured with an altitude sensor, or the altitude value at the position can be read from map information.
FIG. 8 shows an example of a flat real-space course 27b with no differences in altitude, associated with the other-space course 26b of FIG. 7. Since the real space has no differences in altitude, the contour lines are omitted. The real-space course 27b runs from the start point P221 to the end point P226, and the azimuth of each section between adjacent points corresponds, with a predetermined azimuth difference, to the azimuth of the corresponding section of the other-space course 26b. The distance of each real-space section, however, is set in consideration of the change in walking speed caused by the differences in altitude in the other space. That is, the position information and altitude information of each point of the other-space course 26b are acquired from the Web server 4, and the walking time of each section is estimated. Intermediate points are then set on the course 27b so that the walking time of each section of the real-space course 27b corresponds, at a predetermined scale, to the walking time of the corresponding section of the other-space course 26b.
Specifically, the sections corresponding to uphill sections (points P221 to P222 and points P223 to P224) are set longer, and the sections corresponding to downhill sections (points P222 to P223 and points P225 to P226) are set shorter. When the user approaches a point of a section (for example, point P224), the position detection unit 7a detects this, and the registered other-space image (for example, image 24g) is displayed on the display unit 12a as in the first embodiment.
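The walking-time matching can be sketched as follows. The specification only requires some estimate of walking time per section; the linear slowdown/speedup speed model below is our own illustration (a more elaborate choice, such as Tobler's hiking function, would slot in the same way):

```python
def walking_speed(grade, flat_speed=1.25):
    """Assumed walking speed in m/s as a function of grade (rise/run).
    Illustrative model only: slower uphill, faster downhill, floored at
    0.3 m/s so very steep climbs stay positive."""
    return max(0.3, flat_speed * (1.0 - 2.0 * grade))

def flat_section_length(other_distance, other_rise, scale=1.0, flat_speed=1.25):
    """Length to give the corresponding flat real-space section so that
    walking it takes (scale x) the estimated time on the sloped section."""
    grade = other_rise / other_distance
    time_other = other_distance / walking_speed(grade, flat_speed)
    return flat_speed * time_other * scale

uphill = flat_section_length(1000, 100)     # uphill section is stretched
downhill = flat_section_length(1000, -100)  # downhill section is shortened
```

With this model, a 1 km section climbing 100 m maps to a flat section longer than 1 km, and the corresponding descent maps to one shorter than 1 km, canceling the time difference as described above.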
As described above, in the second embodiment the differences in altitude of the other space are taken into account, and the distances between the points in the real space are corrected so as to cancel the differences in the user's travel time caused by those altitude differences. The user can thus move through the real space with a bodily experience, in terms of required time and fatigue, that is closer to the course in the other space.
Here, it has been assumed that the other-space course has differences in altitude while the real-space course is flat. Conversely, when the other-space course is flat and the real-space course has differences in altitude, or when the altitude profiles of the two courses otherwise differ, the real-space course can likewise be set in consideration of the walking time.
The third embodiment describes an image display device 2 with which a user running laps of a circuit course in the real space can run while viewing images of a course in another space. Here, the case where a smartphone 40 is used as the image display device 2 is described.
FIG. 9 is a diagram showing the other-space course 26c. Here, a course used for a road race is assumed, starting at point P131 and finishing at point P135. A building 35 stands at the intermediate point P132, and its captured image is registered in the Web server 4.
FIG. 10 is a diagram showing the real-space course 27c. Here, the course laps a track for athletics. The shape of the real-space course 27c is unrelated to the shape of the other-space course 26c; points are instead set at positions where the traveled distances correspond. For example, the start is at point P231 on the start line 38, and the goal is the same point P235 after two laps of the track. The scale is obtained from the ratio of the total lengths of the other-space course 26c and the real-space course 27c, and each intermediate passing point on the real-space course 27c is automatically determined as the position whose distance from the start equals the distance from the start of the other-space course 26c to the corresponding passing point multiplied by this scale. With this distance-calculation method, the image display device 2 itself must measure the user's traveled distance on the real-space course 27c. An alternative method is to install beacons at the passing-point positions on the real-space course 27c and detect arrival at a passing point from the signals received from the beacons.
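The distance-scaled placement of the passing points can be sketched directly; the numbers below are illustrative, not from the specification:

```python
def place_passing_points(other_point_dists, other_total, real_total):
    """Place each passing point on the real circuit at the distance from
    the start given by its other-space distance from the start times the
    course-length ratio (real_total / other_total)."""
    scale = real_total / other_total
    return [d * scale for d in other_point_dists]

# Illustrative: a 10 km other-space course mapped onto two laps of a
# 400 m track (800 m in total), with evenly spaced passing points.
points_m = place_passing_points([0, 2500, 5000, 7500, 10000], 10000, 800)
```

Each resulting distance is then either matched against the device's own traveled-distance measurement or marked by a beacon, as described next.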
Here, the latter method using beacons is described. Assuming a race with multiple runners, beacons 36a to 36d are placed at the start/goal and at each passing point on the real-space course 27c so that the position information can be measured accurately. The range within which the signal from a beacon can be received at or above a certain threshold is indicated by a broken line 37. The user starts at point P231 (beacon 36a) and proceeds counterclockwise, passes point P232 (beacon 36c), and completes one lap at the same point as the start (beacon 36a). The user then passes point P233 (beacon 36b) and point P234 (beacon 36d), and finishes after two laps at point P235 (beacon 36a), where the goal line 38 is drawn.
FIG. 11 is a diagram showing a user wearing the image display device. Here, the smartphone 40 serving as the image display device 2 is attached to the body of the user 20 with a fixture 41 so that the display screen can be viewed hands-free. Using its Bluetooth (registered trademark) function as the position detection unit 7a, the smartphone 40 detects the signals emitted by the beacons 36a to 36d and thereby knows the current position. When the track is lapped multiple times as in this example, which of the points P231 to P235 has been reached can be determined by combining the beacon with the number of times its signal has been received (that is, the lap count).
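Combining the beacon identity with the reception count can be sketched as tracking progress along the expected beacon sequence. Beacon identifiers and class names below are ours, for illustration:

```python
# Expected beacon order for the two-lap course of FIG. 10. The crossing of
# beacon 36a after one lap is not a named point, hence None.
EXPECTED = [("36a", "P231"), ("36c", "P232"), ("36a", None),
            ("36b", "P233"), ("36d", "P234"), ("36a", "P235")]

class LapTracker:
    """Resolve which course point a beacon reception corresponds to by
    combining the beacon id with progress along the expected sequence
    (i.e. with how many times each beacon has already been heard)."""
    def __init__(self, expected):
        self.expected = expected
        self.index = 0

    def on_beacon(self, beacon_id):
        if self.index < len(self.expected) and self.expected[self.index][0] == beacon_id:
            point = self.expected[self.index][1]
            self.index += 1
            return point          # None for the unnamed lap crossing
        return None               # repeated or out-of-order signal: ignored

tracker = LapTracker(EXPECTED)
seen = [tracker.on_beacon(b) for b in ["36a", "36c", "36a", "36b", "36d", "36a"]]
```

In this sketch the third reception of beacon 36a resolves to the goal point P235, matching the two-lap course layout.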
An application that associates the points P231 to P235 on the real-space course 27c with the points P131 to P135 on the other-space course 26c is installed on the smartphone 40. The image data of the other-space course 26c are either downloaded in advance from the Web server 4 via the communication network 5 or kept in a state where they can be fetched from the Web server 4 at any time in response to a request from the application.
The case where many users race on the real-space course 27c is now described concretely. After launching the application on the smartphone 40, a first group of runners waits inside the receivable range 37 of the signal from beacon 36a, just before the start point P231, and starts the race at 00 seconds of a minute. After this start, a second group of runners likewise waits just before point P231. This scheme enables a wave start in which runners set off in groups of several at a time. The starter may give the start signal according to the posted time of 00 seconds of each minute, or may simply watch that no runner starts early.
When each runner passes the real-space points P233 and P234 and approaches the goal point P235, the smartphone 40 records as the goal time the time at which the signal received from beacon 36a is at its maximum. The goal time of each runner is transmitted, for example via the wireless communication network 5, to a tallying device (not shown) of the race organizer and is used to tally the race rankings.
FIGS. 12A to 12C are diagrams showing examples of the display screen of the smartphone 40. Shown here is the display when the user approaches the real-space point P232 corresponding to the other-space point P132. On the real-space course 27c the user goes straight near point P232, but on the other-space course 26c the course turns right at point P132. As the user runs, the display on the display unit 12a changes from the image 24h of FIG. 12A to the image 24i of FIG. 12B, making the right turn through the other space. These images include the buildings 35a and 35b and the roads 42a and 42b captured in the other space. Although the course turns right there in the other space, in the real space the direction of travel is always displayed as the image seen straight ahead. In other words, because the correspondence between the traveling directions in the other space and the real space is not fixed, the shape of the real-space course can be set freely with respect to the other space. A fixed correspondence of traveling directions would mean, for example, a state in which the directions are always offset by 30 degrees clockwise.
FIG. 12C shows a display example reflecting the azimuth and posture of the user's front direction (the azimuth and elevation angles of the user's line of sight). It shows the display image 24j in the case where, while passing the image 35b of the other-space building in FIG. 12B, the user twists the upper body to the left and directs the line of sight downward to look at it. In ordinary smartphone image display, even for a wide-viewing-angle image such as a 360° image, only the portion within the angle of view corresponding to the smartphone's orientation, which matches the traveling direction, is displayed. Here, however, the orientation/posture detection unit 11a of the smartphone 40 detects the movement of the user's upper body, so the portion of the image within the angle of view in the line-of-sight direction can be displayed. As a result, in the display image 24j, the image 35c of the building is displayed in front, with the image 42c of the road below it.
According to the third embodiment, the present invention can be applied to competitions and the like in which the same course is lapped while the scenery of another space is viewed. In particular, although runners tend to become bored in long-distance runs with many laps, being able to run while occasionally viewing different scenery has the effect of relieving that boredom.
FIG. 11 shows an example in which the smartphone 40 is attached to the body with the fixture 41 so that it can be viewed hands-free. Alternatively, the smartphone 40 may normally be kept in a pocket of a rucksack or the like and set to emit an alert sound when a point is approached; on hearing the alert, the user knows that the next point is near and can take out the smartphone 40 and look at its screen.
In the third embodiment, moving images as well as still images can be displayed on the display unit. Although the above description uses beacons, at a minimum it suffices to play the moving image only within the range in which a beacon is being received. If display outside the beacon reception range is also desired, the user's running speed between adjacent beacons may be calculated and the playback speed of the recorded moving image changed accordingly.
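One way to realize the playback-speed adjustment between adjacent beacons is to play each recorded segment at the rate that makes it end just as the runner reaches the next beacon; this is our reading of "changed accordingly", sketched minimally:

```python
def playback_rate(recorded_segment_s, runner_segment_s):
    """Rate factor for the video recorded between two adjacent beacons:
    above 1.0 for a runner faster than the camera carrier, below 1.0 for
    a slower one, so playback and running stay aligned per segment."""
    return recorded_segment_s / runner_segment_s
```

For example, a segment filmed over 120 s and covered by the runner in 60 s would play at double speed.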
When the real space is a wide circuit course and the passing points can be determined by the position detection unit alone without beacons, the moving image can be played over the entire course by aligning its position on the basis of the position coordinates acquired from the position detection unit. In that case, position information may be added to the recorded moving image at fixed time intervals or at the moment each point is passed.
 Each of the embodiments above describes a user moving on foot or by running, but the user may instead travel by bicycle, by a car the user is not driving, or by public transportation. The user may also change the intermediate points or the goal of the course during travel. When an image is displayed on the image display device while the user moves within each section, the image may be one, such as scenery from a vehicle window, photographed in the different space in advance and registered so as to be accessible from the image display device.
 Although several embodiments have been described above, the present invention is not limited to them and includes various modifications. For example, the embodiments above are described in detail in order to explain the invention clearly, and the invention is not necessarily limited to configurations having all of the described elements. It is also possible to add the configuration of one embodiment to that of another, and to add, delete, or replace parts of each embodiment with other configurations.
 1: Image display system, 2: Image display device, 3: Imaging device, 4: Web server, 5: Communication network, 6a, 6b: Image sensor, 7a, 7b: Position detection unit, 8a, 8b, 8c: Memory, 9a, 9b, 9c: Communication interface, 10a, 10b, 10c: Control unit, 11a, 11b: Azimuth/attitude detection unit, 12a, 12b: Display unit, 13: Audio output unit, 19: Head-mounted display (HMD), 20: User, 40: Smartphone.

Claims (12)

  1.  An image display device that is worn by a user and displays images of a different space whose location differs from the real space in which the user is actually located, the image display device comprising:
     a position detection unit that detects the position of the user;
     a communication interface that acquires, from outside, image data of the different space and course information including shooting positions;
     a memory that stores the image data and the course information acquired by the communication interface;
     a display unit that displays the images of the different space stored in the memory; and
     a control unit that controls display of the images of the different space on the display unit,
     wherein the control unit:
     associates, on the basis of the course information of the different space, a course consisting of a plurality of points in the different space with a course consisting of a plurality of points in the real space; and
     when the user moves along the course in the real space, displays the image of the associated point in the different space in accordance with the position of the user.
  2.  The image display device according to claim 1, wherein the control unit sets the plurality of points on the courses in the different space and the real space such that the relationship between the distances of mutually corresponding point-to-point sections forms an approximately constant scale.
  3.  The image display device according to claim 2, wherein the control unit sets the plurality of points on the courses in the different space and the real space such that the relationship between the traveling directions of mutually corresponding sections forms an approximately constant azimuth difference.
  4.  The image display device according to claim 2, wherein, when at least one of the different space and the real space has a course with altitude differences, the control unit uses, as the section distances for setting the plurality of points on the course, distances corrected so as to cancel the difference in the user's travel time caused by the altitude difference.
  5.  The image display device according to claim 3, further comprising an audio output unit that outputs sound,
     wherein, when the azimuth difference between the traveling direction of the real-space course along which the user is moving and the traveling direction of the corresponding different-space course reaches or exceeds a predetermined value, the control unit notifies the user by sound through the audio output unit or displays a message on the display unit.
  6.  The image display device according to claim 3, further comprising an audio output unit that outputs sound,
     wherein, when the control unit detects that the user's position has deviated from the course set in the real space by a predetermined value or more, the control unit causes the audio output unit to emit an alarm sound or the display unit to display a warning message.
  7.  The image display device according to claim 2, wherein, when the real-space course along which the user moves is a lap course that goes around a predetermined course a plurality of times, the control unit sets the plurality of points in the real space corresponding to the plurality of points in the different space in accordance with the distance the user has traveled on the lap course.
  8.  The image display device according to claim 7, wherein beacons that emit predetermined signals are installed at the plurality of points set on the lap course, and the distance traveled by the user is calculated by receiving the signals emitted by the beacons at the position detection unit.
  9.  The image display device according to claim 1, further comprising an azimuth/attitude detection unit that detects the azimuth and attitude of the user's front direction,
     wherein the image display device is a transmissive head-mounted display worn on the user's head,
     an image of the different space is displayed on the display unit superimposed on the real-space scenery in front of the user, and
     the control unit displays the corresponding image when the user's line-of-sight direction in the real space, detected by the azimuth/attitude detection unit, matches the azimuth of the shooting direction in the different space.
  10.  The image display device according to claim 1, further comprising an image sensor that captures images at points in the different space,
     wherein the shooting positions in the different space are detected by the position detection unit,
     the image data captured by the image sensor and the course information are stored in the memory, and
     when the user moves along the course in the real space, the control unit reads the image data and position information from the memory and displays the corresponding image on the display unit.
  11.  An image display system in which an image display device, an imaging device, and a Web server are connected via a communication network, and images of a different space whose location differs from the real space in which a user is actually located are displayed on the image display device worn by the user,
     wherein the imaging device comprises:
     an image sensor that captures images at a plurality of points on a course in the different space; and
     a position detection unit that detects the position of each point at which an image is captured,
     and transmits the captured image data and shooting-position information to the Web server;
     the Web server stores the image data of the different space and the shooting positions received from the imaging device as course information; and
     the image display device comprises:
     a position detection unit that detects the position of the user;
     a memory that stores the image data of the different space acquired from the Web server and the course information including the shooting positions;
     a display unit that displays the images of the different space stored in the memory; and
     a control unit that controls display of the images of the different space on the display unit,
     wherein the control unit:
     associates, on the basis of the course information of the different space, a course consisting of a plurality of points in the different space with a course consisting of a plurality of points in the real space; and
     when the user moves along the course in the real space, displays the image of the associated point in the different space in accordance with the position of the user.
  12.  An image display method for displaying images of a different space whose location differs from the real space in which a user is actually located, the method comprising the steps of:
     capturing images at a plurality of points on a course in the different space;
     detecting the position of each point at which an image is captured;
     storing the image data of the different space and the shooting positions as course information;
     associating, on the basis of the course information of the different space, a course consisting of a plurality of points in the different space with a course consisting of a plurality of points in the real space;
     detecting the position of the user moving along the course in the real space; and
     displaying, in accordance with the user's movement position, the image of the associated point in the different space.
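As a rough illustration of the constant-scale correspondence recited in claim 2 (all names are hypothetical, not from the patent): if the points of the different-space course are described by their cumulative distances along that course, a single scale factor maps them onto positions along the real-space course.

```python
def map_points_to_real_course(other_cumdist_m, real_course_length_m):
    """Map points on the different-space course onto the real-space course.

    other_cumdist_m:      cumulative distances of the points along the
                          different-space course (metres), starting at 0
    real_course_length_m: total length of the real-space course (metres)
    Returns the cumulative distances along the real course at which the
    corresponding images should be shown, using one constant scale factor,
    so every section's distance ratio between the two courses is the same.
    """
    scale = real_course_length_m / other_cumdist_m[-1]
    return [d * scale for d in other_cumdist_m]
```

For example, a 5 km different-space course with points at 0 m, 1000 m, 2500 m, and 5000 m maps onto a 2 km real course at 0 m, 400 m, 1000 m, and 2000 m.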
PCT/JP2019/019112 2019-05-14 2019-05-14 Image display device, image display system, and image display method WO2020230262A1 (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
PCT/JP2019/019112 WO2020230262A1 (en) 2019-05-14 2019-05-14 Image display device, image display system, and image display method
JP2021519096A JP7295224B2 (en) 2019-05-14 2019-05-14 Image display device, image display system and image display method
JP2023094479A JP2023130347A (en) 2019-05-14 2023-06-08 Image display device, image display system, and image display method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/JP2019/019112 WO2020230262A1 (en) 2019-05-14 2019-05-14 Image display device, image display system, and image display method

Publications (1)

Publication Number Publication Date
WO2020230262A1 true WO2020230262A1 (en) 2020-11-19

Family

ID=73289933

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2019/019112 WO2020230262A1 (en) 2019-05-14 2019-05-14 Image display device, image display system, and image display method

Country Status (2)

Country Link
JP (2) JP7295224B2 (en)
WO (1) WO2020230262A1 (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH11283046A (en) * 1998-03-30 1999-10-15 Kubota Corp Image data generator
JP2017015593A (en) * 2015-07-02 2017-01-19 富士通株式会社 Display control method, display control program, information processing terminal, and head-mounted display

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH11283046A (en) * 1998-03-30 1999-10-15 Kubota Corp Image data generator
JP2017015593A (en) * 2015-07-02 2017-01-19 富士通株式会社 Display control method, display control program, information processing terminal, and head-mounted display

Also Published As

Publication number Publication date
JP7295224B2 (en) 2023-06-20
JPWO2020230262A1 (en) 2020-11-19
JP2023130347A (en) 2023-09-20

Similar Documents

Publication Publication Date Title
US11257582B2 (en) Methods and apparatus for virtual competition
US8842003B2 (en) GPS-based location and messaging system and method
KR101748401B1 (en) Method for controlling virtual reality attraction and system thereof
US9569898B2 (en) Wearable display system that displays a guide for a user performing a workout
JP6683134B2 (en) Information processing apparatus, information processing method, and program
JP6809539B2 (en) Information processing equipment, information processing methods, and programs
US9175961B2 (en) Information processing apparatus, information processing method, program, and recording medium
JP2011517979A (en) System for simulating events in real environment
JP6345889B2 (en) Unmanned aircraft evacuation system, unmanned aircraft evacuation method, and program
US11710422B2 (en) Driving analysis and instruction device
KR20170138752A (en) System for providing virtual drone stadium using augmented reality and method thereof
US20160320203A1 (en) Information processing apparatus, information processing method, program, and recording medium
US11972450B2 (en) Spectator and participant system and method for displaying different views of an event
JP2003204481A (en) Image-information distribution system
WO2015033446A1 (en) Running assistance system and head mount display device used in same
WO2020230262A1 (en) Image display device, image display system, and image display method
JP2005034529A (en) Method and system for assisting golf player in play
JPH06318231A (en) Information adding device for image
US20170080330A1 (en) Location-based activity
KR101857104B1 (en) Contents service system for pictures of playing space
JP2021153221A (en) Drone captured video providing system, program of drone captured video providing system, moving body captured video providing system, program of moving body captured video providing system
JP7362885B1 (en) Golf support system, terminal device, golf support method, and golf support program
KR102056164B1 (en) Playback Method of Drone Moving Picture based on Position
US20240046564A1 (en) Simulated Consistency Check for Points of Interest on Three-Dimensional Maps
WO2023218627A1 (en) Golf assistance system, golf assistance method, and golf assistance program

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19929106

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2021519096

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19929106

Country of ref document: EP

Kind code of ref document: A1