WO2022172335A1 - Virtual guide display device, virtual guide display system, and virtual guide display method

Info

Publication number
WO2022172335A1
Authority
WO
WIPO (PCT)
Prior art keywords
virtual guide
display device
virtual
guide display
viewpoint
Prior art date
Application number
PCT/JP2021/004802
Other languages
French (fr)
Japanese (ja)
Inventor
尚久 高見澤
康宣 橋本
治 川前
義憲 岡田
Original Assignee
Maxell, Ltd. (マクセル株式会社)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Maxell, Ltd. (マクセル株式会社)
Priority to PCT/JP2021/004802
Publication of WO2022172335A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0481 Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 19/00 Manipulating 3D models or images for computer graphics

Definitions

  • the present invention relates to a virtual guide display device, a virtual guide display system, and a virtual guide display method for specifying a position in a three-dimensional space.
  • Augmented Reality (AR) technology adds digital information to the real world and reflects and augments virtual objects created with CG (Computer Graphics) or the like in the virtual space.
  • Virtual guide display devices and virtual guide display systems that can easily handle virtual objects while recognizing a three-dimensional real space are widely used.
  • applications such as remote work support are also expanding, so there are more opportunities than ever for users who are not familiar with the device to specify a position in a three-dimensional space, and improvement in usability is desired.
  • Position specification in a three-dimensional space is mainly performed when specifying an object (physical object or virtual object) that already exists in the space, or arranging a virtual object at a specified position.
  • Patent Document 1 discloses an information processing system comprising a plurality of virtual guide display devices communicably connected to a head-mounted display (HMD), with display means for displaying an image of a virtual space in which virtual object information is shared among the plurality of virtual guide display devices; the system acquires the position and orientation of the HMD in the real space, specifies a position in the virtual space based on the acquired position and orientation information, and controls display of an arrow from the HMD to the specified position (summary excerpt).
  • however, in Patent Document 1, the position where the line of sight hits is specified from only one direction; no consideration is given to specifying a different position behind it, so there is a problem that an arbitrary position in the three-dimensional space cannot be specified.
  • the present invention provides a virtual guide display device, a virtual guide display system, and a virtual guide display method that can easily and accurately designate an arbitrary position in a three-dimensional space.
  • the present invention is a virtual guide display device comprising: a direction sensor that detects the direction in the world coordinate system in which the virtual guide display device is oriented; a position sensor that detects its position; a display; an operation input device that receives an operation specifying an arbitrary point on an image displayed on the display; and a processor connected to each of these. Based on the sensor outputs from the direction sensor and the position sensor, the processor generates a virtual guide passing through a point specified on a first field-of-view image viewed from a first viewpoint and displays the virtual guide on a field-of-view image viewed from another viewpoint.
  • the operation input device receives an operation to specify a position on the virtual guide displayed on the display, and the specified position is converted into a position in the world coordinate system and output.
  • FIG. 1 is a diagram schematically showing the appearance of a configuration example of a virtual guide display device and a virtual guide display system according to the present embodiment;
  • FIG. 2 is a diagram for explaining a position designation operation on a field-of-view image from a first viewpoint in the embodiment shown in FIG. 1;
  • FIG. 3 is a diagram showing a screen when a position is specified on the field-of-view image from a second viewpoint in the embodiment shown in FIG. 1;
  • FIG. 4A is a diagram for explaining a position designation operation on a field-of-view image from a second viewpoint in the embodiment shown in FIG. 1;
  • FIG. 4B is a diagram for explaining another example of a position designation operation on a field-of-view image from a second viewpoint in the embodiment shown in FIG. 1;
  • FIG. 5 is a diagram showing a state in which a position is specified in the embodiment shown in FIG. 1;
  • FIG. 6 is an image diagram for explaining a case where a virtual object is arranged at a designated position in the embodiment shown in FIG. 1;
  • FIGS. 7A and 7B are image diagrams for explaining cases in which an object at a designated position is specified in the embodiment shown in FIG. 1;
  • FIG. 8 is a diagram schematically showing the appearance of another configuration example of the virtual guide display device and the virtual guide display system according to the present embodiment;
  • FIG. 9 is a flowchart for explaining the basic operation of the virtual guide display device and virtual guide display system according to the present embodiment;
  • FIG. 10 is a block diagram showing a configuration example of a virtual guide display device according to the present embodiment;
  • FIGS. 11A to 11G are diagrams for explaining an operation for specifying a range instead of a point when specifying a position on the field-of-view image in the virtual guide display device according to the present embodiment;
  • FIGS. 12A to 12D are diagrams for explaining a position specifying operation when arranging a virtual object at a specified position in a three-dimensional space in the virtual guide display device according to the present embodiment;
  • FIGS. 13A and 13B are diagrams for explaining a case where two field-of-view images viewed from a first viewpoint and a second viewpoint are displayed on two screens in the virtual guide display device according to the present embodiment;
  • FIGS. 14A and 14B are diagrams for explaining position designation by tag selection, in which tags are displayed at intersections between the virtual guide and objects in the field-of-view image in the virtual guide display device according to the present embodiment.
  • FIG. 1 is a diagram schematically showing the appearance of a configuration example of a virtual guide display device and a virtual guide display system according to this embodiment.
  • FIGS. 2, 3, 4A, 4B, and 5 are diagrams for explaining the three-dimensional position designation operation in the embodiment shown in FIG. 1.
  • the HMD (head-mounted display) 100 comprises a camera 104.
  • the angle of view of the camera 104 is assumed to be the same as that of the first viewpoint 101 of the first user 10 wearing the HMD 100.
  • the camera 104 captures the view scenery in the three-dimensional space from the first viewpoint 101 to obtain a first field-of-view image viewed from the first viewpoint 101.
  • here, the first field-of-view image is an image obtained by photographing the field-of-view scenery in the actual three-dimensional space, but it may instead be an image of virtual reality viewed from the first viewpoint 101.
  • when the first user 10 moves to the second viewpoint 102, the HMD 100 captures the scenery with the camera 104 from the second viewpoint 102 of the first user 10 to obtain a second field-of-view image viewed from the second viewpoint 102.
  • the tablet terminal 110 is operated by the second user 20.
  • the tablet terminal 110 receives the field-of-view images obtained by the HMD 100 and viewed from the first viewpoint 101 and the second viewpoint 102 through wireless communication with the HMD 100.
  • the tablet terminal 110 displays the received field-of-view images on the screen 106 of the tablet terminal 110.
  • the server 120, which can process and store a large amount of information at high speed, transmits and receives various types of information such as field-of-view images and virtual guide information to and from the HMD 100 and the tablet terminal 110 through wireless communication.
  • a field-of-view image 140 (see FIG. 2) captured at the first viewpoint 101 is transmitted to the tablet terminal 110.
  • the tablet terminal 110 displays the received view image 140 on the screen 106.
  • the second user 20 designates the position 141 of a point, as viewed from the first viewpoint 101, on the displayed field-of-view image 140.
  • the position 141 of the point on the field-of-view image 140 viewed from the first viewpoint 101 corresponds to a position 130 in the three-dimensional space.
  • the tablet terminal 110 generates a virtual object (hereinafter referred to as a "virtual guide 150") that extends in the same direction as the first viewpoint 101 through the position 141 specified on the field-of-view image 140. A point corresponding to the position 130 to be specified in the three-dimensional space exists on the generated virtual guide 150.
  • the camera 104 photographs the scenery from the second viewpoint 102, and a field-of-view image 160 (see FIG. 3) captured at the second viewpoint 102 is transmitted to the tablet terminal 110.
  • the field-of-view image 160 as shown in FIG. 3 is displayed on the screen 106 of the tablet terminal 110. Further, on the tablet terminal 110, the generated virtual guide 150 is superimposed on the field-of-view image 160 and displayed.
  • the position 130 in the three-dimensional space to be specified is identified as the position 161 corresponding to the position 151 specified on the virtual guide 150 on the field-of-view image 160, as shown in FIG. 4A. That is, a virtual guide 150 that extends in the same direction as the first viewpoint 101 through the position 141 (see FIG. 2) specified on the field-of-view image viewed from the first viewpoint 101 is generated, the generated virtual guide 150 is displayed on the field-of-view image viewed from the second viewpoint 102, and a position 151 (see FIG. 4A) is specified on the displayed virtual guide 150, making it possible to identify an arbitrary position in the three-dimensional space.
  • in other words, the virtual guide 150 is displayed and arranged in space according to the spatial coordinates from the first viewpoint 101, and the virtual guide 150 fixed in space is viewed from the second viewpoint 102 to determine the final position 130 in the three-dimensional space.
  • the virtual guide 150 may also be generated by the HMD 100, or by the server 120 capable of handling a large amount of information, rather than by the tablet terminal 110.
  • in that case, the server 120 distributes virtual guide information for displaying the virtual guide 150 on the tablet terminal 110 by wireless communication or the like.
  • the process of generating the virtual guide and the process of specifying the position may also be displayed on the HMD 100 so that the first user 10 can see it.
  • as an extended position specification method, a position near the virtual guide 150 may also be specified. This will be described with reference to FIG. 4B.
  • when a position 152 that is not on the virtual guide 150 is specified, the distance in the depth direction along the line of sight from the second viewpoint 102 cannot be determined simply by specifying the position.
  • therefore, a rule is established in advance, for example, taking the point at which the distance to the virtual guide 150 is shortest, that is, the point giving the shortest distance between the straight line in the direction of the second viewpoint 102 passing through the position 152 and the straight line indicated by the virtual guide 150; the position in the depth direction can then be determined according to this rule (a sketch of this rule follows).
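  • As a concrete illustration of the shortest-distance rule above, the following is a minimal sketch in Python with NumPy (the function and argument names are illustrative, not from the patent): it finds the point on the virtual guide 150 closest to the line of sight of the second viewpoint 102 through the specified position 152, using the classical closest point between two 3D lines.

```python
import numpy as np

def closest_point_on_guide(guide_origin, guide_dir, ray_origin, ray_dir):
    """Point on the guide line (guide_origin + t*guide_dir) nearest to the
    view ray (ray_origin + s*ray_dir): solve the two normal equations of
    min |P1(t) - P2(s)|^2 for t."""
    d1 = guide_dir / np.linalg.norm(guide_dir)
    d2 = ray_dir / np.linalg.norm(ray_dir)
    w0 = guide_origin - ray_origin
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2   # a == c == 1 after normalization
    d, e = d1 @ w0, d2 @ w0
    denom = a * c - b * b
    if np.isclose(denom, 0.0):
        t = -d / a  # parallel lines: project the ray origin onto the guide
    else:
        t = (b * e - c * d) / denom
    return guide_origin + t * d1
```

  • In terms of the embodiment, guide_origin and guide_dir would describe the virtual guide 150 fixed in world coordinates, ray_origin and ray_dir the line of sight of the second viewpoint 102 through the position 152, and the returned point would serve as the determined position 130.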
  • as another processing method when the second user 20 designates a position not on the virtual guide 150, it is also possible, for example, to specify the position on the virtual guide 150 that is closest to the position 152 on the field-of-view image of the second viewpoint 102.
  • a plurality of processing rules may be defined when the second user 20 designates a position not on the virtual guide 150 so that the second user 20 can designate which rule to apply.
  • FIG. 6 is an image diagram for explaining a case where a virtual object is arranged at the position specified in the embodiment shown in FIG. 1.
  • the tablet terminal 110 determines the designation of, for example, the position 130 in the three-dimensional space by the spatial position designation process described above, and generates a virtual object 170 to be placed at the position 130.
  • the spatial coordinates of the position 130 are associated with the virtual object 170.
  • the HMD 100 displays the generated virtual object 170 with spatial coordinates on the field-of-view image 180 (see FIG. 6) viewed from, for example, the third viewpoint 103 (see FIG. 1), which is a bird's-eye view.
  • FIG. 6 shows, as an example of the virtual object 170, a virtual object labeled "This Point" for indicating a destination or gathering place.
  • the first user 10 wearing the HMD 100 can easily visually recognize the destination or meeting place indicated by the second user 20 operating the tablet terminal 110. That is, by generating the virtual object 170 and arranging it at the specified position 130 in the three-dimensional space, useful information such as instructions indicated by the virtual object 170 can be accurately and easily conveyed, at the specified position 130, to the first user 10, who is different from the second user 20 who placed it.
  • the virtual object may also be generated by the server 120 rather than the tablet terminal 110, and distributed by wireless communication or the like to the virtual guide display device that displays the virtual object 170 (the HMD 100 in the above example).
  • alternatively, only the spatial coordinates may be specified and the spatial coordinate data transmitted and received, with the HMD 100 displaying the virtual object 170 at the specified spatial coordinate position.
  • the server 120 may also specify the type and display direction of the object, and the HMD 100 may display the object accordingly.
  • FIG. 7A is an image diagram illustrating a case of identifying an object at a designated position in the embodiment shown in FIG. 1.
  • positions 191 and 192 in the three-dimensional space are designated by the spatial position designation process described above.
  • the HMD 100 uses the specified position 191 to identify the tree 193, a physical object at the specified position 191, on the field-of-view image 180 seen from the third viewpoint 103, for example.
  • similarly, the virtual object 194 located at the specified position 192 is identified on the field-of-view image 180 viewed from the third viewpoint 103, for example.
  • FIG. 7A shows an example of a screen displaying a virtual object 194 labeled "This is a memorial tree" for explaining the tree 193.
  • FIG. 7A shows a case in which a physical object such as a tree 193 is specified when viewed from the third viewpoint 103.
  • since the first user 10 who is viewing the physical space walks on the ground, it is difficult for the first user 10 to move to the position of the third viewpoint 103 shown in FIG. 7A; in practice, the physical object may be identified by viewing from a viewpoint that is feasible in the physical space.
  • the image of the third viewpoint 103 is merely an example for explanation.
  • a virtual guide 151a based on the position 151 specified at the second viewpoint 102 may further be displayed, and the configuration may allow a point 162 other than the intersection of the two guides to be specified.
  • in that case, a rule such as taking the point at which the sum of the distances to the virtual guide 150 and the virtual guide 151a is smallest may be set in advance and used for the determination. This technique improves the convenience of position designation, for example by allowing designation of a position deviating from the intersection of the plurality of virtual guides 150 and 151a.
  • this allows the first user 10 wearing the HMD 100 to accurately and easily confirm the physical object or virtual object pointed to by the second user 20 operating the tablet terminal 110. That is, by specifying a position in the three-dimensional space while going back and forth between the first viewpoint and the second viewpoint, or by recognizing the designated point from the difference in appearance between the first or second viewpoint and an n-th viewpoint, a physical object or virtual object at a specific position can be indicated accurately and easily to a user viewing it.
  • for example, a worker corresponding to the first user 10 photographs a work place or the like with the camera 104 mounted on the HMD 100.
  • a support instructor corresponding to the second user 20 in the office can then easily specify a desired position in the three-dimensional space while viewing the camera-captured image of the work place or the like on the screen 106 of the tablet terminal 110. Information such as work instructions can thus be given to the worker by a virtual object placed at the specified position, or an object at the specified position can be identified and pointed out to inform the worker. In this way, even a support instructor who is unfamiliar with the apparatus can accurately and conveniently provide support such as instructions to remote workers.
  • FIG. 8 is a diagram schematically showing the appearance of another configuration example of the virtual guide display device and the virtual guide display system according to this embodiment.
  • the tablet terminal 110 is equipped with a camera 801 and uses the camera 801 to photograph the scenery from the first viewpoint 101 and the second viewpoint 102. FIGS. 2 to 7B are used as the drawings for explaining the position designation.
  • the tablet terminal 110 is provided with a camera 801 that captures a field of view scenery, captures the field of view of the second user 20 from the first viewpoint 101 with the camera 801, and displays the captured field of view image 140 on the screen.
  • the second user 20 specifies a position 141 on the displayed field-of-view image 140.
  • the tablet terminal 110 generates a virtual guide 150 (see FIG. 3) that extends in the same direction as the first viewpoint 101 through the position 141 specified on the field-of-view image 140.
  • when the second user 20 moves to the second viewpoint 102, the camera 801 of the tablet terminal 110 photographs the field-of-view scenery at the second viewpoint 102, and the photographed field-of-view image 160 (see FIG. 3) is displayed on the screen 106.
  • the position 130 in the three-dimensional space to be specified can thereby be specified as the position 161 on the field-of-view image 160.
  • the HMD 100 and the tablet terminal 110 are used here as examples of the first virtual guide display device and the second virtual guide display device, but any device such as a smartphone or a personal computer may be used.
  • both the first viewpoint 101 and the second viewpoint 102 use field-of-view images of the current landscape.
  • alternatively, a position may be specified using a field-of-view image taken at the current viewpoint, with the virtual guide generated and displayed, and a field-of-view image taken in the past used as the second viewpoint to specify the position. That is, past field-of-view images may be combined and used as appropriate for the first viewpoint and the second viewpoint.
  • past visual field images may be used for both the first viewpoint and the second viewpoint. It goes without saying that this past field-of-view image must be associated with spatial coordinate data. Also, satellite photographs and map data may be used as long as they have spatial coordinate data. Image data in the server 120, which can store a large amount of information data 1024, may be used as the past view image. In this case, the capacity load on the first virtual guide display device and the second virtual guide display device can be greatly reduced.
  • the server 120 may execute not only processing such as holding past field-of-view images and virtual objects, but also a part of other operation processing such as position designation in a three-dimensional space.
  • FIG. 9 is an example of a flowchart explaining the basic operation of the virtual guide display device and virtual guide display system according to this embodiment.
  • as shown in FIG. 9, when a field-of-view image at a first viewpoint is obtained by the first virtual guide display device (S901) and a position is specified on the obtained field-of-view image at the first viewpoint (S902), a virtual guide extending in the same direction as the first viewpoint through the specified position is generated (S903).
  • An example algorithm for generating and displaying a virtual guide is described below.
  • the virtual guide information, including the shape information of the virtual guide, position information indicating its position in the world coordinate system, and direction information indicating its direction in the world coordinate system, is sent to the second virtual guide display device.
  • a virtual guide is displayed on the visual field image at the second viewpoint (S904), and a position is designated on the displayed virtual guide (S905).
  • the position in the three-dimensional space can be specified by the process 900 for specifying the position from the above two viewpoints.
  • the second virtual guide display device converts the specified position into a position in the world coordinate system and outputs the position.
  • as the output mode, the position may be displayed on the display 1034 of the second virtual guide display device, or it may be passed from the virtual guide generation display processing unit 1012 (FIG. 10) inside the processor 1010 (FIG. 10) of the second virtual guide display device to the virtual object generation processing unit 1013 and output to each processing unit that uses the specified position, for example to arrange a virtual object at this position. Thereafter, the process proceeds to processing that uses the output specified position (S906).
  • examples include utilization processes such as arranging a virtual object at the specified position, using the position to specify a physical object or virtual object located there, and transmitting the coordinate information of the specified position to another terminal for use.
  • for example, a virtual object representing information such as an instruction can be arranged, and useful information such as the instruction can be given to a user who can visually recognize the virtual object in the three-dimensional space.
  • by viewing the generated virtual guide from the second viewpoint, the final position can be specified on the virtual guide, so an arbitrary position in the three-dimensional space can be specified accurately and easily.
  • useful information can be displayed by a virtual object placed at the specified position, or an object at the specified position can be pointed out to the user, making it possible to implement user-friendly instruction support (a toy sketch of this overall flow follows).
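  • To make the flow of FIG. 9 concrete, here is a runnable toy sketch in Python with NumPy, assuming a simple pinhole camera model; the focal length, the picked image point, and the directly chosen parameter k stand in for the interactive steps S902 and S905 and are not from the patent.

```python
import numpy as np

def pixel_ray(R, O, u, v, f=1.0):
    """World-space ray through image point (u, v) of a camera with
    orientation R (local-to-world rotation), position O, and focal
    length f (pinhole model, an assumption of this sketch)."""
    direction = R @ np.array([u, v, f])
    return O, direction / np.linalg.norm(direction)

# S901-S903: from the first viewpoint, the point specified on the first
# field-of-view image defines the virtual guide: a line through the first
# viewpoint extending in the direction of the specified point.
R1, O1 = np.eye(3), np.array([0.0, 0.0, 0.0])
_, W1 = pixel_ray(R1, O1, u=0.2, v=-0.1)     # guide direction (world coords)

# S904-S905: at the second viewpoint the guide is superimposed on the second
# field-of-view image and a point on it is specified; here that choice is
# represented directly by the parameter k along the guide.
k = 5.0
position = O1 + k * W1                        # world-coordinate output (S906)
print(position)
```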
  • FIG. 10 is a functional block diagram of a configuration example of a first virtual guide display device and a second virtual guide display device according to this embodiment.
  • the first virtual guide display device (HMD 100) and the second virtual guide display device (tablet terminal 110) include a processor 1010, a memory 1020, a camera 104 (a camera 801 in the tablet terminal 110), a left-eye line-of-sight detection sensor 1032, a right-eye line-of-sight detection sensor 1033, a display 1034, an operation input device 1035, a microphone 1036, a speaker 1037, a vibrator 1038, a communication I/F (communication device) 1039, a sensor group 1040, and the like, which are interconnected via a bus 1050.
  • the processor 1010 is composed of a CPU, ROM, RAM, etc., and constitutes a controller of the first virtual guide display device (HMD 100) and the second virtual guide display device (tablet terminal 110).
  • the processor 1010 executes processing according to an operating system (OS) 1022 stored as a control program 1021 in the memory 1020 and an application program 1023 for operation control, thereby controlling the functional units in the processor 1010 and implementing the functions of the OS, middleware, applications, and the like.
  • Functional units configured by execution by the processor 1010 include a position designation processing unit 1011, a virtual guide generation display processing unit 1012, and a virtual object generation processing unit 1013.
  • the memory 1020 is composed of a non-volatile storage device or the like, and stores various programs 1021 and information data 1024 handled by the processor 1010 and the like.
  • the information data 1024 includes coordinate position information 1025 indicating spatial coordinate positions such as a designated position, virtual guide information 1026 required to generate and display a virtual guide, virtual object information 1027 representing a virtual object, and field-of-view image information 1028 of photographed scenery including physical objects.
  • the cameras 104 and 801 take images of the field of view around the front, and acquire the field of view image by converting the light incident from the lens into an electrical signal with an imaging device.
  • the first user 10 obtains a field image by photographing the field of view with the camera 104 while visually recognizing an actual object in the front surrounding field of view with his/her own eyes.
  • the left-eye line-of-sight detection sensor 1032 and the right-eye line-of-sight detection sensor 1033 detect the line of sight by capturing the movements and orientations of the left and right eyes, respectively.
  • a well-known technology that is generally used as eye tracking processing may be used.
  • a known technique detects the line of sight from the position of the pupil relative to the position of the reflected light on the cornea (corneal reflection) captured by an infrared camera, with the corneal reflection used as the reference point.
  • the display 1034 includes a screen 106 configured by a liquid crystal panel, and displays a visual field image, notification information to the user such as an alarm, and the like on the screen 106.
  • the operation input device 1035 may be, for example, a capacitive touch pad stacked on the screen 106 .
  • the touch pad detects an approach or contact operation (touch operation) by a finger, touch pen, or the like.
  • the position can be easily specified by the user performing a touch operation on the position to be specified on the display image.
  • in the case of the HMD 100, when it is of the optical see-through type, the display 1034 may be configured using, for example, a projector that projects a virtual object, notification information to the user, and the like, and a transparent half mirror that shows the projected virtual object and the like in front of the eyes.
  • when the HMD 100 is of the video see-through type, the display 1034 is configured using, for example, a liquid crystal panel that displays together the physical object in front of the eyes photographed by the camera 104 and the virtual object. In either case, the user can use the HMD 100 to visually recognize the physical object and the virtual object superimposed within the field of view in front of the eyes.
  • the operation input device 1035 may be, for example, a keyboard, key buttons, touch keys, or the like.
  • the operation input device 1035 may be provided in a position and form in which the first user 10 can easily perform an input operation in the HMD 100, or may be separated from the main body of the HMD 100 and connected by wire or wirelessly.
  • the input operation screen may be displayed on the screen 106 of the display 1034, and the input operation information may be captured based on the position on the input operation screen to which the line of sight is directed detected by the left-eye line-of-sight detection sensor 1032 and the right-eye line-of-sight detection sensor 1033.
  • a pointer may be displayed on the input operation screen and input operation information may be obtained by operating the pointer with the operation input device 1035 .
  • as the operation input device 1035, the user may also utter a voice indicating an input operation, which is collected by the microphone 1036 to capture the input operation information.
  • the microphone 1036 collects voice from the outside or the user's own voice and converts it into voice data. Instruction information uttered by the user can be taken into the virtual guide display device, and an operation in response to the instruction information can be conveniently executed.
  • a speaker 1037 outputs sound based on the sound data. Thereby, the notification information to the user can be notified by voice. Speaker 1037 can be replaced with headphones.
  • the vibrator 1038 generates vibration under the control of the processor 1010, and converts notification information to the user transmitted by the virtual guide display device into vibration.
  • by causing the vibrator 1038 to vibrate the user's head, to which it is closely attached, notifications can be reliably transmitted to the user.
  • examples of information to be notified to the user include designation of the position at the first viewpoint 101, generation and display of the virtual guide 150, placement of the virtual object 170, and final specification of the position in the three-dimensional space; such notifications and their contents can improve usability.
  • the communication I/F 1039 is a communication interface that performs wireless communication with other nearby information terminals by short-range wireless communication, wireless LAN, base station communication, or the like, and includes an antenna and the like.
  • the communication I/F 1039 performs wireless communication between the first virtual guide display device and the second virtual guide display device and with the server 120 .
  • short-range wireless communication is performed using Bluetooth (registered trademark), IrDA (Infrared Data Association, registered trademark), Zigbee (registered trademark), HomeRF (Home Radio Frequency, registered trademark), or a wireless LAN such as Wi-Fi (registered trademark).
  • long-distance wireless communication such as W-CDMA (Wideband Code Division Multiple Access, registered trademark) and GSM (Global System for Mobile Communications) may be used.
  • the communication I/F 1039 may use other methods such as optical communication and sound wave communication as means for wireless communication.
  • in that case, a light-emitting part and a light-receiving part, or a speaker and a microphone, are used instead of the transmitting/receiving antenna.
  • high-speed, large-capacity communication networks such as 5G (fifth-generation mobile communication system) and local 5G may also be used for wireless communication, which can dramatically improve usability.
  • the distance measuring sensor 1041 is a sensor that measures the distance between the virtual guide display device and a real object in the outside world.
  • the distance measuring sensor 1041 may be a TOF (Time Of Flight) sensor, a stereo camera, or another type. From the three-dimensional data created using the distance measuring sensor 1041, it is possible to create a virtual space in which physical objects are virtually arranged, and it also becomes possible to arrange virtual objects by designating three-dimensional coordinates in that virtual space.
  • the acceleration sensor 1042 is a sensor that detects acceleration, which is a change in speed per unit time, and can detect movement, vibration, impact, and the like.
  • the gyro sensor 1043 is a sensor that detects the angular velocity in the rotational direction, and can capture the state of vertical, horizontal, and diagonal postures.
  • the posture and movement of the head of the first user 10 can be detected using the acceleration sensor 1042 and the gyro sensor 1043 mounted on the HMD 100 .
  • the geomagnetic sensor 1044 is a sensor that detects the magnetic force of the earth, and detects the direction in which the virtual guide display device is facing. It is also possible to detect the movement of the virtual guide display device by using a 3-axis type that detects geomagnetism in the vertical direction as well as the front and back directions and the left and right directions, and by capturing changes in geomagnetism with respect to the movement of the virtual guide display device.
  • the gyro sensor 1043 and the geomagnetic sensor 1044 function as direction sensors that detect the orientation of the HMD 100 in the world coordinate system.
  • the GPS sensor 1045 receives signals from GPS (Global Positioning System) satellites in the sky and detects the current position of the virtual guide display device, which makes it possible to determine the position of a moving viewpoint.
  • GPS sensor 1045 is a position sensor that detects a position in the world coordinate system.
  • the position designation processing unit 1011 performs processing for designating a position on the field-of-view image displayed on the display 1034 using the operation input device 1035 .
  • the position of a point or range desired by the user is designated on the field of view image viewed from the first viewpoint 101
  • the desired position on the displayed virtual guide 150 is designated on the field of view image viewed from the second viewpoint 102.
  • a position 152 around the virtual guide may be specified using the virtual guide 150 as a guideline, not only on the virtual guide 150 .
  • the virtual guide generation display processing unit 1012 generates a virtual guide 150 that extends in the same direction as the first viewpoint 101 through the point 141 or range position specified on the field-of-view image at the first viewpoint 101, and performs processing for displaying the virtual guide 150 on the field-of-view image at the second viewpoint 102.
  • the virtual object generation processing unit 1013 generates a virtual object 170 that is an object in a virtual space different from the real space. Note that the virtual object 170 generated by the external server 120 may be imported into the virtual guide display device through wireless communication.
  • the position designation processing unit 1011 designates the position 141 on the field-of-view image 140 viewed from the first viewpoint 101, and the virtual guide generation display processing unit 1012 generates a virtual guide 150 that passes through the designated position 141 and extends in the same direction as the first viewpoint 101.
  • the virtual guide generation display processing unit 1012 displays the generated virtual guide 150 on the field-of-view image 160 viewed from the second viewpoint 102, and the position designation processing unit 1011 specifies a position on the displayed virtual guide 150. Therefore, the virtual guide 150 is generated according to the spatial coordinates at the first viewpoint 101, and the final position in the three-dimensional space can be specified while viewing the virtual guide 150 fixed in space from the second viewpoint 102. That is, by specifying a position in space from two different viewpoints using the virtual guide 150, an arbitrary position in the three-dimensional space can be determined accurately, easily, and conveniently.
  • the field-of-view images 140 and 160 viewed from the first viewpoint 101 and the second viewpoint 102 may be images captured by the camera 104 of the HMD 100 or images captured by the camera 801 of the tablet terminal 110 .
  • the virtual object 170 generated by the virtual object generation processing unit 1013 is arranged at the specified coordinate position in the three-dimensional space, and the virtual object with spatial coordinates is displayed on the display 1034 of the first and second virtual guide display devices. By displaying, it is possible to inform the user who operates the first and second virtual guide display devices of the useful information indicated by the virtual object 170 at the appropriate spatial coordinate position.
  • the user can be pointed to the physical object or virtual object at the specified position, and an accurate object indication can easily be given to the user.
  • the HMD 100 captures the fields of view seen from the first viewpoint 101 and the second viewpoint 102 with the camera 104, and transmits the first and second field-of-view images to the tablet terminal 110.
  • alternatively, the tablet terminal 110 photographs the fields of view seen from the first viewpoint 101 and the second viewpoint 102 with the camera 801 and acquires the photographed first and second field-of-view images.
  • in the tablet terminal 110, the position designation processing unit 1011 specifies a position on the displayed first field-of-view image viewed from the first viewpoint 101.
  • the virtual guide generation display processing unit 1012 generates a virtual guide 150 extending in the same direction as the first viewpoint 101 through the position 141 specified on the field-of-view image.
  • after that, in the tablet terminal 110, based on the second field-of-view image transmitted from the HMD 100 or the second field-of-view image acquired by the tablet terminal 110, the virtual guide generation display processing unit 1012 displays the generated virtual guide 150 on the second field-of-view image viewed from the second viewpoint 102, and the position designation processing unit 1011 designates a position 151 on the displayed virtual guide 150.
  • the second virtual guide display device can accurately and easily determine an arbitrary position in the three-dimensional space with good usability.
  • the virtual object generation processing unit 1013 generates a virtual object 170 at a position 151 on the virtual guide 150 specified by the tablet terminal 110, and transmits the generated virtual object with spatial coordinates to the HMD 100.
  • the HMD 100 receives the virtual object with spatial coordinates transmitted from the tablet terminal 110, and displays the received virtual object with spatial coordinates on the visual field image according to the spatial coordinates.
  • this allows the user holding the tablet terminal 110 to inform the user wearing the HMD 100 of useful information indicated by the virtual object at a suitable spatial coordinate position.
  • the tablet terminal 110 transmits to the HMD 100 spatial coordinate position information indicating a position 151 on the virtual guide 150 specified by the position specification processing unit 1011 .
  • the HMD 100 receives the spatial coordinate position information transmitted from the tablet terminal 110 and displays the position indicated by the received spatial coordinate position information on the visual field image. Therefore, the user holding the tablet terminal 110 can point to the user wearing the HMD 100 a physical object or a virtual object at a designated position in the three-dimensional space.
  • the second embodiment is an embodiment in which a range is specified instead of a point when specifying a position on the view field image in the virtual guide display device.
  • FIGS. 11A to 11G are diagrams for explaining an operation for specifying a range instead of a point when specifying a position on the field-of-view images viewed from the first viewpoint 101 and the second viewpoint 102.
  • in FIGS. 11A to 11G, the parts denoted by the same reference numerals as in FIGS. 2 to 7 operate in the same way as already described for FIGS. 2 to 7, so detailed descriptions thereof are omitted.
  • a range 1101 on the field-of-view image 140 is specified by the position designation processing unit 1011.
  • when the range 1101 is designated, as shown in FIG. 11B, the virtual guide 1102 displayed on the field-of-view image 160 is generated as a three-dimensional object having the range 1101 as its cross section.
  • when the position designation processing unit 1011 acquires a position designation operation on the virtual guide 1102, the position in the three-dimensional space can be specified together with a range. Therefore, when arranging a virtual object with a wide extent, a range corresponding to that extent can be specified, or the range of a physical object or virtual object can be specified to point to that object (see the sketch below).
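  • The following is a minimal sketch (Python with NumPy, pinhole camera assumed as before; names are illustrative, not from the patent) of how a range 1101 specified on the first-viewpoint image could be extruded into a prism-shaped virtual guide 1102 whose cross section is that range:

```python
import numpy as np

def range_guide_sections(R1, O1, corners_uv, depths=(1.0, 10.0), f=1.0):
    """Rings of world-space corner points of a prism-shaped guide whose
    cross section is the image-space range given by corners_uv, extruded
    along the viewing direction of the first viewpoint (R1, O1)."""
    rays = [R1 @ np.array([u, v, f]) for (u, v) in corners_uv]
    rays = [r / np.linalg.norm(r) for r in rays]
    return [[O1 + z * r for r in rays] for z in depths]

# Example: a rectangular range specified on the first field-of-view image.
sections = range_guide_sections(
    np.eye(3), np.zeros(3),
    [(-0.1, -0.1), (0.1, -0.1), (0.1, 0.1), (-0.1, 0.1)])
```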
  • a ring object 1103 framing the vicinity of the designated position may also be displayed, so that the specified position can be visually recognized stereoscopically.
  • the range to be selected and specified may become distorted. That is, not only may the shape of the range specified at the first viewpoint 101 be distorted, but a range with a shape different from that at the first viewpoint 101 may also be specified at the second viewpoint 102, and when combined with the range seen from the first viewpoint 101 side, the result may be stereoscopically distorted. In such a case, a sphere object 1106 containing the selected range may be displayed as the specified range.
  • for example, a sphere object 1106 surrounding a three-dimensionally distorted range 1105 is specified as the range.
  • by displaying the sphere object 1106 containing the selected range 1105 as the specified range, the identification can be performed accurately and conveniently.
  • the object may be any closed three-dimensional area other than the sphere object 1106, such as a rectangular parallelepiped or another geometric shape.
  • FIG. 12 is a diagram for explaining a position specification operation when arranging a virtual object at a specified position in a three-dimensional space.
  • the parts shown in FIGS. 2 to 7 and denoted by the same reference numerals have the same operations as those already explained in FIGS. 2 to 7, so detailed explanations thereof will be omitted.
  • a case of arranging a virtual object representing a new car for advertisement at a specified position in a three-dimensional space will be described.
  • a virtual guide 1202 is generated and displayed that has the shape of the new car of the virtual object 1201 as its cross section and extends in the same direction as the first viewpoint 101.
  • the position designation processing unit 1011 acquires the designation operation.
  • FIGS. 12C and 12D are field-of-view images of a virtual object 1204, arranged at the specified position in the three-dimensional space and having a cross-sectional shape representing the shape of the new car, viewed from the second viewpoint 102 and the third viewpoint 103, respectively.
  • in this way, a shape object and a virtual guide reflecting the shape and size of the virtual object 1204 to be placed are displayed when the position is specified at the first viewpoint 101 and the second viewpoint 102, providing further convenience.
  • FIGS. 13A and 13B are diagrams showing, in addition to the field-of-view image 160 viewed from the second viewpoint 102, the field-of-view image 140 viewed from the first viewpoint 101 displayed in reduced form on a part of the screen 106 (a screen area smaller than that of the field-of-view image 160).
  • in FIGS. 13A and 13B, the parts denoted by the same reference numerals as in FIGS. 2 to 7 operate in the same way as already described for FIGS. 2 to 7, so detailed descriptions thereof are omitted.
  • the field image 160 viewed from the second viewpoint 102 is displayed on the left side of the screen 106
  • the field of view image 140 viewed from the first viewpoint 101 is displayed on the right side of the screen 106.
  • the vicinity of the position designation portion at the first viewpoint 101 may be enlarged and displayed, or a transition video from the first viewpoint 101 to the second viewpoint 102 may be used. If playback control such as repetition is enabled for the transition video, the state of the transition can be easily grasped, which is effective in improving the accuracy of position designation.
  • the state designated at the second viewpoint 102 may also be synthesized into the field-of-view image 140 of the first viewpoint 101 and reflected there.
  • in that case, an image of the virtual object as seen from the angle of the first viewpoint 101 is synthesized into the field-of-view image 140 from the first viewpoint 101. This makes it possible to confirm how the object looks from the first viewpoint 101 without having to return to the first viewpoint 101.
  • as a method of displaying the virtual guide, if processing is performed so that the virtual guide 150 is not displayed where it falls in the shadow of a real object when viewed from the current viewpoint (so-called occlusion processing), the positional relationship in the depth direction with respect to the real object becomes easy to understand and position specification becomes easy. However, if physical objects are densely packed, the virtual guide 150 becomes difficult to see, and when the position of a point is specified from the second viewpoint 102, a line-shaped virtual guide may be thin and hard to see. To address this, the virtual guide 150 may be displayed thicker, blinked, or displayed in front of the physical object that shields it, or these may be combined.
  • the original display of the virtual guide 150 and the one displayed in front of the real object may be combined and displayed alternately, or a combination of the thick display and the original thin display may be used.
  • the field of view image photographed by the other camera may be used as the view image 160 at the second viewpoint 102 to specify the spatial coordinate position.
  • Other cameras include, for example, surveillance cameras, cameras used by other people on site, and the like. Needless to say, the field-of-view image must be associated with the spatial coordinate data. Also, satellite photographs and map data may be used as long as they have spatial coordinate data.
  • a virtual space in which physical objects are virtually arranged can be created from the three-dimensional data created using the distance measuring sensor 1041, and a virtual object can be arranged by designating three-dimensional coordinates in that virtual space.
  • in this case, the second viewpoint 102 may be obtained by rotating the virtual space. That is, by rotating the three-dimensional data of the objects in the virtual space, the viewpoint can be changed and the position to be specified can be designated without moving from the first viewpoint 101 to the second viewpoint 102.
  • a mark may be used to identify a candidate for specification. For example, as shown in FIG. 14A, when a plurality of physical objects 1401, 1402, 1403, 1404, and 1405 exist on a virtual guide 1400, tags 1411, 1412, 1413, 1414, and 1415 are displayed at the intersections between the virtual guide 1400 and the physical objects 1401, 1402, 1403, 1404, and 1405, as shown in FIG. 14B, and selecting a tag specifies the tagged physical object. As a result, even when a plurality of physical objects are densely packed on the virtual guide 1400 and it is difficult to select the designated object, the position can be designated by a tag selection operation, and the desired physical object can be pointed out accurately and easily (a minimal sketch of this tagging step follows).
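  • A minimal sketch of the tagging step (Python; intersect is an assumed hit-test callback, and the dictionary layout is illustrative, not from the patent):

```python
def tags_at_intersections(guide_ray, objects, intersect):
    """Attach a selectable tag wherever the virtual guide intersects an
    object (FIG. 14B); selecting a tag then designates the tagged object.
    intersect(ray, obj) returns the intersection point or None."""
    tags = []
    for obj in objects:
        hit = intersect(guide_ray, obj)
        if hit is not None:
            tags.append({"object": obj, "anchor": hit})
    return tags
```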
  • the coordinate system of the target space in which the virtual guide is set is assumed to be the world coordinate system.
  • the target space may be a real space or a virtual space.
  • the viewpoint may or may not be the same as the position of the actual virtual guide display device. In either case, it is assumed that an image seen from the viewpoint used as the reference for display is displayed on the virtual guide display device.
  • the orientation of the local coordinate system with respect to the world coordinate system is represented by the rotation matrix R.
  • the coordinate axis direction rotated by R becomes the coordinate axis direction of the local coordinate system.
  • the origin of the local coordinate system, that is, the viewpoint is represented by O, which is the position coordinate in the world coordinate system.
  • letting U be the position coordinate in the local coordinate system corresponding to the position coordinate X in the world coordinate system, the relationship is expressed by the following equation (1):
  • U = R⁻¹(X − O)  (1)
  • the rotation matrix R and the origin O of the local coordinate system are updated as the viewpoint moves after initial setting.
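  • Equation (1) transcribes directly into code (a sketch in Python with NumPy; for a rotation matrix the inverse equals the transpose):

```python
import numpy as np

def world_to_local(X, R, O):
    """Equation (1): U = R^-1 (X - O), mapping world coordinates X into
    the local coordinate system with orientation R and origin O."""
    return np.asarray(R).T @ (np.asarray(X) - np.asarray(O))
```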
  • next, the reference line that serves as the reference for drawing the virtual guide in the world coordinate system (hereinafter simply referred to as the reference line) is constructed.
  • the direction of the virtual guide viewed from the first viewpoint in FIG. 2 is determined by designating the point 141.
  • the direction is determined as the direction vector N1 in the local coordinate system (hereinafter referred to as the first local coordinate system) corresponding to the first viewpoint.
  • letting R1 be the rotation matrix representing the orientation of the first local coordinate system at this time, the direction vector W1 of the reference line in the world coordinate system is given by the following equation (2):
  • W1 = R1N1  (2)
  • letting O1 be the position of the first viewpoint in the world coordinate system, the reference line in the world coordinate system can be expressed as the straight line passing through the point O1 and extending in the direction of the direction vector W1.
  • from this, the reference line of the virtual guide 150 can be constructed in the local coordinate system corresponding to the second viewpoint 102 (hereinafter referred to as the second local coordinate system).
  • let R2 be the rotation matrix representing the orientation of the second local coordinate system in the world coordinate system, and let O2 be the position of the second viewpoint 102 in the world coordinate system.
  • a point X on the reference line is expressed with a real parameter k as X = O1 + kW1  (3); substituting equation (3) into equation (1), the corresponding position U2 in the second local coordinate system is represented by the following equation (4):
  • U2 = R2⁻¹(O1 − O2 + kW1)  (4)
  • the range of the real number parameter k is the range required for drawing on the display screen displaying the virtual guide. Exceptionally, when the direction of the virtual guide is perpendicular to the display screen, only the section of the virtual guide corresponding to the point R 2 -1 (O 1 -O 2 ) is displayed.
  • In this way, the reference line can be constructed from the world-coordinate position of the viewpoint used as the display reference in the virtual guide display device (tablet terminal 110) and the orientation of its local coordinate system in the world coordinate system; a short code sketch of this construction follows.
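As a minimal sketch of the construction in equations (1) to (4) (assuming NumPy; the function and variable names mirror the symbols above and are illustrative, not part of the described device):

```python
import numpy as np

def world_to_local(X, R, O):
    """Equation (1): U = R^-1 (X - O). For a rotation matrix, R^-1 = R^T."""
    return R.T @ (X - O)

def reference_line(R1, N1, O1):
    """Equation (2): the reference line passes through the first viewpoint O1
    with world direction W1 = R1 @ N1, where N1 is the designated direction
    in the first local coordinate system."""
    return O1, R1 @ N1

def guide_point_in_second_frame(O1, W1, O2, R2, k):
    """Equations (3) and (4): the point X = O1 + k*W1 on the reference line,
    expressed in the second local coordinate system as
    U2 = R2^-1 (O1 - O2 + k*W1)."""
    return R2.T @ (O1 - O2 + k * W1)
```

Drawing the virtual guide from the second viewpoint then amounts to evaluating guide_point_in_second_frame over the range of the parameter k required for the display screen.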
  • The present invention is not limited to the above-described embodiments and includes various modifications.
  • The above embodiments have been described in detail in order to explain the present invention in an easy-to-understand manner, and the invention is not necessarily limited to embodiments having all the described configurations.
  • Part of the configuration of one embodiment can be replaced with the configuration of another embodiment, and the configuration of another embodiment can be added to the configuration of one embodiment.
  • Part or all of the above configurations, functions, processing units, processing means, and the like may be realized by hardware, for example by designing them as integrated circuits.
  • Each of the above configurations, functions, and the like may be realized by software, with the processor 1010 interpreting and executing the program 1021 that realizes each function.
  • Information such as the programs, tables, and files that implement each function may be stored in the memory 1020, in a recording device such as a hard disk or SSD (Solid State Drive), in a recording medium such as an IC card, SD card, or DVD, or in a device on a communication network.
  • Control lines and information lines are shown where considered necessary for the explanation, and not all control lines and information lines in the product are necessarily shown. In practice, almost all configurations may be considered to be interconnected.
10: first user
20: second user
100: HMD
101: first viewpoint
102: second viewpoint
103: third viewpoint
104, 801: camera
105: arrow
106: screen
110: tablet terminal
120: server
140, 160, 180: view images
150, 151a, 1102, 1104, 1202, 1400: virtual guides
170, 194, 1201, 1204: virtual objects
191, 192: designated positions
193: tree
1010: processor
1011: position designation processing unit
1012: virtual guide generation and display processing unit
1013: virtual object generation processing unit
1020: memory
1021: program
1023: application program
1024: information data
1025: coordinate position information
1026: virtual guide information
1027: virtual object information
1028: view image information
1032: left-eye line-of-sight detection sensor
1033: right-eye line-of-sight detection sensor
1034: display
1035: operation input device
1036: microphone
1037: speaker
1038: vibrator
1039: communication I/F
1040: sensor group
1041: ranging sensor
1042: acceleration sensor


Abstract

This virtual guide display device generates, on the basis of sensor output from a direction sensor that detects the direction the virtual guide display device is facing in the world coordinate system and from a position sensor that detects the position of the virtual guide display device in the world coordinate system, a virtual guide that passes through a position specified in a first visual field image viewed from a first viewpoint and extends in the same direction as the first viewpoint. The device superimposes the virtual guide on a second visual field image viewed from a second viewpoint differing from the first viewpoint and displays the result on a display, receives an operation specifying a position on the virtual guide displayed on the display, converts the specified position to a position in the world coordinate system, and outputs the result.

Description

VIRTUAL GUIDE DISPLAY DEVICE, VIRTUAL GUIDE DISPLAY SYSTEM, AND VIRTUAL GUIDE DISPLAY METHOD
The present invention relates to a virtual guide display device, a virtual guide display system, and a virtual guide display method for specifying a position in a three-dimensional space.
In recent years, augmented reality (AR) technology, which adds digital information to the real world and reflects and augments virtual objects created with CG (computer graphics) in the real space, has come into wide use, and virtual guide display devices and virtual guide display systems that can easily handle virtual objects while recognizing the real space in three dimensions have become widespread. As their applications expand to uses such as remote work support, users who are not familiar with such devices have more opportunities than ever to specify positions in a three-dimensional space, and improvement of the position designation operation is desired. For example, when a worker photographs a work site with a camera such as one mounted on a head-mounted display and a support instructor at a remote location gives work instructions by designating positions on the captured image while viewing it on a display or the like, it is difficult for an instructor who is unfamiliar with the apparatus to accurately designate the position of a target object. Likewise, when a user photographs a scene with a camera and, while viewing the captured image, designates the position of a target object on it in order to, for example, inform other nearby users of that position, the position designation operation is troublesome for a user who is not accustomed to the device. Position designation in a three-dimensional space is mainly performed when specifying an object (a physical object or a virtual object) that already exists in the space, or when arranging a virtual object at a designated position.
As a method for specifying a virtual object that already exists in a three-dimensional space, Patent Document 1 describes "an information processing system comprising a plurality of virtual guide display devices communicably connected to a head-mounted display having display means for displaying an image of a virtual space in which virtual object information is shared among the devices, wherein the system acquires the position and orientation of the head-mounted display in the real space, specifies a position in the virtual space based on the acquired position and orientation information, and controls display of an arrow from the head-mounted display toward the specified position (summary excerpt)." That is, by displaying the line of sight of a first user wearing a head-mounted display (HMD) as a virtual object on a second user's HMD, the second user is made to identify the object ahead of the line of sight.
JP 2017-33575 A
In Patent Document 1, the position where the line of sight strikes is designated from one direction. No consideration is given to the case where there is no object at the position to be designated, nor to designating positions nearer or farther in the depth direction than the point the line of sight strikes, so there is the problem that an arbitrary position in the three-dimensional space cannot be designated.
In view of the above problems, the present invention provides a virtual guide display device, a virtual guide display system, and a virtual guide display method that allow an arbitrary position in a three-dimensional space to be designated conveniently, easily, and accurately.
In order to solve the above problems, the present invention has the configuration described in the claims. As one example, the present invention is a virtual guide display device comprising: a direction sensor that detects the direction in the world coordinate system in which the virtual guide display device is facing; a position sensor that detects the position of the virtual guide display device in the world coordinate system; a display; an operation input device that receives an operation designating an arbitrary point on an image displayed on the display; and a processor connected to each of the direction sensor, the position sensor, the display, and the operation input device. Based on the sensor outputs from the direction sensor and the position sensor, the processor generates a virtual guide that passes through a designated position on a first view image seen from a first viewpoint of the virtual guide display device and extends in the same direction as the first viewpoint, superimposes the virtual guide on a second view image seen from a second viewpoint different from the first viewpoint and displays it on the display, receives through the operation input device an operation designating a position on the virtual guide displayed on the display, and converts the designated position into a position in the world coordinate system and outputs it.
According to the present invention, it is possible to provide a virtual guide display device, a virtual guide display system, and a virtual guide display method that allow an arbitrary position in a three-dimensional space to be designated conveniently, easily, and accurately. Problems, configurations, and effects other than those described above will be clarified by the following description of the embodiments.
FIG. 1 is a diagram schematically showing the appearance of a configuration example of a virtual guide display device and a virtual guide display system according to the present embodiment.
FIG. 2 is a diagram explaining a position designation operation on the field-of-view image from the first viewpoint in the embodiment shown in FIG. 1.
FIG. 3 is a diagram showing the screen when a position is designated on the field-of-view image from the second viewpoint in the embodiment shown in FIG. 1.
FIG. 4A is a diagram explaining a position designation operation on the field-of-view image from the second viewpoint in the embodiment shown in FIG. 1.
FIG. 4B is a diagram explaining another example of a position designation operation on the field-of-view image from the second viewpoint in the embodiment shown in FIG. 1.
FIG. 5 is a diagram showing a state in which a position has been designated in the embodiment shown in FIG. 1.
FIG. 6 is an image diagram explaining the case of arranging a virtual object at a designated position in the embodiment shown in FIG. 1.
FIGS. 7A and 7B are image diagrams explaining the case of identifying an object at a designated position in the embodiment shown in FIG. 1.
FIG. 8 is a diagram schematically showing the appearance of another configuration example of a virtual guide display device and a virtual guide display system according to the present embodiment.
FIG. 9 is a flowchart explaining the basic operation of the virtual guide display device and virtual guide display system according to the present embodiment.
FIG. 10 is a block diagram showing a configuration example of the virtual guide display device according to the present embodiment.
Further drawings explain the operation of designating a range instead of a point when designating a position on the field-of-view image in the virtual guide display device according to the present embodiment.
Further drawings explain the position designation operation when arranging a virtual object at a designated position in three-dimensional space in the virtual guide display device according to the present embodiment.
Further drawings explain the case where the field-of-view images seen from the first viewpoint and the second viewpoint are displayed on two screens in the virtual guide display device according to the present embodiment.
FIGS. 14A and 14B are diagrams explaining, in the virtual guide display device according to the present embodiment, the display of tags at intersections between the virtual guide and objects in the field-of-view image, and position designation by tag selection.
Hereinafter, examples of embodiments of the present invention will be described with reference to the drawings. The same components are given the same reference numerals throughout the drawings, and redundant description is omitted.
<First Embodiment>
FIG. 1 is a diagram schematically showing the appearance of a configuration example of a virtual guide display device and a virtual guide display system according to this embodiment. FIGS. 2, 3, 4A, 5, and 4B are diagrams explaining the three-dimensional position designation operation in the embodiment shown in FIG. 1. In the following, an example is described in which a head-mounted display (hereinafter "HMD") 100 worn on the user's head is used as the first virtual guide display device, and a tablet terminal 110 is used as the second virtual guide display device.
In FIG. 1, the HMD 100 includes a camera 104. The angle of view of the camera 104 is regarded as identical to the first viewpoint 101 of the first user 10 wearing the HMD 100. The camera 104 photographs the field-of-view scenery in the three-dimensional space from the first viewpoint 101 to obtain a first field-of-view image seen from the first viewpoint 101. In this embodiment, the first field-of-view image is an image obtained by photographing the actual field-of-view scenery in the real three-dimensional space, but it may instead be an image of a virtual reality space seen from the first viewpoint 101.
When the first user 10 moves from the position of the first viewpoint 101 to the position of the second viewpoint 102 as indicated by the arrow 105, the HMD 100 photographs the field-of-view scenery with the camera 104 from the second viewpoint 102 of the first user 10 to obtain a second field-of-view image seen from the second viewpoint 102.
Meanwhile, the tablet terminal 110 is operated by the second user 20. Through wireless or other communication with the HMD 100, the tablet terminal 110 receives the field-of-view images obtained by the HMD 100 from each of the first viewpoint 101 and the second viewpoint 102, and displays the received field-of-view images on the screen 106 of the tablet terminal 110.
The server 120, which can process and store a large amount of information at high speed, transmits and receives various kinds of information, such as field-of-view images and virtual guide information, to and from each of the HMD 100 and the tablet terminal 110 through wireless or other communication.
In such a situation, in order to place a virtual object at an arbitrary position in the three-dimensional space, or to identify the object at that position, it is necessary to designate an arbitrary position in the three-dimensional space.
For example, to designate the position 130 in the empty three-dimensional space shown in FIG. 1, the HMD 100 first photographs the field-of-view scenery from the first viewpoint 101 with the camera 104 and transmits the captured field-of-view image 140 (see FIG. 2) at the first viewpoint 101 to the tablet terminal 110.
The tablet terminal 110 displays the received field-of-view image 140 on the screen 106. The second user 20 designates, on the displayed field-of-view image 140, the position 141 of the point as seen from the first viewpoint 101. The position 141 on the field-of-view image 140 corresponds to the position 130 in the three-dimensional space.
Further, the tablet terminal 110 generates a virtual object (hereinafter referred to as the "virtual guide 150") that passes through the position 141 designated on the field-of-view image 140 and extends in the same direction as the first viewpoint 101. A point corresponding to the position 130 to be designated in the three-dimensional space exists on the generated virtual guide 150.
Next, after the first user 10 has moved from the first viewpoint 101 to the second viewpoint 102 (movement along the arrow 105), the HMD 100 photographs the field-of-view scenery from the second viewpoint 102 with the camera 104 and transmits the captured field-of-view image 160 (see FIG. 3) at the second viewpoint 102 to the tablet terminal 110.
The tablet terminal 110 displays the field-of-view image 160 as shown in FIG. 3 on the screen 106, and further superimposes the generated virtual guide 150 on the field-of-view image 160.
As shown in FIG. 4A, the second user 20 designates on the tablet terminal 110 the position 151 on the virtual guide 150, which is the position on the field-of-view image 160 at which the position 130 in the three-dimensional space appears when seen from the second viewpoint 102. The position 130 in the three-dimensional space to be designated is thereby identified on the field-of-view image 160 as the position 161 corresponding to the position 151 designated on the virtual guide 150, as shown in FIG. 5. That is, by generating the virtual guide 150 that passes through the designated position 141 (see FIG. 2) on the field-of-view image seen from the first viewpoint 101 and extends in the same direction as the first viewpoint 101, displaying the generated virtual guide 150 on the field-of-view image seen from the second viewpoint 102, and designating the position 151 (see FIG. 4A) on the displayed virtual guide 150, an arbitrary position in the three-dimensional space can be identified.
In summary, the virtual guide 150 is displayed and arranged in space in accordance with the spatial coordinates from the first viewpoint 101, and the final position 130 in the three-dimensional space is determined while viewing the virtual guide 150 fixed in that space from the second viewpoint 102. Note that the virtual guide 150 may be generated not only by the tablet terminal 110 but also by the HMD 100 or by the server 120, which can handle a large amount of information. In that case, the server 120 distributes virtual guide information for displaying the virtual guide 150 to the tablet terminal 110 by wireless or other communication. The virtual guide generation process and the position designation process may also be displayed on the HMD 100 so that the first user 10 can see them.
As an extension of position designation, a position near the virtual guide 150 may also be designated. This is described with reference to FIG. 4B.
When a position 152 off the virtual guide 150 is designated, the distance in the depth direction along the second viewpoint 102 is not determined by the designation alone, so the position 152 is not fixed. For this, a rule can be defined in advance, for example that the position is the one closest to the virtual guide 150, that is, the point of shortest distance between the straight line in the direction of the second viewpoint 102 passing through the position 152 and the straight line indicated by the virtual guide 150, and the depth-direction position is then determined according to that rule.
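As a minimal sketch of this shortest-distance rule (assuming NumPy; the function and variable names are illustrative), the depth can be resolved by finding the point on the viewing ray through the position 152 that is closest to the line of the virtual guide 150:

```python
import numpy as np

def closest_point_on_ray_to_line(ray_origin, ray_dir, line_point, line_dir):
    """Return the point on the viewing ray that is closest to the guide line.

    ray_origin/ray_dir: the second viewpoint O2 and the direction of the
    line of sight through the designated on-screen position (e.g. 152).
    line_point/line_dir: a point O1 on the virtual guide and its direction W1.
    All inputs are 3-vectors in world coordinates."""
    d1 = ray_dir / np.linalg.norm(ray_dir)
    d2 = line_dir / np.linalg.norm(line_dir)
    r = line_point - ray_origin
    a = np.dot(d1, d2)
    denom = 1.0 - a * a
    if np.isclose(denom, 0.0):           # ray parallel to the guide:
        t = np.dot(r, d1)                # fall back to the projection of O1
    else:
        t = (np.dot(r, d1) - a * np.dot(r, d2)) / denom
    return ray_origin + t * d1
```

Depending on the rule adopted, either this point on the viewing ray or the corresponding nearest point on the guide can be taken as the designated three-dimensional position.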
As another way of handling the case where the second user 20 designates a position that is not on the virtual guide 150, the designation may be treated, for example, as designating the position on the virtual guide 150 that is closest to the position 152 on the field-of-view image of the second viewpoint 102.
Alternatively, a plurality of processing rules for when the second user 20 designates a position not on the virtual guide 150 may be defined in advance, and the second user 20 may be allowed to designate which rule to apply. This improves the degree of freedom of position designation and the convenience for the user.
Next, the case of arranging a virtual object at a designated position in the three-dimensional space will be described with reference to FIG. 6. FIG. 6 is an image diagram explaining the case of arranging a virtual object at the position designated in the embodiment shown in FIG. 1.
In FIG. 6, the tablet terminal 110 finalizes the designation of, for example, the position 130 in the three-dimensional space through the above spatial position designation process, and generates a virtual object 170 to be placed at the position 130. The spatial coordinates of the position 130 are bound to the virtual object 170.
Further, the HMD 100 displays the generated virtual object 170 with its spatial coordinates on the field-of-view image 180 (see FIG. 6) seen from, for example, the third viewpoint 103 (see FIG. 1), which is a bird's-eye viewpoint. FIG. 6 shows, as an example of the virtual object 170, a virtual object labeled "This Point" for indicating a destination or meeting place. This allows the first user 10 wearing the HMD 100 to easily see and grasp the destination or meeting place indicated by the second user 20 operating the tablet terminal 110. That is, by arranging the virtual object 170 at the designated position 130 in the three-dimensional space, useful information such as instructions indicated by the virtual object 170 can be conveyed accurately and easily, at the designated position 130, to the first user 10, who is different from the second user 20 who generated the virtual object 170 and placed it at the position 130.
Note that the virtual object may be generated by the server 120 instead of the tablet terminal 110, in which case it is distributed by wireless or other communication to a virtual guide display device capable of displaying the virtual object 170 (the HMD 100 in the above example). Alternatively, only the designation of the spatial coordinates may be performed on the server 120, the spatial coordinate data may be transmitted and received, and the HMD 100 may display the virtual object 170 at the designated spatial coordinate position. The server 120 may also designate the type and display direction of the object for display on the HMD 100.
The case of identifying a physical object or a virtual object by position designation in the three-dimensional space will now be described with reference to FIG. 7A. FIG. 7A is an image diagram explaining the case of identifying an object at a designated position in the embodiment shown in FIG. 1. In FIG. 7A, the tablet terminal 110 designates, for example, the positions 191 and 192 in the three-dimensional space through the above spatial position designation process.
Using the designated position 191, the HMD 100 identifies the tree 193, a physical object at the designated position 191, on the field-of-view image 180 seen from, for example, the third viewpoint 103.
Likewise, using the designated position 192, the HMD 100 identifies the virtual object 194 at the designated position 192 on the field-of-view image 180 seen from, for example, the third viewpoint 103.
FIG. 7A shows an example screen on which a virtual object 194 labeled "This is a memorial tree" is displayed to explain the tree 193. Although FIG. 7A shows the case where the physical object, the tree 193, is identified as seen from the third viewpoint 103, the first user 10 viewing the real space walks on the ground, so it is difficult for the first user 10 to move to the position of the third viewpoint 103 shown in FIG. 7A; in practice, the physical object may be identified as seen from a viewpoint that is feasible in the real space.
In the description of FIG. 7A, when identifying a physical object in the real space, the image from the third viewpoint 103 is merely an illustrative example. However, when images of the real space and distance information are captured and treated as a virtual space, it is easy to move to the third viewpoint 103, so the physical object may be identified from the third viewpoint 103.
As shown in FIG. 7B, in addition to the virtual guide 150 set at the first viewpoint 101, a virtual guide 151a based on the position 151 designated at the second viewpoint 102 may be displayed, and the system may be configured so that a point 162 other than the intersection of the two guides can be designated. In this case, the depth-direction position of the point 162 at the third viewpoint 103 can be determined by setting a rule in advance, for example that it is the position at which the sum of the distances to the virtual guide 150 and the virtual guide 151a is minimized. This technique improves the convenience of position designation, for example by allowing designation of a position offset from the intersection of the plural virtual guides 150 and 151a.
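A rough sketch of this minimum-sum-of-distances rule follows (assuming NumPy and a bounded search range for the ray parameter; the names are illustrative). Since the distance from a point moving along the viewing ray to a fixed line is a convex function of the ray parameter, the sum of distances to the two guides can be minimized by a simple ternary search:

```python
import numpy as np

def point_to_line_distance(p, a, u):
    """Distance from point p to the line through a with unit direction u."""
    v = p - a
    return np.linalg.norm(v - np.dot(v, u) * u)

def resolve_depth_on_ray(o3, d3, guides, t_lo=0.0, t_hi=100.0, iters=80):
    """Find the point on the viewing ray o3 + t*d3 (e.g. through point 162
    from the third viewpoint) minimizing the sum of distances to the guide
    lines, each given as a pair (point, unit direction). The search range
    [t_lo, t_hi] is assumed to cover the drawable depth."""
    d3 = d3 / np.linalg.norm(d3)
    cost = lambda t: sum(point_to_line_distance(o3 + t * d3, a, u)
                         for a, u in guides)
    for _ in range(iters):                   # ternary search on a convex cost
        m1 = t_lo + (t_hi - t_lo) / 3.0
        m2 = t_hi - (t_hi - t_lo) / 3.0
        if cost(m1) < cost(m2):
            t_hi = m2
        else:
            t_lo = m1
    t = 0.5 * (t_lo + t_hi)
    return o3 + t * d3
```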
This also allows the first user 10 wearing the HMD 100 to accurately and easily confirm the physical object or virtual object pointed to by the second user 20 operating the tablet terminal 110. That is, by moving back and forth between the first viewpoint and the second viewpoint, or by recognizing the designated point from the difference in appearance between the first or second viewpoint and an n-th viewpoint, and thereby identifying the position in the three-dimensional space, a physical object or virtual object at a specific position can be indicated accurately and easily to a user viewing it.
According to the examples described with reference to FIGS. 6, 7A, and 7B, a worker corresponding to the first user 10 photographs a work site or the like with the camera 104 mounted on the HMD, and a support instructor corresponding to the second user 20 at a remote location away from the work site can easily designate a desired position in the three-dimensional space while viewing the camera image of the work site on the screen 106 of the tablet terminal 110 or the like. Information such as work instructions can therefore be given to the worker by a virtual object placed at the designated position, or an object at the designated position can be identified and pointed out to the worker. Thus, a support instructor who is unfamiliar with the apparatus can accurately and conveniently provide support such as instructions to a remote worker.
Next, another configuration example of the virtual guide display device and virtual guide display system according to this embodiment will be described with reference to FIG. 8. FIG. 8 is a diagram schematically showing the appearance of another configuration example of the virtual guide display device and virtual guide display system according to this embodiment. The difference from the configuration example shown in FIG. 1 is that the tablet terminal 110 includes a camera 801 and uses the camera 801 to photograph the field-of-view scenery from the first viewpoint 101 and the second viewpoint 102. FIGS. 2 to 7B are reused for the explanation of position designation.
In FIG. 8, the tablet terminal 110 includes the camera 801, which photographs the field-of-view scenery. The camera 801 photographs the field-of-view scenery from the first viewpoint 101 of the second user 20, the captured field-of-view image 140 is displayed on the screen 106, and the second user 20 designates the position 141 (see FIG. 2) on the displayed field-of-view image 140.
Further, the tablet terminal 110 generates the virtual guide 150 (see FIG. 3) that passes through the position 141 designated on the field-of-view image 140 and extends in the same direction as the first viewpoint 101.
When the second user 20 then moves from the position of the first viewpoint 101 to the position of the second viewpoint 102 as indicated by the arrow 105, the tablet terminal 110 photographs the field-of-view scenery from the second viewpoint 102 of the second user 20 with the camera 801 and displays the captured field-of-view image 160 (see FIG. 3) on the screen 106.
The second user 20 designates, on the field-of-view image 160, the position 151, which lies on the virtual guide 150 displayed on the field-of-view image 160 and is the position at which the position 130 in the three-dimensional space appears when seen from the second viewpoint 102. The position 130 in the three-dimensional space to be designated can thereby be designated as the position 161 on the field-of-view image 160.
This makes it possible for a user who is not familiar with the device to easily designate a position in the three-dimensional space while photographing with the camera and viewing the captured image. Thus, a virtual object placed at a designated position in the three-dimensional space can be shown to other users sharing the space on site or remotely in order to convey the information indicated by the virtual object, or an object at a designated position in the three-dimensional space can be pointed out to them accurately and easily. The above operation may also be performed using the HMD 100 instead of the tablet terminal 110; in that case, the camera and input device mounted on the HMD 100 are used.
In FIGS. 1 and 8, the HMD 100 and the tablet terminal 110 have been described as examples of the first and second virtual guide display devices, but any device having the corresponding functions, such as a smartphone or a personal computer, may be used instead.
In the above examples, field-of-view images of the current scenery are used for both the first viewpoint 101 and the second viewpoint 102. However, a field-of-view image captured in the past may be used for the first viewpoint 101 to designate a position and generate and display the virtual guide, and the position on the virtual guide may then be designated from the current viewpoint as the second viewpoint 102. In this case, a coordinate position in the three-dimensional space can be designated without moving to change the viewpoint.
Similarly, the position may be designated using a field-of-view image captured at the current viewpoint as the first viewpoint, the virtual guide generated and displayed, and the position on the virtual guide designated using a field-of-view image captured in the past as the second viewpoint. That is, past field-of-view images may be used in any appropriate combination as the first and second viewpoints.
Past field-of-view images may also be used for both the first and second viewpoints. It goes without saying that such past field-of-view images must be associated with spatial coordinate data. Satellite photographs or map data may also be used, provided they have spatial coordinate data. Image data held on the server 120, which can store a large amount of information data 1024, may be used as past field-of-view images; in this case, the storage load on the first and second virtual guide display devices can be greatly reduced.
The server 120 may also execute not only processing such as holding past field-of-view images and virtual objects, but also part of the other operational processing, such as position designation in the three-dimensional space.
FIG. 9 is an example of a flowchart explaining the basic operation of the virtual guide display device and virtual guide display system according to this embodiment.
In FIG. 9, a field-of-view image at the first viewpoint is obtained by the first virtual guide display device (S901). When a position is designated on the obtained field-of-view image at the first viewpoint (S902), a virtual guide extending in the same direction as the first viewpoint through the designated position is generated (S903). An example algorithm for generating and displaying the virtual guide is described above.
Next, the first virtual guide display device transmits to the second virtual guide display device virtual guide information including shape information of the virtual guide, position information indicating its position in the world coordinate system, and orientation information indicating its orientation in the world coordinate system. The second virtual guide display device displays the virtual guide on the field-of-view image at the second viewpoint (S904), and a position is designated on the displayed virtual guide (S905).
Through the above process 900 of designating the position from two viewpoints, the position in the three-dimensional space can be specified. The second virtual guide display device converts the specified position into a position in the world coordinate system and outputs it. This output may be displayed on the display 1034 of the second virtual guide display device, or it may be output, inside the processor 1010 (FIG. 10) of the second virtual guide display device, from the virtual guide generation and display processing unit 1012 (FIG. 10) to the virtual object generation processing unit 1013 or other processing units that use the specified position, for example to place a virtual object at that position. Processing then proceeds to the use of the output specified position (S906).
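Before turning to examples of how the specified position is used, the following is a minimal sketch tying the flowchart steps S903 to S906 to equations (1) to (4) above (assuming NumPy; the data structure and function names are illustrative assumptions, not an API defined by this description):

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class VirtualGuideInfo:
    """Guide information exchanged between the devices (S903 -> S904):
    the guide's world-coordinate position and orientation."""
    origin_world: np.ndarray     # point the guide passes through (world)
    direction_world: np.ndarray  # direction of the guide (world)

def step_s903_generate_guide(O1, R1, N1):
    """S903: build the guide from the first viewpoint (equation (2))."""
    return VirtualGuideInfo(origin_world=np.asarray(O1),
                            direction_world=np.asarray(R1) @ np.asarray(N1))

def step_s904_project_guide(guide, O2, R2, ks):
    """S904: sample the guide in the second local frame for drawing
    (equation (4)); ks is the range of the parameter k to draw."""
    return [np.asarray(R2).T @ (guide.origin_world - O2
                                + k * guide.direction_world)
            for k in ks]

def step_s905_s906_output(guide, k_selected):
    """S905-S906: the position chosen on the guide, output in world
    coordinates for subsequent use (equation (3))."""
    return guide.origin_world + k_selected * guide.direction_world
```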
Examples of such utilization processes include placing a virtual object at the specified position, using the specified position to designate a physical object or virtual object located there, and transmitting the coordinate information of the specified position to another terminal for use.
For example, when a virtual object is placed at the specified position, a virtual object representing information such as an instruction is placed, and useful information such as the instruction can be given to a user who can visually recognize the virtual object in the three-dimensional space.
Also, for example, when designating the position of an object at the specified position, a physical object or virtual object at the specified position is designated and can be pointed out to a user viewing it. If position designation in the three-dimensional space is then to be repeated, processing returns to S901 (S909: Yes).
On the other hand, if position designation in the three-dimensional space is no longer required (S909: No), the series of processes ends.
Through the above operation, after the virtual guide is generated in accordance with the spatial coordinates from the first viewpoint, the final position can be designated by viewing the generated virtual guide from the second viewpoint, and an arbitrary position in the three-dimensional space can be specified accurately and easily.
Furthermore, for a user wearing a virtual guide display device on which the specified position in the three-dimensional space can be displayed and viewed, useful information can be presented by a virtual object placed at the specified position, or an object at the specified position can be pointed out to the user, realizing convenient instruction support.
Next, a configuration example of the virtual guide display device according to this embodiment will be described with reference to FIG. 10. FIG. 10 is a functional block diagram of a configuration example of the first and second virtual guide display devices according to this embodiment.
In FIG. 10, the first virtual guide display device (HMD 100) and the second virtual guide display device (tablet terminal 110) are configured using, as appropriate, a processor 1010, a memory 1020, a camera 104 (camera 801 in the tablet terminal 110), a left-eye line-of-sight detection sensor 1032, a right-eye line-of-sight detection sensor 1033, a display 1034, an operation input device 1035, a microphone 1036, a speaker 1037, a vibrator 1038, a communication I/F (communication device) 1039, a sensor group 1040, and the like, with the components interconnected via a bus 1050.
The processor 1010 is composed of a CPU, ROM, RAM, and the like, and constitutes the controller of the first virtual guide display device (HMD 100) and the second virtual guide display device (tablet terminal 110).
The processor 1010 executes processing in accordance with an operating system (OS) 1022 stored in the memory 1020 as a control program 1021 and an application program 1023 for operation control, thereby controlling each functional unit in the processor 1010 and realizing the functions of the OS, middleware, applications, and other functions.
The functional units configured by execution on the processor 1010 are a position designation processing unit 1011, a virtual guide generation and display processing unit 1012, and a virtual object generation processing unit 1013.
The memory 1020 is composed of a nonvolatile storage device or the like, and stores the various programs 1021 and information data 1024 handled by the processor 1010 and the like. The stored information data 1024 includes coordinate position information 1025 indicating spatial coordinate positions such as designated positions, virtual guide information 1026 required for generating and displaying the virtual guide, virtual object information 1027 representing virtual objects, and field-of-view image information 1028 of the photographed scenery including physical objects.
The cameras 104 and 801 photograph the field of view around the front, converting light incident through the lens into electrical signals with an image sensor to acquire field-of-view images. In the optical see-through HMD 100, the first user 10 views real objects in the front field of view directly with the eyes while the camera 104 photographs that field of view to obtain the field-of-view image.
The left-eye line-of-sight detection sensor 1032 and the right-eye line-of-sight detection sensor 1033 detect the line of sight by capturing the movement and orientation of the left and right eyes, respectively. Well-known techniques generally used for eye tracking can be employed for this detection. For example, as a technique using corneal reflection, there is a known technique in which the face is irradiated with an infrared LED (Light Emitting Diode) and photographed with an infrared camera, the position on the cornea of the reflected light produced by the infrared LED irradiation (corneal reflection) is used as a reference point, and the line of sight is detected based on the position of the pupil relative to the position of the corneal reflection. There is also a known method of photographing the eye with a visible-light camera and detecting the line of sight based on the position of the iris relative to the inner corner of the eye, with the inner corner as the reference point and the iris as the moving point.
In the tablet terminal 110, the display 1034 includes the screen 106, configured with a liquid crystal panel, and displays field-of-view images and notification information for the user, such as alarms, on the screen 106.
The operation input device 1035 may be, for example, a capacitive touch pad laminated on the screen 106. The touch pad detects approach or contact operations (touch operations) by a finger, a touch pen, or the like. The user can easily designate a position by touching the desired position on the displayed image.
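As a rough sketch of how such a touch position can be converted into the designated direction in the local coordinate system, that is, a candidate for the direction vector N1 of equation (2) (assuming a simple pinhole camera model with illustrative intrinsic parameters):

```python
import numpy as np

def touch_to_local_direction(u, v, fx, fy, cx, cy):
    """Convert a touch position (u, v) in screen pixels into a direction
    vector in the viewpoint's local coordinate system, assuming a pinhole
    camera with focal lengths (fx, fy) and principal point (cx, cy).
    The returned unit vector can serve as N1 in equation (2)."""
    n = np.array([(u - cx) / fx, (v - cy) / fy, 1.0])  # camera looks along +z
    return n / np.linalg.norm(n)
```

Multiplying the returned vector by the rotation matrix R1 of the first local coordinate system then gives the world direction W1 of the reference line, as in equation (2).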
In the optical see-through HMD 100, the display 1034 may be configured using, for example, a projector that projects virtual objects and notification information for the user, together with a transparent half-mirror that forms and displays the projected virtual objects and the like before the eyes. This allows the user to view the formed virtual object as if it were floating, together with the real objects in the field of view before the eyes.
When the HMD 100 is a video see-through type, it is configured using a display 1034, such as a liquid crystal panel, that displays together the real objects in front of the eyes photographed by the camera 104 and the virtual objects and the like. This allows the user, using the HMD 100, to view the real objects in the field of view before the eyes overlaid with the virtual objects and the like.
The operation input device 1035 may also be, for example, a keyboard, key buttons, or touch keys. In the HMD 100, the operation input device 1035 may be provided at a position and in a form in which the first user 10 can easily perform input operations, or it may be separated from the main body of the HMD 100 and connected by wire or wirelessly.
An input operation screen may also be displayed on the screen 106 of the display 1034, and input operation information may be captured from the position on the input operation screen toward which the line of sight detected by the left-eye line-of-sight detection sensor 1032 and the right-eye line-of-sight detection sensor 1033 is directed; alternatively, a pointer may be displayed on the input operation screen and operated with the operation input device 1035 to capture the input operation information.
The operation input device 1035 may also capture input operation information by having the user utter a voice indicating the input operation and collecting the sound with the microphone 1036.
The microphone 1036 collects voices from the outside and the user's own utterances and converts them into voice data. Instruction information uttered by the user can thus be taken into the virtual guide display device, and operations corresponding to the instruction information can be executed conveniently.
 The speaker 1037 outputs sound based on audio data, allowing notification information to be conveyed to the user by voice. The speaker 1037 can be replaced with headphones.
 The vibrator 1038 generates vibration under the control of the processor 1010, converting notification information issued by the virtual guide display device into vibration. In particular, on the HMD 100, the vibrator 1038 vibrates against the user's head, to which it is closely attached, so notifications reach the user reliably. Examples of notification information include notifications of position designation at the first viewpoint 101, generation and display of the virtual guide 150, placement of the virtual object 170, and final identification of the position in three-dimensional space, together with their contents; such notifications improve usability.
 The communication I/F 1039 is a communication interface that performs wireless communication with other nearby information terminals via short-range wireless communication, wireless LAN, base station communication, or the like, and includes communication processing circuits and antennas corresponding to the various communication interfaces. The communication I/F 1039 performs wireless communication between the first and second virtual guide display devices and with the server 120. Short-range wireless communication may use Bluetooth (registered trademark), IrDA (Infrared Data Association, registered trademark), Zigbee (registered trademark), HomeRF (Home Radio Frequency, registered trademark), or a wireless LAN such as Wi-Fi (registered trademark). Base station communication may use long-distance wireless communication such as W-CDMA (Wideband Code Division Multiple Access, registered trademark) or GSM (Global System for Mobile Communications). Although not shown, the communication I/F 1039 may use other wireless communication methods such as optical communication or acoustic communication; in that case, a light emitter and receiver, or a speaker and microphone, are used in place of the transmitting/receiving antenna. When handling high-definition video and the like, the amount of data grows dramatically; in such cases, using a high-speed, high-capacity network such as 5G (5th Generation mobile communication system) or local 5G for wireless communication can dramatically improve usability.
 The distance measuring sensor 1041 measures the distance between the virtual guide display device and real objects in the outside world. It may be a TOF (Time of Flight) sensor, a stereo camera, or another type. From three-dimensional data created using the distance measuring sensor 1041, a virtual space in which real objects are virtually arranged can be constructed, and virtual objects can be placed in that virtual space by designating three-dimensional coordinates.
 The acceleration sensor 1042 detects acceleration, the change in velocity per unit time, and can capture movement, vibration, impact, and the like.
 The gyro sensor 1043 detects angular velocity about the rotation axes and can capture vertical, horizontal, and diagonal postures.
 The acceleration sensor 1042 and gyro sensor 1043 mounted on the HMD 100 can be used to detect the posture and movement of the head of the first user 10.
 The geomagnetic sensor 1044 detects the earth's magnetic field and thus the direction in which the virtual guide display device is facing. Using a three-axis type that detects geomagnetism in the vertical direction as well as the front-back and left-right directions, movement of the virtual guide display device can also be detected by capturing changes in geomagnetism as the device moves. The gyro sensor 1043 and geomagnetic sensor 1044 function as direction sensors that detect the orientation of the HMD 100 in the world coordinate system.
 Using the outputs of these sensors, the posture and movement of the virtual guide display device can be detected in detail, which helps confirm positions in three-dimensional space.
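 As one concrete illustration of how such sensor outputs can be combined into a direction estimate, the following sketch computes a tilt-compensated compass heading from the acceleration sensor 1042 and the geomagnetic sensor 1044. This is a generic technique sketched under assumed axis conventions, not an implementation prescribed by this disclosure; sign and axis choices vary between devices.

    import numpy as np

    def tilt_compensated_heading(accel, mag):
        """Tilt-compensated compass heading from accelerometer and magnetometer.

        accel, mag: 3-vectors in the device frame (x forward, y right, z down assumed).
        Returns the heading in radians relative to magnetic north; the sign and
        axis conventions here are assumptions, so adapt them to the actual device.
        """
        ax, ay, az = accel / np.linalg.norm(accel)
        roll = np.arctan2(ay, az)                  # rotation about the x axis
        pitch = np.arctan2(-ax, np.hypot(ay, az))  # rotation about the y axis
        mx, my, mz = mag
        # Project the magnetic field onto the horizontal plane.
        xh = (mx * np.cos(pitch) + my * np.sin(roll) * np.sin(pitch)
              + mz * np.cos(roll) * np.sin(pitch))
        yh = my * np.cos(roll) - mz * np.sin(roll)
        return np.arctan2(-yh, xh)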
 The GPS sensor 1045 receives signals from GPS (Global Positioning System) satellites and detects the current position of the virtual guide display device, making it possible to determine the position of a moving viewpoint. The GPS sensor 1045 is a position sensor that detects position in the world coordinate system.
 The position designation processing unit 1011 performs processing for designating a position on the field-of-view image displayed on the display 1034 using the operation input device 1035. On the field-of-view image seen from the first viewpoint 101, the user designates the position of a desired point or range; on the field-of-view image seen from the second viewpoint 102, the user designates a desired position on the displayed virtual guide 150. A position 152 near the virtual guide may also be designated, using the virtual guide 150 as a reference, rather than only positions on the guide itself.
 The virtual guide generation and display processing unit 1012 generates a virtual guide 150 that passes through the point 141 or range designated on the field-of-view image at the first viewpoint 101 and extends in the same direction as the first viewpoint 101. It also performs the processing of displaying the virtual guide 150 on the field-of-view image at the second viewpoint 102.
 The virtual object generation processing unit 1013 generates a virtual object 170, an object in a virtual space distinct from the real space. A virtual object 170 generated by the external server 120 may instead be imported into the virtual guide display device via wireless communication.
 With the above configuration, in the tablet terminal 110, the position designation processing unit 1011 designates a position 141 on the field-of-view image 140 seen from the first viewpoint 101, and the virtual guide generation and display processing unit 1012 generates a virtual guide 150 that passes through the designated position 141 and extends in the same direction as the first viewpoint 101.
 The virtual guide generation and display processing unit 1012 then displays the generated virtual guide 150 on the field-of-view image 160 seen from the second viewpoint 102, and the position designation processing unit 1011 designates a position on the displayed virtual guide 150. Thus, the virtual guide 150 is generated in spatial coordinates at the first viewpoint 101, and the final position in three-dimensional space is identified while viewing the guide, fixed in that space, from the second viewpoint 102. In other words, by designating a spatial position from two different viewpoints with the aid of the virtual guide 150, an arbitrary position in three-dimensional space can be determined accurately, easily, and conveniently.
 The field-of-view images 140 and 160 seen from the first viewpoint 101 and the second viewpoint 102 may be images captured by the camera 104 of the HMD 100 or by the camera 801 of the tablet terminal 110.
 Furthermore, by placing the virtual object 170 generated by the virtual object generation processing unit 1013 at the identified coordinate position in three-dimensional space and displaying the virtual object with spatial coordinates on the displays 1034 of the first and second virtual guide display devices, the users operating those devices can be informed of the useful information represented by the virtual object 170 at the appropriate spatial coordinate position.
 Also, by designating the identified three-dimensional position on the displays 1034 of the first and second virtual guide display devices, the real object or virtual object at the designated position can be pointed out to the user, making accurate object indication easy.
 In a virtual guide display system having the first and second virtual guide display devices, the HMD 100 captures the fields of view seen from the first viewpoint 101 and the second viewpoint 102 with the camera 104 and transmits the captured first and second field-of-view images to the tablet terminal 110.
 Alternatively, the tablet terminal 110 captures the fields of view seen from the first viewpoint 101 and the second viewpoint 102 with the camera 801 and acquires the captured first and second field-of-view images itself.
 In the tablet terminal 110, based on the first field-of-view image transmitted from the HMD 100 or acquired by the tablet terminal 110 itself, the virtual guide generation and display processing unit 1012 generates a virtual guide 150 that passes through the position 141 designated by the position designation processing unit 1011 on the first field-of-view image seen from the first viewpoint 101 and extends in the same direction as the first viewpoint 101.
 Thereafter, in the tablet terminal 110, based on the second field-of-view image transmitted from the HMD 100 or acquired by the tablet terminal 110 itself, the virtual guide generation and display processing unit 1012 displays the generated virtual guide 150 on the second field-of-view image seen from the second viewpoint 102, and the position designation processing unit 1011 designates a position 151 on the displayed virtual guide 150. The second virtual guide display device can thus determine an arbitrary position in three-dimensional space accurately, easily, and conveniently.
 Furthermore, in the tablet terminal 110, the virtual object generation processing unit 1013 generates a virtual object 170 at the position 151 designated on the virtual guide 150, and the generated virtual object is transmitted to the HMD 100 together with its spatial coordinates; the HMD 100 receives the virtual object with spatial coordinates transmitted from the tablet terminal 110 and displays it on its field-of-view image according to those spatial coordinates.
 Thus, the user holding the tablet terminal 110 can inform the user wearing the HMD 100 of the useful information represented by the virtual object at a suitable spatial coordinate position.
 The tablet terminal 110 also transmits to the HMD 100 spatial coordinate position information indicating the position 151 designated on the virtual guide 150 by the position designation processing unit 1011. The HMD 100 receives the spatial coordinate position information and displays the indicated position on its field-of-view image. Thus, the user holding the tablet terminal 110 can point out to the user wearing the HMD 100 a real object or virtual object at a designated three-dimensional position.
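 As a concrete illustration of the exchange described above, the sketch below shows one possible wire format for the two message types: a virtual object with spatial coordinates, and a bare spatial coordinate position. The field names and the JSON encoding are assumptions introduced for illustration; this disclosure does not prescribe a message format.

    import json
    from dataclasses import dataclass, asdict

    @dataclass
    class GuidePositionMessage:
        """Spatial coordinate position designated on the virtual guide (e.g. position 151)."""
        world_position: tuple  # (x, y, z) in the shared world coordinate system

    @dataclass
    class VirtualObjectMessage:
        """Virtual object with spatial coordinates, sent from the tablet to the HMD."""
        object_id: str
        world_position: tuple  # placement in the shared world coordinate system
        model_uri: str         # reference to the object's shape data

    def encode(msg) -> bytes:
        """Serialize a message for the communication I/F (JSON here, by assumption)."""
        return json.dumps({"type": type(msg).__name__, **asdict(msg)}).encode()

    # Example: the tablet announces a new-car object at the identified position.
    payload = encode(VirtualObjectMessage("car-01", (1.2, 0.0, 3.4), "models/new_car.glb"))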
<Second embodiment>
 The second embodiment designates a range, rather than a point, when specifying a position on the field-of-view image in the virtual guide display device. FIGS. 11A to 11G illustrate the operation of designating a range instead of a point when specifying positions on the field-of-view images seen from the first viewpoint 101 and the second viewpoint 102. In FIGS. 11A to 11G, parts bearing the same reference numerals as in FIGS. 2 to 7 operate as already described for FIGS. 2 to 7, so their detailed description is omitted.
 As illustrated in FIG. 11A, the position designation processing unit 1011 designates a range 1101 on the field-of-view image 140. When the range 1101 is designated, a virtual guide 1102 is generated as a three-dimensional object having the range 1101 as its cross section and is displayed on the field-of-view image 160, as shown in FIG. 11B.
 When the position designation processing unit 1011 acquires a position designation operation on the virtual guide 1102, a position in three-dimensional space can be identified together with a range. This makes it possible, for example, to designate a range matching the extent of a spatially extended virtual object when placing it, or to designate the range of a real or virtual object in order to point at that object.
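 One way to realize such a range-based guide is to sweep the designated 2D outline along the viewing direction, so that the solid's projected cross section always matches the designated range 1101. The sketch below does this under the assumption that the outline is given in normalized image-plane coordinates at the first viewpoint; the function name and geometry representation are illustrative only.

    import numpy as np

    def extrude_range(outline_local, R1, O1, near=0.5, far=20.0):
        """Sweep a 2D outline picked at viewpoint 1 into a guide solid.

        outline_local: (N, 2) outline points in normalized image-plane
                       coordinates (x/z, y/z) of the first local frame
        R1, O1:        orientation matrix and world position of the first viewpoint
        Returns the near and far vertex rings of the swept solid in world
        coordinates; connecting matching vertices gives the side faces.
        """
        outline_local = np.asarray(outline_local, dtype=float)
        rings = []
        for depth in (near, far):
            # Place the outline at the given depth along the local viewing (z) axis,
            # then map each local point U to world coordinates: X = O1 + R1 U.
            local = np.column_stack([outline_local * depth,
                                     np.full(len(outline_local), depth)])
            rings.append(O1 + local @ R1.T)
        return rings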
 Here, as shown in FIG. 11C, rendering the virtual guide 1102 translucent allows objects hidden behind it to be seen, so the background of the virtual guide 1102 can be checked, improving convenience.
 Also, as shown in FIG. 11D, when a position on the field-of-view image 160 at the second viewpoint 102 is designated by touching it with a finger (pointer) or the like via the position designation processing unit 1011, a ring object 1103 outlining the vicinity of the designated position may be displayed, allowing the identified position to be perceived stereoscopically.
 The selected range may also turn out to be irregular. That is, not only may the cross section cut out with the shape designated at the first viewpoint 101 itself be irregular, but the second viewpoint 102 side may also designate a range with its own shape different from that at the first viewpoint 101, and combining it with the range seen from the first viewpoint 101 side may yield a solid that is irregular in three dimensions. In such cases, a sphere object 1106 containing the selected range may be displayed as the designated range.
 For example, as shown in FIG. 11E, when the virtual guide 1104 has an irregular range 1105 and the position designation processing unit 1011 acquires an operation designating the irregular range 1105 on the field-of-view image 160 at the second viewpoint 102, a sphere object 1106 enclosing the three-dimensional irregular range 1105 is designated as the range, as shown in FIG. 11F.
 As a result, as shown in FIG. 11G, the sphere object 1106 containing the selected range 1105 can be displayed as the designated range, and even when the selected range is irregular, a position in three-dimensional space can be identified accurately and conveniently.
 The designated region need not be the sphere object 1106; any closed three-dimensional region will do, including objects of other geometric shapes such as rectangular parallelepipeds.
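 A simple way to obtain such an enclosing sphere is to take the centroid of the irregular region's sample points as the center and the distance to the farthest sample as the radius. The sketch below follows that assumption; it yields a valid enclosing sphere, though not necessarily the smallest one, and a bounding-box variant would be analogous.

    import numpy as np

    def enclosing_sphere(points):
        """Return (center, radius) of a sphere containing all sample points.

        points: (N, 3) world-space samples of the irregular designated region 1105.
        Centroid plus maximum distance gives a valid enclosing sphere; an exact
        minimal enclosing sphere would need e.g. Welzl's algorithm.
        """
        points = np.asarray(points, dtype=float)
        center = points.mean(axis=0)
        radius = np.linalg.norm(points - center, axis=1).max()
        return center, radius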
 Next, an operation example of placing a virtual object with the virtual guide display device according to this embodiment will be described. FIG. 12 illustrates the position designation operation when placing a virtual object at a designated position in three-dimensional space. In FIG. 12, parts bearing the same reference numerals as in FIGS. 2 to 7 operate as already described for FIGS. 2 to 7, so their detailed description is omitted. As an example, the case of placing a virtual object representing a new car for advertising purposes at a designated position in three-dimensional space is described.
 When a position is designated on the field-of-view image 140 at the first viewpoint 101, a virtual object 1201 in the shape of the new car as seen from the first viewpoint 101 is displayed, rather than a point, as shown in FIG. 12A.
 Further, as shown in FIG. 12B, on the field-of-view image 160 at the second viewpoint 102, a virtual guide 1202 whose cross section is the new-car shape of the virtual object 1201 is generated and displayed extending in the same direction as the first viewpoint 101. When the user designates a position 1203 on the displayed virtual guide 1202, the position designation processing unit 1011 acquires the designation operation.
 FIGS. 12C and 12D show the field-of-view images when the virtual object 1204, placed at the identified three-dimensional position and having a cross-sectional shape representing the new car, is seen from the second viewpoint 102 and the third viewpoint 103. This makes it possible to identify the position in three-dimensional space while viewing the virtual object 1204 to be placed, greatly improving usability for the user when designating positions. That is, by displaying a shape object and a virtual guide reflecting the shape and size of the virtual object 1204 to be placed during position designation at the first viewpoint 101 and the second viewpoint 102, further convenience is obtained.
 Next, the case where the virtual guide display device according to this embodiment displays the field-of-view images 140 and 160 seen from the first viewpoint 101 and the second viewpoint 102 on two screens will be described. FIGS. 13A and 13B show the field-of-view image 140 seen from the first viewpoint 101 displayed in part of the screen 106 (a screen area smaller than that of the field-of-view image 160), in addition to the field-of-view image 160 seen from the second viewpoint 102. In FIGS. 13A and 13B, parts bearing the same reference numerals as in FIGS. 2 to 7 operate as already described for FIGS. 2 to 7, so their detailed description is omitted.
 FIG. 13A shows a state in which, when a position is designated on the field-of-view image 160 seen from the second viewpoint 102, the field-of-view image 140 seen from the first viewpoint 101 is displayed in reduced form in the upper right of the screen 106, in addition to the field-of-view image 160.
 In FIG. 13B, the field-of-view image 160 seen from the second viewpoint 102 is displayed on the left side of the screen 106, and the field-of-view image 140 seen from the first viewpoint 101 on the right side.
 Displaying both field-of-view images 140 and 160 seen from the first viewpoint 101 and the second viewpoint 102 on two screens in this way allows the user, when designating a position on the field-of-view image 160 at the second viewpoint 102, to also see the state in which the position was previously designated at the first viewpoint 101, improving ease of use.
 In the two-screen display of the field-of-view images 140 and 160, the vicinity of the position designated at the first viewpoint 101 may be displayed enlarged, or a transition video from the first viewpoint 101 to the second viewpoint 102 may be shown. If playback controls such as repeat are provided for the transition video, the state of the transition becomes easier to grasp, which is effective for improving the accuracy of position designation.
 The state designated at the second viewpoint 102 may also be composited into and reflected in the field-of-view image 140 of the first viewpoint 101. For example, if the device is set to display the actual virtual object at the designated coordinates when a position is designated at the second viewpoint 102, an image of the virtual object as seen from the angle of the first viewpoint 101 is also composited into the field-of-view image 140 of the first viewpoint 101. This makes it possible to check how things look from the first viewpoint 101 without going back to the location of the first viewpoint 101.
 As for how to display the virtual guide, if the portions of the virtual guide 150 hidden behind real objects as seen from the current viewpoint are not drawn (so-called occlusion processing), the depth relationship with real objects becomes easier to understand and position designation becomes easier. However, when real objects are dense, the virtual guide 150 is hard to see, and when designating a point position at the second viewpoint 102, a line-shaped virtual guide may be too thin to see clearly. To address this, the virtual guide 150 may be displayed thicker, blinked, or drawn in front of the real objects occluding it, and these measures may be combined.
 These may also be combined with the original display of the virtual guide 150. For example, the original display of the virtual guide 150 and the display drawn in front of real objects may be alternated, or a thick display and the original thin display may be combined.
 Field-of-view images from other cameras may also be used. If there is another camera that shares spatial coordinate data with the camera 104 that captured the image from the first viewpoint 101, the image captured by that camera may be used as the field-of-view image 160 at the second viewpoint 102 to designate the spatial coordinate position, instead of the photographer moving to the second viewpoint 102. Examples of other cameras include surveillance cameras and cameras used by other people on site. Needless to say, the field-of-view image must be associated with spatial coordinate data. Satellite photographs or map data may also be used, provided they carry spatial coordinate data.
 Furthermore, the method may be applied to a virtual space based on already acquired three-dimensional data. For example, from three-dimensional data created using the distance measuring sensor 1041, a virtual space in which real objects are virtually arranged can be constructed, and virtual objects can be placed in that virtual space by designating three-dimensional coordinates.
 When designating a position within a virtual space, the second viewpoint 102 may be obtained by rotating the virtual space. That is, entirely within the virtual space, by rotating the three-dimensional data of the objects, the viewpoint can be changed and the desired position identified without moving from the first viewpoint 101 to the second viewpoint 102.
 When designating a real object, marks may be used to identify designation candidates. For example, as shown in FIG. 14A, when multiple real objects 1401, 1402, 1403, 1404, and 1405 lie on the virtual guide 1400, tags serving as marks are displayed at the intersections of the virtual guide 1400 with the real objects, as shown in FIG. 14B, and selecting one of the tags 1411, 1412, 1413, 1414, and 1415 designates the tagged real object. Thus, even when multiple real objects are densely clustered on the virtual guide 1400 and the target object is hard to single out, the position can be designated by a tag selection operation, and the desired real object can be pointed out accurately and easily.
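 As an illustration of how such tags could be generated, the sketch below intersects the guide's reference line with axis-aligned bounding boxes of candidate objects using a standard slab test, and orders the hits by distance along the guide. The box representation and the ordering are assumptions for illustration; this disclosure leaves the intersection method open.

    import numpy as np

    def ray_hits_box(origin, direction, box_min, box_max):
        """Slab test: return the entry distance k where the guide ray enters the AABB, else None."""
        d = np.where(direction == 0, 1e-12, direction)  # avoid division by zero
        t1 = (box_min - origin) / d
        t2 = (box_max - origin) / d
        t_near = np.minimum(t1, t2).max()
        t_far = np.maximum(t1, t2).min()
        return t_near if t_near <= t_far and t_far >= 0 else None

    def place_tags(origin, direction, objects):
        """Return (object_id, world_point) tag anchors ordered along the guide.

        objects: dict mapping object id to (box_min, box_max) world-space corners.
        """
        hits = []
        for obj_id, (bmin, bmax) in objects.items():
            k = ray_hits_box(origin, direction, np.asarray(bmin), np.asarray(bmax))
            if k is not None:
                hits.append((k, obj_id, origin + k * direction))
        return [(obj_id, pt) for _, obj_id, pt in sorted(hits, key=lambda h: h[0])]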
<Algorithm for setting the virtual guide>
 A method of setting the virtual guide when the viewpoint changes will now be described. First, regarding coordinate systems: the coordinate system of the target space in which the virtual guide is set is taken as the world coordinate system. The target space may be a real space or a virtual space. A local coordinate system is then considered with each viewpoint as its coordinate origin. A viewpoint may or may not coincide with the actual position of the virtual guide display device; in either case, the virtual guide display device is assumed to display the image seen from the viewpoint serving as the reference for its display.
 The orientation of a local coordinate system with respect to the world coordinate system is represented by a rotation matrix R: rotating the world coordinate axes by R yields the axes of the local coordinate system. The origin of the local coordinate system, i.e. the viewpoint, is denoted O, a position in world coordinates. If U is the position in the local coordinate system corresponding to the world-coordinate position X, the relationship is given by equation (1):
  U = R⁻¹(X − O) … (1)
 The rotation matrix R and the local origin O are updated as the viewpoint moves after initial setup.
 Next, a concrete expression in the world coordinate system is written down for the reference line on which the virtual guide is drawn (hereinafter simply the reference line), taking FIGS. 2 and 3 as an example. First, the direction of the virtual guide seen from the first viewpoint in FIG. 2 is determined by designating the point 141. That direction is obtained as a direction vector N₁ in the local coordinate system corresponding to the first viewpoint (hereinafter the first local coordinate system). If R₁ is the rotation matrix representing the orientation of the first local coordinate system at this time, the direction vector W₁ of the reference line in the world coordinate system is given by equation (2):
  W₁ = R₁N₁ … (2)
 Then, if O₁ is the world coordinate of the first viewpoint, the reference line in the world coordinate system is obtained as the straight line passing through the point O₁ and extending in the direction of W₁. Written with a real parameter k, a point X₁ on the reference line is given by equation (3):
  X₁ = O₁ + kW₁ … (3)
 Using O₁ and W₁, the reference line of the virtual guide 150 can be constructed in the local coordinate system corresponding to the second viewpoint 102 (hereinafter the second local coordinate system). Let R₂ be the rotation matrix representing the orientation of the second local coordinate system in the world coordinate system, and O₂ the position of the second viewpoint 102 in world coordinates. If U₂ denotes a point on the reference line of the virtual guide 150 in the second local coordinate system, then from equations (1) and (3), U₂ is expressed with the real parameter k as equation (4):
  U₂ = R₂⁻¹(O₁ − O₂ + kW₁) … (4)
 The range of the real parameter k is the range needed for drawing on the display screen showing the virtual guide. As an exception, when the direction of the virtual guide is perpendicular to the display screen, only the cross section of the virtual guide corresponding to the point R₂⁻¹(O₁ − O₂) is displayed.
 In other virtual guide display devices (such as the tablet terminal 110), the reference line can likewise be constructed using the world coordinates of the viewpoint that serves as the display reference for that device and the rotation matrix representing the orientation of its local coordinate system in the world coordinate system.
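 Equations (1) through (4) translate directly into code. The following is a minimal NumPy sketch of the transform chain; the function names are illustrative assumptions, and for a rotation matrix the inverse is taken as the transpose.

    import numpy as np

    def world_to_local(X, R, O):
        """Equation (1): U = R^(-1)(X - O); for a rotation matrix, R^(-1) = R^T."""
        return R.T @ (X - O)

    def guide_reference_line(R1, O1, N1):
        """Equations (2)-(3): direction W1 = R1 N1; line points X1 = O1 + k W1."""
        W1 = R1 @ N1
        return O1, W1

    def guide_in_second_view(R2, O2, O1, W1, k):
        """Equation (4): U2 = R2^(-1)(O1 - O2 + k W1), a guide point in the second local frame."""
        return R2.T @ (O1 - O2 + k * W1)

    # Example: viewpoint 2 sits 2 m to the right of viewpoint 1 and is rotated
    # 90 degrees about the vertical axis; sample the guide at a few depths k.
    R1 = np.eye(3)
    O1 = np.zeros(3)
    N1 = np.array([0.0, 0.0, 1.0])  # pick straight ahead at viewpoint 1
    c, s = np.cos(np.pi / 2), np.sin(np.pi / 2)
    R2 = np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])
    O2 = np.array([2.0, 0.0, 0.0])
    O1_, W1 = guide_reference_line(R1, O1, N1)
    points = [guide_in_second_view(R2, O2, O1_, W1, k) for k in (1.0, 2.0, 5.0)]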
 The present invention is not limited to the embodiments described above and includes various modifications. For example, the embodiments have been described in detail for ease of understanding and are not necessarily limited to configurations having all of the described components. Part of the configuration of one embodiment can be replaced with that of another, the configuration of one embodiment can be added to that of another, and, for part of the configuration of each embodiment, other configurations can be added, deleted, or substituted.
 Each of the above configurations, functions, processing units, processing means, and the like may be realized partly or wholly in hardware, for example by designing them as integrated circuits. Each of the above configurations, functions, and the like may also be realized in software by the processor 1010 interpreting and executing the program 1021 that implements each function. Information such as programs, tables, and files that implement each function may be stored in the memory 1020, in a recording device such as a hard disk or SSD (Solid State Drive), in a recording medium such as an IC card, SD card, or DVD, or in a device on a communication network.
 The control lines and information lines shown are those considered necessary for explanation; not all control lines and information lines in a product are necessarily shown. In practice, almost all configurations may be considered interconnected.
10: first user
20: second user
100: HMD
101: first viewpoint
102: second viewpoint
103: third viewpoint
104, 801: camera
105: arrow
106: screen
110: tablet terminal
120: server
140, 160, 180: field-of-view image
150, 151a, 1102, 1104, 1202, 1400: virtual guide
170, 194, 1201, 1204: virtual object
191, 192: designated position
193: tree
1010: processor
1011: position designation processing unit
1012: virtual guide generation and display processing unit
1013: virtual object generation processing unit
1020: memory
1021: program
1023: application program
1024: information data
1025: coordinate position information
1026: virtual guide information
1027: virtual object information
1028: field-of-view image information
1032: left-eye gaze detection sensor
1033: right-eye gaze detection sensor
1034: display
1035: operation input device
1036: microphone
1037: speaker
1038: vibrator
1039: communication I/F
1040: sensor group
1041: distance measuring sensor
1042: acceleration sensor
1043: gyro sensor
1044: geomagnetic sensor
1045: GPS sensor
1050: bus
1101, 1105: range
1103: ring object
1106: sphere object
1401, 1402, 1403, 1404, 1405: real object
1411, 1412, 1413, 1414, 1415: tag

Claims (16)

  1.  A virtual guide display device comprising:
     a direction sensor that detects a direction in a world coordinate system in which the virtual guide display device is facing;
     a position sensor that detects a position of the virtual guide display device in the world coordinate system;
     a display;
     an operation input device that receives an operation designating an arbitrary point on an image displayed on the display; and
     a processor connected to each of the direction sensor, the position sensor, the display, and the operation input device,
     wherein the processor:
     generates, based on sensor outputs from the direction sensor and the position sensor, a virtual guide that passes through a designated position on a first field-of-view image seen from a first viewpoint of the virtual guide display device and extends in the same direction as the first viewpoint;
     superimposes the virtual guide on a second field-of-view image seen from a second viewpoint different from the first viewpoint and displays it on the display;
     receives, via the operation input device, an operation designating a position on the virtual guide displayed on the display; and
     converts the designated position into a position in the world coordinate system and outputs it.
  2.  The virtual guide display device according to claim 1, further comprising a camera connected to the processor, wherein the direction sensor detects a direction in which the camera is facing, the position sensor detects a position of the camera, and the first field-of-view image is a field-of-view image captured by the camera from the first viewpoint.
  3.  The virtual guide display device according to claim 2, wherein the second field-of-view image is a field-of-view image captured by the camera from the second viewpoint.
  4.  The virtual guide display device according to claim 3, further comprising a communicator that transmits and receives data to and from another communicator, wherein the processor: transmits, to the other communicator, virtual guide information including shape information of the generated virtual guide and position information indicating its position in the world coordinate system; receives, from the other communicator, virtual guide information to which position information designated on the virtual guide has been added; and superimposes and displays the virtual guide and the designated position on the display based on the received virtual guide information.
  5.  The virtual guide display device according to claim 1, wherein the first field-of-view image is a current field-of-view image or a past field-of-view image, and the second field-of-view image is a field-of-view image captured at a time different from the time at which the first field-of-view image was captured.
  6.  The virtual guide display device according to claim 1, wherein the operation input device receives an operation designating a range on the first field-of-view image, and the processor superimposes and displays, on the second field-of-view image, the virtual guide as a three-dimensional object having the range as its cross section.
  7.  The virtual guide display device according to claim 6, wherein the processor superimposes the virtual guide formed of the three-dimensional object translucently on the second field-of-view image.
  8.  The virtual guide display device according to claim 6, wherein the operation input device receives an operation designating a position on the virtual guide formed of the three-dimensional object displayed on the second field-of-view image, and the processor superimposes on the virtual guide a ring object containing the designated position or a sphere object containing the designated position.
  9.  The virtual guide display device according to claim 1, wherein, when the operation input device receives designation of a position on the first field-of-view image, the processor displays the first field-of-view image on the display and further superimposes a virtual object at the designated position in the first field-of-view image.
  10.  The virtual guide display device according to claim 9, wherein the processor superimposes the virtual object, which was superimposed on the first field-of-view image, on the second field-of-view image using the shape of the virtual object as seen from the second viewpoint.
  11.  The virtual guide display device according to claim 1, wherein the processor displays the second field-of-view image on the display and also displays the first field-of-view image in a screen area smaller than that of the second field-of-view image.
  12.  The virtual guide display device according to claim 1, wherein the processor displays the virtual guide blinking.
  13.  The virtual guide display device according to claim 1, wherein the processor performs occlusion processing between the virtual guide and an object captured in the second field-of-view image and superimposes the result on the second field-of-view image.
  14.  The virtual guide display device according to claim 1, wherein the processor displays mark tags at intersections of the virtual guide with an object captured in the first field-of-view image, or intersections of the virtual guide with an object captured in the second field-of-view image, and the operation input device receives an operation selecting one of the tags.
  15.  A virtual guide display system in which a first virtual guide display device and a second virtual guide display device are communicatively connected, wherein
     the first virtual guide display device comprises: a first direction sensor that detects a direction in a world coordinate system in which the first virtual guide display device is facing; a first position sensor that detects a position of the first virtual guide display device in the world coordinate system; a first display; a first operation input device that receives an operation designating an arbitrary point on an image displayed on the first display; a first communicator for communicating with the second virtual guide display device; and a first processor connected to each of the first direction sensor, the first position sensor, the first display, the first operation input device, and the first communicator,
     the second virtual guide display device comprises: a second direction sensor that detects a direction in which the second virtual guide display device is facing; a second position sensor that detects a position of the second virtual guide display device; a second display; a second operation input device that receives an operation designating an arbitrary point on an image displayed on the second display; a second communicator for communicating with the first virtual guide display device; and a second processor connected to each of the second direction sensor, the second position sensor, the second display, the second operation input device, and the second communicator,
     the first processor generates, based on sensor outputs from the first direction sensor and the first position sensor, a virtual guide that passes through a designated position on a first field-of-view image seen from a first viewpoint of the first virtual guide display device and extends in the same direction as the first viewpoint, and transmits virtual guide information, including shape information of the generated virtual guide and position information indicating its position in the world coordinate system, from the first communicator to the second virtual guide display device,
     the second virtual guide display device receives the virtual guide information at the second communicator,
     the second processor displays the virtual guide, based on the received virtual guide information, on the second display over a second field-of-view image seen from a second viewpoint different from the first viewpoint,
     the second operation input device receives an operation designating a position on the virtual guide displayed on the second display, and
     the designated position is converted into a position in the world coordinate system and output.
  16.  A virtual guide display method executed by a virtual guide display device having a processor, the method causing the processor to execute:
     a step of acquiring information indicating the position and direction of a first viewpoint in a world coordinate system;
     a step of acquiring a position designated on a field-of-view image seen from the first viewpoint;
     a step of generating a virtual guide that passes through the designated position and extends in the same direction as the first viewpoint;
     a step of displaying the virtual guide on a field-of-view image seen from a second viewpoint different from the first viewpoint;
     a step of receiving designation of a position on the displayed virtual guide; and
     a step of converting the designated position into a position in the world coordinate system and outputting it.