WO2022172335A1 - Virtual guide display device, virtual guide display system, and virtual guide display method - Google Patents

Virtual guide display device, virtual guide display system, and virtual guide display method

Info

Publication number
WO2022172335A1
WO2022172335A1 (application PCT/JP2021/004802)
Authority
WO
WIPO (PCT)
Prior art keywords
virtual guide
display device
virtual
guide display
viewpoint
Prior art date
Application number
PCT/JP2021/004802
Other languages
English (en)
Japanese (ja)
Inventor
尚久 高見澤
康宣 橋本
治 川前
義憲 岡田
Original Assignee
マクセル株式会社 (Maxell, Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by マクセル株式会社 (Maxell, Ltd.)
Priority to PCT/JP2021/004802 priority Critical patent/WO2022172335A1/fr
Publication of WO2022172335A1 publication Critical patent/WO2022172335A1/fr

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0481 Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 19/00 Manipulating 3D models or images for computer graphics

Definitions

  • the present invention relates to a virtual guide display device, a virtual guide display system, and a virtual guide display method for specifying a position in a three-dimensional space.
  • Augmented Reality (AR) technology, which adds digital information to the real world and renders and augments virtual objects in a virtual space created with CG (Computer Graphics), is becoming widespread.
  • Virtual guide display devices and virtual guide display systems that make it easy to handle virtual objects while recognizing a three-dimensional real space are widely used.
  • Moreover, applications such as remote work support are expanding, so there are more opportunities than ever for users who are not familiar with such devices to specify a position in a three-dimensional space, and improved operability is desired.
  • Position specification in a three-dimensional space is mainly performed when specifying an object (physical object or virtual object) that already exists in the space, or when arranging a virtual object at a specified position.
  • Patent Document 1 discloses an information processing system comprising a plurality of devices communicably connected to a head-mounted display (HMD) and display means for displaying an image of a virtual space in which virtual object information is shared among the devices; the system acquires the position and orientation of the head-mounted display in the real space, specifies a position in the virtual space based on the acquired position and orientation information, and controls display of an arrow from the head-mounted display to the specified position (summary excerpt).
  • In Patent Document 1, however, only the position of the place where the line of sight falls is specified from a single direction; specifying a different position behind it is not considered, so an arbitrary position in the three-dimensional space cannot be specified.
  • the present invention provides a virtual guide display device, a virtual guide display system, and a virtual guide display method that can easily and accurately designate an arbitrary position in a three-dimensional space.
  • To this end, the present invention is a virtual guide display device comprising: a direction sensor that detects the direction in the world coordinate system in which the virtual guide display device is oriented; a position sensor that detects its position; a display; an operation input device that receives an operation specifying an arbitrary point on an image displayed on the display; and a processor connected to each of these, wherein the processor, based on the sensor outputs from the direction sensor and the position sensor, displays a virtual guide on a first field-of-view image viewed from a first viewpoint of the virtual guide display device.
  • The operation input device then receives an operation specifying a position on the virtual guide displayed on the display, and the specified position is converted into a position in the world coordinate system and output.
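The claim above combines a direction sensor and a position sensor to anchor the device in the world coordinate system. A minimal sketch of that combination, assuming the direction sensor reports yaw and pitch in radians and the position sensor reports world coordinates in metres (both representations are assumptions for illustration, not values from the patent):

```python
# Sketch: fuse direction-sensor and position-sensor readings into a world-
# coordinate pose (position plus unit viewing direction) for the device.
import math

def device_pose(yaw, pitch, position):
    """Return (position, unit direction vector) in the world coordinate system."""
    direction = (math.cos(pitch) * math.sin(yaw),   # x: right
                 math.sin(pitch),                   # y: up
                 math.cos(pitch) * math.cos(yaw))   # z: forward
    return tuple(position), direction

# Device 1.6 m above the origin, looking straight ahead along +Z
pos, dirn = device_pose(0.0, 0.0, (1.0, 1.6, 0.0))
```

Any downstream step (generating or displaying a virtual guide) can then work entirely in world coordinates from this pose.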
  • FIG. 1 is a diagram schematically showing the appearance of a configuration example of a virtual guide display device and a virtual guide display system according to the present embodiment;
  • FIG. 2 is a diagram for explaining a position designation operation on a field-of-view image from a first viewpoint in the embodiment shown in FIG. 1;
  • FIG. 3 is a diagram showing a screen when a position is specified on the field-of-view image from a second viewpoint in the embodiment shown in FIG. 1;
  • FIG. 4A is a diagram for explaining a position designation operation on a field-of-view image from a second viewpoint in the embodiment shown in FIG. 1;
  • FIG. 4B is a diagram for explaining another example of a position designation operation on a field-of-view image from a second viewpoint in the embodiment shown in FIG. 1;
  • FIG. 5 is a diagram for explaining a position specifying operation when arranging a virtual object at a specified position in a three-dimensional space in the virtual guide display device according to the present embodiment;
  • FIG. 6 is an image diagram for explaining a case where a virtual object is arranged at a designated position in the embodiment shown in FIG. 1;
  • FIGS. 7A and 7B are image diagrams for explaining cases in which an object at a designated position is identified in the embodiment shown in FIG. 1;
  • FIG. 8 is a diagram schematically showing the appearance of another configuration example of the virtual guide display device and the virtual guide display system according to the present embodiment;
  • FIG. 9 is a flowchart for explaining the basic operation of the virtual guide display device and the virtual guide display system according to the present embodiment;
  • FIG. 10 is a block diagram showing a configuration example of the virtual guide display device according to the present embodiment;
  • a further diagram shows a state in which a position is specified in the embodiment shown in FIG. 1;
  • a further diagram explains an operation when specifying a range instead of a point when specifying a position on the field-of-view image in the virtual guide display device according to the present embodiment;
  • a further diagram explains a case where two field-of-view images viewed from a first viewpoint and a second viewpoint are displayed on two screens in the virtual guide display device according to the present embodiment;
  • a further diagram explains position designation by tag selection, in which tags are displayed at intersections between the virtual guide and objects in the field-of-view image in the virtual guide display device according to the present embodiment.
  • FIG. 1 is a diagram schematically showing the appearance of a configuration example of a virtual guide display device and a virtual guide display system according to this embodiment.
  • FIGS. 2, 3, 4A, 4B, and 5 are diagrams for explaining the three-dimensional position designation operation in the embodiment shown in FIG. 1.
  • the HMD 100 comprises a camera 104.
  • the angle of view of the camera 104 is assumed to coincide with the first viewpoint 101 of the first user 10 wearing the HMD 100.
  • the camera 104 captures the view scene in the three-dimensional space from the first viewpoint 101 to obtain a first field-of-view image viewed from the first viewpoint 101.
  • the first field-of-view image is an image obtained by photographing the field-of-view scenery in the actual three-dimensional space, but it may instead be a virtual reality image viewed from the first viewpoint 101.
  • similarly, the HMD 100 captures the scenery with the camera 104 from the second viewpoint 102 of the first user 10 to obtain a second field-of-view image viewed from the second viewpoint 102.
  • the tablet terminal 110 is operated by the second user 20 .
  • the tablet terminal 110 receives field-of-view images obtained by the HMD 100 and viewed from the first viewpoint 101 and the second viewpoint 102 through wireless communication with the HMD 100 .
  • the tablet terminal 110 displays the received field-of-view image on the screen 106 of the tablet terminal 110 .
  • the server 120 which can process and store a large amount of information at high speed, transmits and receives various types of information such as view images and virtual guide information to and from the HMD 100 and the tablet terminal 110 through wireless communication.
  • a field-of-view image 140 (see FIG. 2) captured at the first viewpoint 101 is transmitted to the tablet terminal 110.
  • the tablet terminal 110 displays the received view image 140 on the screen 106.
  • the second user 20 designates the position 141 of the point when viewed from the first viewpoint 101 on the displayed field-of-view image 140.
  • the position 141 of a point on the field-of-view image 140 when viewed from the first viewpoint 101 corresponds to a position 130 in the three-dimensional space.
  • the tablet terminal 110 generates a virtual object (hereinafter referred to as a "virtual guide 150") that extends in the same direction as the first viewpoint 101 through the position 141 specified on the field-of-view image 140. A point corresponding to the position 130 to be specified in the three-dimensional space exists on the generated virtual guide 150.
  • next, the camera 104 photographs the scenery from the second viewpoint 102, and a field-of-view image 160 (see FIG. 3) captured at the second viewpoint 102 is transmitted to the tablet terminal 110.
  • the field-of-view image 160 as shown in FIG. 3 is displayed on the screen 106 of the tablet terminal 110, and the tablet terminal 110 displays the generated virtual guide 150 superimposed on the field-of-view image 160.
  • as shown in FIG. 4A, the position 130 in the three-dimensional space to be specified is identified as the position 161 corresponding to the position 151 specified on the virtual guide 150 on the field-of-view image 160. That is, a virtual guide 150 extending in the same direction as the first viewpoint 101 through the specified position 141 (see FIG. 2) on the field-of-view image viewed from the first viewpoint 101 is generated, the generated virtual guide 150 is displayed on the field-of-view image viewed from the second viewpoint 102, and a position 151 (see FIG. 4A) is specified on the displayed virtual guide 150, whereby an arbitrary position in the three-dimensional space can be identified.
  • in other words, the virtual guide 150 is arranged in space according to spatial coordinates determined from the first viewpoint 101, and the guide fixed in the space is viewed from the second viewpoint 102 to determine the final position 130 in the three-dimensional space.
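The virtual guide described above can be modeled as a world-space ray cast from the first viewpoint through the picked pixel. A minimal sketch assuming a pinhole camera with hypothetical intrinsics (fx, fy, cx, cy) and a known camera pose (rotation R from camera to world, camera position t); none of these names or values come from the patent:

```python
# Sketch: turn a pixel picked on the first-viewpoint image into a world-space
# "virtual guide" ray (origin plus unit direction) using an assumed pinhole model.
import numpy as np

def guide_ray(u, v, fx, fy, cx, cy, R, t):
    """Return (origin, unit direction) of the ray through pixel (u, v)."""
    d_cam = np.array([(u - cx) / fx, (v - cy) / fy, 1.0])  # camera frame
    d_world = R @ d_cam                                    # rotate to world frame
    return np.asarray(t, dtype=float), d_world / np.linalg.norm(d_world)

# Example: camera at the origin looking down +Z, picking the principal point
origin, direction = guide_ray(320, 240, fx=500, fy=500, cx=320, cy=240,
                              R=np.eye(3), t=[0.0, 0.0, 0.0])
```

Every point 130 that projects to the picked position 141 lies on this ray, which is why a second viewpoint is needed to fix the depth.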
  • the virtual guide 150 may also be generated by the HMD 100 or by the server 120, which can handle a large amount of information, instead of by the tablet terminal 110.
  • the server 120 distributes virtual guide information for displaying the virtual guide 150 on the tablet terminal 110 by wireless communication or the like.
  • the process of generating the virtual guide and the process of specifying the position may also be displayed on the HMD 100 so that the first user 10 can see it.
  • as an extended method of specifying a position, a position near the virtual guide 150 may also be specified. This will be described with reference to FIG. 4B.
  • when a position 152 that is not on the virtual guide 150 is specified, the distance in the depth direction along the second viewpoint 102 cannot be determined by the specification alone, and the position 152 cannot be fixed.
  • therefore, a rule is established in advance, for example that the position is taken where the distance to the virtual guide 150 is shortest, that is, at the shortest distance between the straight line in the direction of the second viewpoint 102 passing through the position 152 and the straight line indicated by the virtual guide 150; the position in the depth direction can then be determined according to this rule.
  • as another processing method when the second user 20 designates a position not on the virtual guide 150, it is also possible to specify the position on the virtual guide 150 closest to the position 152 on the field-of-view image of the second viewpoint 102.
  • a plurality of processing rules may be defined when the second user 20 designates a position not on the virtual guide 150 so that the second user 20 can designate which rule to apply.
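The shortest-distance rules above reduce to finding the closest points between two lines: the straight line of the virtual guide 150 and the viewing ray from the second viewpoint through the position 152. A sketch under that interpretation, with purely illustrative line parameters (the patent states the rule, not an implementation):

```python
# Sketch: closest points between two 3D lines p1 + s*d1 and p2 + u*d2.
# The point returned on the guide resolves the depth along it; the midpoint
# of the two returned points could serve as a "near the guide" position.
import numpy as np

def closest_points(p1, d1, p2, d2):
    """Closest points between lines p1 + s*d1 and p2 + u*d2 (non-parallel)."""
    p1, d1 = np.asarray(p1, float), np.asarray(d1, float)
    p2, d2 = np.asarray(p2, float), np.asarray(d2, float)
    r = p1 - p2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    denom = a * c - b * b                     # 0 when the lines are parallel
    s = (b * (d2 @ r) - c * (d1 @ r)) / denom
    u = (a * (d2 @ r) - b * (d1 @ r)) / denom
    return p1 + s * d1, p2 + u * d2           # on the guide, on the viewing ray

# Guide along +X through the origin; second-viewpoint ray along +Z through (2, 1, -5)
on_guide, on_ray = closest_points([0, 0, 0], [1, 0, 0], [2, 1, -5], [0, 0, 1])
```

Choosing `on_guide` implements the "closest position on the virtual guide 150" rule; other rules from the text would pick a different point on the same common perpendicular.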
  • FIG. 6 is an image diagram for explaining a case where a virtual object is arranged at the specified position in the embodiment shown in FIG. 1.
  • the tablet terminal 110 determines the designation of, for example, a position 130 in the three-dimensional space by the spatial position designation process described above, and generates a virtual object 170 to be placed on the position 130 .
  • the spatial coordinates of the position 130 are associated with the virtual object 170 .
  • the HMD 100 displays the generated virtual object 170 with spatial coordinates on the field image 180 (see FIG. 6) viewed from the third viewpoint 103 (see FIG. 1), which is a bird's eye view, for example.
  • FIG. 6 shows, as an example of the virtual object 170, a virtual object labeled "This Point" for indicating a destination or gathering place.
  • the first user 10 wearing the HMD 100 can thus easily visually recognize the destination or meeting place indicated by the second user 20 operating the tablet terminal 110. That is, by arranging the virtual object 170 at the specified position 130 in the three-dimensional space, useful information such as the instructions indicated by the virtual object 170 can be accurately and easily conveyed at the specified position 130 to the first user 10, who is a different user from the second user 20 who placed it.
  • the virtual object may also be generated by the server 120 instead of the tablet terminal 110, and distributed by wireless communication or the like to the virtual guide display device (the HMD 100 in the above example) that displays the virtual object 170.
  • alternatively, only the spatial coordinates may be specified and the spatial coordinate data transmitted and received, with the HMD 100 displaying the virtual object 170 at the specified spatial coordinate position.
  • the server 120 may also specify the type and display direction of the object, with the HMD 100 performing the display.
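The distribution described above only needs the object's spatial coordinates, type, and display direction to travel between devices. A sketch of such a payload; the field names (label, kind, position, facing) are hypothetical, not from the patent:

```python
# Sketch: associate world coordinates with a virtual object and serialize it
# for distribution from the server to a display device over wireless.
import json
from dataclasses import dataclass, asdict

@dataclass
class VirtualObject:
    label: str        # e.g. the "This Point" marker text
    kind: str         # object type chosen on the specifying side
    position: tuple   # world-coordinate position (x, y, z)
    facing: tuple     # display direction as a unit vector

marker = VirtualObject("This Point", "destination_marker",
                       (12.5, 0.0, -3.2), (0.0, 0.0, 1.0))
payload = json.dumps(asdict(marker))           # what the server would send
restored = VirtualObject(**json.loads(payload))  # what the HMD would display
```

Because the position travels in world coordinates, any receiving device with its own pose can render the marker at the same physical spot.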
  • FIG. 7A is an image diagram illustrating a case of identifying an object at a designated position in the embodiment shown in FIG.
  • positions 191 and 192 in the three-dimensional space are designated by the spatial position designation process described above.
  • using the specified position 191, the HMD 100 identifies the physical-object tree 193 at the specified position 191 on the field image 180 viewed from the third viewpoint 103, for example.
  • likewise, the virtual object 194 located at the specified position 192 is identified on the field image 180 viewed from the third viewpoint 103, for example.
  • FIG. 7A shows an example of a screen displaying a virtual object 194 labeled "This is a memorial tree" for explaining the tree 193.
  • FIG. 7A shows a case in which a physical object such as a tree 193 is specified when viewed from the third viewpoint 103.
  • however, since the first user 10 who is viewing the physical space walks on the ground, it is difficult for the first user 10 to move to the position of the third viewpoint 103 shown in FIG. 7A; in practice, the physical object may be identified by viewing it from a viewpoint that is reachable in the physical space.
  • the image of the third viewpoint 103 is merely an example for explanation.
  • in addition, a virtual guide 151a based on the position 151 specified at the second viewpoint 102 may be displayed, and the configuration may allow a point 162 other than the intersection of the two guides to be specified.
  • in this case, for example, a rule that the sum of the distances to the virtual guide 150 and the virtual guide 151a is minimized may be set in advance and used for the determination. This technique improves the convenience of position designation, for example by allowing designation of a position deviated from the intersection of the plural virtual guides 150 and 151a.
  • this allows the first user 10 wearing the HMD 100 to accurately and easily confirm the physical object or virtual object pointed to by the second user 20 operating the tablet terminal 110. That is, by going back and forth between the first viewpoint and the second viewpoint, or by recognizing the designated point from the difference in appearance between the first or second viewpoint and an n-th viewpoint, a position in the three-dimensional space can be specified, and a physical object or virtual object at that position can be accurately and easily indicated to a user viewing it.
  • for example, in remote work support, a worker corresponding to the first user 10 photographs a work place or the like with the camera 104 mounted on an HMD.
  • a support instructor corresponding to the second user 20 in the office can then easily specify a desired position in the three-dimensional space while viewing the camera image of the work place on the screen 106 of the tablet terminal 110 or the like. Information such as work instructions can be given to the worker by a virtual object placed at the specified position, or an object at the specified position can be identified and pointed out to the worker. A support instructor who is unfamiliar with the apparatus can thus accurately and conveniently provide support such as instructions to remote workers.
  • FIG. 8 is a diagram schematically showing the appearance of another configuration example of the virtual guide display device and the virtual guide display system according to this embodiment.
  • in this configuration example, the tablet terminal 110 is equipped with a camera 801 and uses it to photograph the scenery from the first viewpoint 101 and the second viewpoint 102; FIGS. 2 to 7B are again used as the drawings for explaining the position designation.
  • that is, the tablet terminal 110 is provided with the camera 801 that captures the field-of-view scenery, photographs the field of view of the second user 20 from the first viewpoint 101 with the camera 801, and displays the captured field-of-view image 140 on the screen 106.
  • the second user 20 then specifies a position 141 on the displayed field-of-view image 140.
  • the tablet terminal 110 generates a virtual guide 150 (see FIG. 3) that extends in the same direction as the first viewpoint 101 via a position 141 specified on the field-of-view image 140 .
  • next, the camera 801 of the tablet terminal 110 photographs the field-of-view scenery as the second user 20 moves to the second viewpoint 102, and the photographed field-of-view image 160 (see FIG. 3) is displayed on the screen 106.
  • a position 130 on the three-dimensional space to be specified can be specified as a position 161 on the field-of-view image 160 .
  • although the HMD 100 and the tablet terminal 110 are used as examples of the first virtual guide display device and the second virtual guide display device, any device such as a smartphone or a personal computer may be used.
  • both the first viewpoint 101 and the second viewpoint 102 use field-of-view images of the current landscape.
  • a position may also be specified using a field-of-view image taken at the current viewpoint, with a virtual guide generated and displayed, and a field-of-view image taken in the past used as the second viewpoint to specify the position. That is, past field-of-view images may be appropriately combined and used as the first viewpoint and the second viewpoint.
  • past field-of-view images may even be used for both the first viewpoint and the second viewpoint. It goes without saying that such a past field-of-view image must be associated with spatial coordinate data. Satellite photographs and map data may also be used as long as they carry spatial coordinate data. Image data in the server 120, which can store a large amount of information data 1024, may be used as the past field-of-view image; in this case, the storage load on the first and second virtual guide display devices can be greatly reduced.
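The requirement above, that a past field-of-view image be associated with spatial coordinate data, can be sketched as a small archive keyed by image. The record layout and field names here are illustrative assumptions, not structures defined in the patent:

```python
# Sketch: store each captured view image together with the camera pose
# (world position and viewing direction) so it can later stand in for a
# first or second viewpoint.
import time

view_image_store = {}  # image_id -> record, e.g. kept in the server's storage

def save_view_image(image_id, pixels, position, direction):
    """Archive a view image with the spatial coordinate data of its viewpoint."""
    view_image_store[image_id] = {
        "pixels": pixels,              # encoded frame from the camera
        "position": tuple(position),   # world coordinates of the viewpoint
        "direction": tuple(direction), # viewing direction in the world frame
        "captured_at": time.time(),    # lets a past viewpoint be selected
    }

def usable_as_viewpoint(image_id):
    """A past image can replace a live viewpoint only if its pose is known."""
    rec = view_image_store.get(image_id)
    return rec is not None and "position" in rec and "direction" in rec

save_view_image("img-001", b"...", (0.0, 1.6, 0.0), (0.0, 0.0, 1.0))
```

Satellite photographs or map tiles could be stored the same way, provided their pose metadata is filled in.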
  • the server 120 may execute not only processing such as holding past field-of-view images and virtual objects, but also a part of other operation processing such as position designation in a three-dimensional space.
  • FIG. 9 is an example of a flowchart explaining the basic operation of the virtual guide display device and virtual guide display system according to this embodiment.
  • in FIG. 9, when a field-of-view image at a first viewpoint is obtained by the first virtual guide display device (S901) and a position is specified on the obtained field-of-view image at the first viewpoint (S902), a virtual guide extending in the same direction as the first viewpoint through the specified position is generated (S903).
  • An example algorithm for generating and displaying a virtual guide is described below.
  • virtual guide information including the shape information of the virtual guide, position information indicating its position in the world coordinate system, and direction information indicating its direction in the world coordinate system is sent to the second virtual guide display device.
  • a virtual guide is displayed on the visual field image at the second viewpoint (S904), and a position is designated on the displayed virtual guide (S905).
  • the position in the three-dimensional space can be specified by the process 900 for specifying the position from the above two viewpoints.
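The two-viewpoint process 900 (steps S901 to S905) can be sketched as a pipeline. Every callback name below is a hypothetical stand-in for a device-side operation described in the text, not API from the patent:

```python
# Sketch: thread the two-viewpoint position-specification flow S901-S905
# through interchangeable callbacks for the device operations.
def specify_position_3d(capture_view, pick_point, make_guide,
                        overlay_guide, pick_on_guide, to_world):
    image1 = capture_view("first viewpoint")   # S901: first view image
    picked = pick_point(image1)                # S902: position on the image
    guide = make_guide(picked)                 # S903: generate the virtual guide
    image2 = capture_view("second viewpoint")  # obtain the second view image
    overlay_guide(guide, image2)               # S904: display guide on it
    spot = pick_on_guide(guide, image2)        # S905: pick a point on the guide
    return to_world(spot)                      # world-coordinate output

# Toy run: each stage is a trivial stand-in, so the call just threads data.
result = specify_position_3d(
    capture_view=lambda vp: {"viewpoint": vp},
    pick_point=lambda img: (141, 141),
    make_guide=lambda p: {"through": p},
    overlay_guide=lambda g, img: None,
    pick_on_guide=lambda g, img: (151, 151),
    to_world=lambda p: (2.0, 0.0, 5.0),
)
```

In a real system the callbacks would be split across the first and second virtual guide display devices, with the virtual guide information exchanged between them.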
  • the second virtual guide display device converts the specified position into a position in the world coordinate system and outputs the position.
  • as the output mode, the position may be displayed on the display 1034 of the second virtual guide display device, or may be passed from the virtual guide generation display processing unit 1012 (FIG. 10) inside the processor 1010 (FIG. 10) of the second virtual guide display device to the virtual object generation processing unit 1013 and other processing units that use the specified position, for example to arrange a virtual object at that position. Thereafter, the process proceeds to processing that uses the output specified position (S906).
  • examples of such utilization processing include arranging a virtual object at the specified position, using the position to identify a physical object or virtual object located there, and transmitting the coordinate information of the specified position to another terminal for use.
  • for example, a virtual object representing information such as an instruction can be arranged, and useful information such as the instruction can be given to a user who can visually recognize the virtual object in the three-dimensional space.
  • since the generated virtual guide is viewed from the second viewpoint, the final position can be specified on it, so that an arbitrary position in the three-dimensional space can be specified accurately and easily.
  • useful information can be displayed by a virtual object placed at the specified position, or an object at the specified position can be pointed out to the user, making it possible to implement user-friendly instruction support.
  • FIG. 10 is a functional block diagram of a configuration example of a first virtual guide display device and a second virtual guide display device according to this embodiment.
  • the first virtual guide display device (HMD 100) and the second virtual guide display device (tablet terminal 110) each include a processor 1010, a memory 1020, a camera 104 (camera 801 in the tablet terminal 110), a left-eye line-of-sight detection sensor 1032, a right-eye line-of-sight detection sensor 1033, a display 1034, an operation input device 1035, a microphone 1036, a speaker 1037, a vibrator 1038, a communication I/F (communication device) 1039, a sensor group 1040, and the like, which are interconnected via a bus 1050.
  • the processor 1010 is composed of a CPU, ROM, RAM, etc., and constitutes a controller of the first virtual guide display device (HMD 100) and the second virtual guide display device (tablet terminal 110).
  • the processor 1010 executes processing according to an operating system (OS) 1022 stored in the memory 1020 as a control program 1021 and an application program 1023 for operation control, thereby controlling the functional units in the processor 1010 and implementing the functions of the OS, middleware, applications, and so on.
  • Functional units configured by execution by the processor 1010 include a position designation processing unit 1011, a virtual guide generation display processing unit 1012, and a virtual object generation processing unit 1013.
  • the memory 1020 is composed of a non-volatile storage device or the like, and stores various programs 1021 and information data 1024 handled by the processor 1010 and the like.
  • the information data 1024 includes coordinate position information 1025 indicating spatial coordinate positions such as designated positions, virtual guide information 1026 required to generate and display a virtual guide, virtual object information 1027 representing virtual objects, field-of-view image information 1028 of the photographed scenery, and the like.
  • the cameras 104 and 801 take images of the field of view around the front, and acquire the field of view image by converting the light incident from the lens into an electrical signal with an imaging device.
  • the first user 10 obtains a field-of-view image by photographing the field of view with the camera 104 while visually recognizing actual objects in the front surrounding field of view with his or her own eyes.
  • the left-eye line-of-sight detection sensor 1032 and the right-eye line-of-sight detection sensor 1033 detect the line of sight by capturing the movements and orientations of the left and right eyes, respectively.
  • a well-known technology that is generally used as eye tracking processing may be used.
  • a known technique photographs the eye with an infrared camera, takes the position of the reflected light on the cornea (corneal reflection) as a reference point, and detects the line of sight from the position of the pupil relative to that corneal reflection.
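As a rough illustration of the corneal-reflection approach described above, the gaze can be modeled as a calibrated linear function of the pupil-to-reflection offset. The function name and the calibration constants below are hypothetical illustrations, not part of this disclosure; practical eye trackers use per-user calibration and more elaborate geometric models.

```python
import numpy as np

def gaze_from_pupil(pupil_xy, cr_xy, a, b):
    """Sketch of corneal-reflection eye tracking: the gaze angle is
    estimated from the pupil position relative to the corneal
    reflection, which serves as the reference point.  `a` (gain) and
    `b` (offset) are per-user calibration constants obtained beforehand."""
    v = np.asarray(pupil_xy, dtype=float) - np.asarray(cr_xy, dtype=float)
    return a * v + b  # estimated gaze angles

# Illustrative call with made-up pupil/reflection pixel positions
angles = gaze_from_pupil([3, 4], [1, 2], a=2.0, b=np.array([0.5, 0.5]))
```

The pupil-to-reflection vector here is [2, 2], so the sketch returns 2.0 · [2, 2] + [0.5, 0.5].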
  • the display 1034 includes a screen 106 configured by a liquid crystal panel, and displays a visual field image, notification information to the user such as an alarm, and the like on the screen 106.
  • the operation input device 1035 may be, for example, a capacitive touch pad stacked on the screen 106 .
  • the touch pad detects an approach or contact operation (touch operation) by a finger, touch pen, or the like.
  • the position can be easily specified by the user performing a touch operation on the position to be specified on the display image.
  • in the case of the HMD 100, when it is of the optical see-through type, the display 1034 is configured using, for example, a projector that projects virtual objects, notification information to the user, and the like, and a transparent half mirror that displays the projected images in front of the eyes.
  • when the HMD 100 is of the video see-through type, it is configured using a display 1034 such as a liquid crystal panel that displays together the physical objects in front of the eyes photographed by the camera 104 and the virtual objects. In either case, the user can superimpose and visually recognize the physical objects and the virtual objects within the visual field in front of the eyes.
  • the operation input device 1035 may be, for example, a keyboard, key buttons, touch keys, or the like.
  • the operation input device 1035 may be provided in a position and form in which the first user 10 can easily perform an input operation in the HMD 100, or may be separated from the main body of the HMD 100 and connected by wire or wirelessly.
  • the input operation screen may be displayed on the screen 106 of the display 1034, and the input operation information may be captured based on the position on the input operation screen to which the line of sight is directed detected by the left-eye line-of-sight detection sensor 1032 and the right-eye line-of-sight detection sensor 1033.
  • a pointer may be displayed on the input operation screen and input operation information may be obtained by operating the pointer with the operation input device 1035 .
  • alternatively, the user may utter a voice indicating an input operation, and the operation input device 1035 may capture the input operation information from the sound collected by the microphone 1036.
  • the microphone 1036 collects voice from the outside or the user's own voice and converts it into voice data. Instruction information uttered by the user can be taken into the virtual guide display device, and an operation in response to the instruction information can be conveniently executed.
  • a speaker 1037 outputs sound based on the sound data. Thereby, the notification information to the user can be notified by voice. Speaker 1037 can be replaced with headphones.
  • the vibrator 1038 generates vibration under the control of the processor 1010, and converts notification information to the user transmitted by the virtual guide display device into vibration.
  • the vibrator 1038 by causing the vibrator 1038 to vibrate the user's head to which the vibrator 1038 is closely attached, it is possible to reliably transmit the notification to the user.
  • Examples of information to be notified to the user include the designation of the position at the first viewpoint 101, the generation and display of the virtual guide 150, the placement of the virtual object 170, and the final specification of the position in the three-dimensional space; notifying the user of these can improve usability.
  • the communication I/F 1039 is a communication interface that performs wireless communication with other nearby information terminals by short-range wireless communication, wireless LAN, base station communication, or the like, and includes an antenna and the like.
  • the communication I/F 1039 performs wireless communication between the first virtual guide display device and the second virtual guide display device and with the server 120 .
  • short-range wireless communication is performed using Bluetooth (registered trademark), IrDA (Infrared Data Association, registered trademark), Zigbee (registered trademark), HomeRF (Home Radio Frequency, registered trademark), or a wireless LAN such as Wi-Fi (registered trademark).
  • long-distance wireless communication such as W-CDMA (Wideband Code Division Multiple Access, registered trademark) or GSM (Global System for Mobile Communications) may also be used.
  • the communication I/F 1039 may use other methods such as optical communication and sound wave communication as means for wireless communication.
  • a light emitting part, a light receiving part, a speaker and a microphone are used instead of the transmitting/receiving antenna.
  • high-speed large-capacity communication networks such as 5G (5th Generation: 5th generation mobile communication system) and local 5G are used for wireless communication. This can dramatically improve usability.
  • the distance measuring sensor 1041 is a sensor that measures the distance between the virtual guide display device and real objects in the outside world. The distance measuring sensor 1041 may use a TOF (Time Of Flight) sensor, a stereo camera, or another method. From the three-dimensional data created using the distance measuring sensor 1041, a virtual space in which physical objects are virtually arranged can be created, and virtual objects can be arranged by designating three-dimensional coordinates in that virtual space.
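As a sketch of how ranging-sensor output can be turned into three-dimensional data for such a virtual space, the following unprojects a depth image into camera-space points using a standard pinhole model. The intrinsic parameters (fx, fy, cx, cy) and the toy depth map are illustrative assumptions, not values from this disclosure.

```python
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Unproject a depth image (meters) to camera-space 3D points with a
    pinhole camera model.  Invalid pixels (depth <= 0) are dropped."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    valid = depth > 0
    z = depth[valid]
    x = (u[valid] - cx) * z / fx
    y = (v[valid] - cy) * z / fy
    return np.stack([x, y, z], axis=1)  # shape (N, 3)

# Toy 2x2 depth map; principal point at the image corner for simplicity
depth = np.array([[1.0, 2.0],
                  [0.0, 4.0]])
pts = depth_to_points(depth, fx=1.0, fy=1.0, cx=0.0, cy=0.0)
```

The invalid pixel is skipped, leaving three points whose x/y scale with depth as expected for a pinhole camera.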
  • the acceleration sensor 1042 is a sensor that detects acceleration, which is a change in speed per unit time, and can detect movement, vibration, impact, and the like.
  • the gyro sensor 1043 is a sensor that detects the angular velocity in the rotational direction, and can capture the state of vertical, horizontal, and diagonal postures.
  • the posture and movement of the head of the first user 10 can be detected using the acceleration sensor 1042 and the gyro sensor 1043 mounted on the HMD 100 .
  • the geomagnetic sensor 1044 is a sensor that detects the magnetic force of the earth, and detects the direction in which the virtual guide display device is facing. It is also possible to detect the movement of the virtual guide display device by using a 3-axis type that detects geomagnetism in the vertical direction as well as the front and back directions and the left and right directions, and by capturing changes in geomagnetism with respect to the movement of the virtual guide display device.
  • the gyro sensor 1043 and the geomagnetic sensor 1044 function as direction sensors that detect the orientation of the HMD 100 in the world coordinate system.
  • the GPS sensor 1045 receives signals from GPS (Global Positioning System) satellites in the sky and detects the current position of the virtual guide display device. This makes it possible to determine the position of the viewpoint as it moves.
  • GPS sensor 1045 is a position sensor that detects a position in the world coordinate system.
  • the position designation processing unit 1011 performs processing for designating a position on the field-of-view image displayed on the display 1034 using the operation input device 1035 .
  • the position of a point or range desired by the user is designated on the field of view image viewed from the first viewpoint 101
  • the desired position on the displayed virtual guide 150 is designated on the field of view image viewed from the second viewpoint 102.
  • a position 152 around the virtual guide may be specified using the virtual guide 150 as a guideline, not only on the virtual guide 150 .
  • the virtual guide generation display processing unit 1012 generates a virtual guide 150 that extends in the same direction as the first viewpoint 101 through the point 141 or range position specified on the field-of-view image at the first viewpoint 101, and performs processing for displaying the virtual guide 150 on the field-of-view image at the second viewpoint 102.
  • the virtual object generation processing unit 1013 generates a virtual object 170 that is an object in a virtual space different from the real space. Note that the virtual object 170 generated by the external server 120 may be imported into the virtual guide display device through wireless communication.
  • the position designation processing unit 1011 designates the position 141 on the view image 140 viewed from the first viewpoint 101, and the virtual guide generation display processing unit 1012 generates a virtual guide 150 that passes through the designated position 141 and extends in the same direction as the first viewpoint 101. The virtual guide generation display processing unit 1012 then displays the generated virtual guide 150 on the visual field image 160 viewed from the second viewpoint 102, and the position designation processing unit 1011 designates a position on the displayed virtual guide 150. The virtual guide 150 is thus generated according to the spatial coordinates at the first viewpoint 101, and the final position in the three-dimensional space can be specified while viewing the virtual guide 150 fixed in space from the second viewpoint 102. That is, by specifying a position in space from two different viewpoints while using the virtual guide 150, an arbitrary position in the three-dimensional space can be determined accurately, easily, and conveniently.
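The two-viewpoint specification above can be sketched geometrically: the first viewpoint defines the guide line, and a pick from the second viewpoint selects the point on that line closest to the second viewing ray. This closest-point computation is a standard technique used here as an illustrative assumption; the disclosure does not prescribe a particular intersection method.

```python
import numpy as np

def point_on_guide(o1, w1, o2, w2):
    """Return the point on the virtual-guide line (o1 + s*w1) closest to
    the picking ray from the second viewpoint (o2 + t*w2).
    Standard closest-point-between-two-lines computation."""
    w1 = w1 / np.linalg.norm(w1)
    w2 = w2 / np.linalg.norm(w2)
    b = w1 @ w2              # cosine between the two directions
    d = o2 - o1
    denom = 1.0 - b * b      # both directions are unit vectors
    if denom < 1e-9:         # (nearly) parallel: fall back to projection
        s = w1 @ d
    else:
        s = (w1 @ d - b * (w2 @ d)) / denom
    return o1 + s * w1

# Guide along the world x-axis; the second viewpoint looks down -z
picked = point_on_guide(np.zeros(3), np.array([1., 0., 0.]),
                        np.array([2., 0., 5.]), np.array([0., 0., -1.]))
```

In this example the second ray passes vertically through x = 2, so the selected guide point is (2, 0, 0).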
  • the field-of-view images 140 and 160 viewed from the first viewpoint 101 and the second viewpoint 102 may be images captured by the camera 104 of the HMD 100 or images captured by the camera 801 of the tablet terminal 110 .
  • the virtual object 170 generated by the virtual object generation processing unit 1013 is arranged at the specified coordinate position in the three-dimensional space, and by displaying the virtual object with spatial coordinates on the display 1034 of the first and second virtual guide display devices, the useful information indicated by the virtual object 170 can be conveyed to the users operating those devices at the appropriate spatial coordinate position. The physical object or virtual object at the specified position can also be pointed out, making it possible to give the user an accurate object indication easily.
  • the HMD 100 photographs the fields of view seen from the first viewpoint 101 and the second viewpoint 102 with the camera 104, and transmits the photographed first and second field-of-view images to the tablet terminal 110. Alternatively, in the tablet terminal 110, the fields of view seen from the first viewpoint 101 and the second viewpoint 102 are photographed by the camera 801, and the photographed first and second field-of-view images are acquired.
  • in the tablet terminal 110, the position designation processing unit 1011 displays the first field-of-view image viewed from the first viewpoint 101, and the virtual guide generation display processing unit 1012 generates a virtual guide 150 extending in the same direction as the first viewpoint 101 through the position 141 specified on that field-of-view image. After that, based on the second field-of-view image transmitted from the HMD 100 or acquired by the tablet terminal 110, the virtual guide generation display processing unit 1012 displays the generated virtual guide 150 on the second field-of-view image viewed from the second viewpoint 102, and the position designation processing unit 1011 designates a position 151 on the displayed virtual guide 150.
  • the second virtual guide display device can accurately and easily determine an arbitrary position in the three-dimensional space with good usability.
  • the virtual object generation processing unit 1013 generates a virtual object 170 at a position 151 on the virtual guide 150 specified by the tablet terminal 110, and transmits the generated virtual object with spatial coordinates to the HMD 100.
  • the HMD 100 receives the virtual object with spatial coordinates transmitted from the tablet terminal 110, and displays the received virtual object with spatial coordinates on the visual field image according to the spatial coordinates.
  • this enables the user holding the tablet terminal 110 to inform the user wearing the HMD 100 of useful information indicated by the virtual object at a suitable spatial coordinate position.
  • the tablet terminal 110 transmits to the HMD 100 spatial coordinate position information indicating a position 151 on the virtual guide 150 specified by the position specification processing unit 1011 .
  • the HMD 100 receives the spatial coordinate position information transmitted from the tablet terminal 110 and displays the position indicated by the received spatial coordinate position information on the visual field image. Therefore, the user holding the tablet terminal 110 can point to the user wearing the HMD 100 a physical object or a virtual object at a designated position in the three-dimensional space.
  • the second embodiment is an embodiment in which a range is specified instead of a point when specifying a position on the view field image in the virtual guide display device.
  • FIGS. 11A to 11G are diagrams for explaining the operation when a range, rather than a point, is specified as the position on the field-of-view images viewed from the first viewpoint 101 and the second viewpoint 102. In FIGS. 11A to 11G, the parts denoted by the same reference numerals as in FIGS. 2 to 7 operate as already described there, so detailed descriptions are omitted.
  • a range 1101 on the field of view image 140 is specified by the position specification processing unit 1011 .
  • the range 1101 is designated, as shown in FIG. 11B, the virtual guide 1102 displayed on the field-of-view image 160 is generated as a three-dimensional object having the range 1101 as a cross section.
  • when the position designation processing unit 1011 acquires a position designation operation on the virtual guide 1102, the position in the three-dimensional space can be specified together with a range. Therefore, when arranging a virtual object with a wide spread, a range corresponding to that spread can be specified, or the range of a physical object or a virtual object can be specified to point at that object.
  • a ring object 1103 framing the vicinity of the designated position may be displayed, so that the specified position can be visually recognized stereoscopically.
  • the range to be selected and specified may also be distorted. That is, not only may the shape of the range specified at the first viewpoint 101 be distorted, but a range with a shape different from that at the first viewpoint 101 may be specified at the second viewpoint 102, so that when combined with the range seen from the first viewpoint 101 side, the result becomes stereoscopically distorted. In such a case, a sphere object 1106 enclosing the selected and specified range may be displayed as the specified range. For example, a sphere object 1106 surrounding the three-dimensionally distorted range 1105 is specified as the range, and displaying the sphere object 1106 including the selected range 1105 as the specified range allows the specification to be performed accurately and conveniently.
  • the object may be a closed three-dimensional area other than the spherical object 1106, and may be an object of other geometric shape such as a rectangular parallelepiped.
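One simple way to realize an enclosing object such as the sphere object 1106 is to fit a sphere around the selected points. The centroid-based fit below is only a hedged approximation (not the minimal enclosing sphere, and not a method specified in this disclosure).

```python
import numpy as np

def bounding_sphere(points):
    """Simple enclosing sphere for a distorted selection range:
    center at the centroid of the points, radius = farthest distance.
    An approximation, not the minimal enclosing sphere."""
    pts = np.asarray(points, dtype=float)
    center = pts.mean(axis=0)
    radius = np.linalg.norm(pts - center, axis=1).max()
    return center, radius

# Four corner points of a flat, square selection range (illustrative)
c, r = bounding_sphere([[0, 0, 0], [2, 0, 0], [0, 2, 0], [2, 2, 0]])
```

Every selected point lies inside the returned sphere by construction, so the sphere can be rendered as the specified range; a rectangular parallelepiped could be fitted analogously from per-axis minima and maxima.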
  • FIG. 12 is a diagram for explaining a position specification operation when arranging a virtual object at a specified position in a three-dimensional space.
  • the parts shown in FIGS. 2 to 7 and denoted by the same reference numerals have the same operations as those already explained in FIGS. 2 to 7, so detailed explanations thereof will be omitted.
  • a case of arranging a virtual object representing a new car for advertisement at a specified position in a three-dimensional space will be described.
  • a virtual guide 1202 is generated and displayed whose cross section has the shape of the new car represented by the virtual object 1201 and which extends in the same direction as the first viewpoint 101.
  • the position designation processing unit 1011 acquires the designation operation.
  • FIGS. 12C and 12D show the field-of-view images, viewed from the second viewpoint 102 and the third viewpoint 103, of a virtual object 1204 arranged at the specified position in the three-dimensional space and having a cross-sectional shape representing the shape of the new car. In this way, a shape object and a virtual guide reflecting the shape and size of the virtual object 1204 to be placed are displayed when the position is specified at the first viewpoint 101 and the second viewpoint 102, providing further convenience.
  • FIGS. 13A and 13B are diagrams showing that, in addition to the field-of-view image 160 viewed from the second viewpoint 102, the field-of-view image 140 viewed from the first viewpoint 101 is displayed on a part of the screen 106 (a screen area smaller than that of the field-of-view image 160). In FIGS. 13A and 13B, the parts denoted by the same reference numerals as in FIGS. 2 to 7 operate as already described there, so detailed descriptions are omitted. FIG. 13A shows a state in which the field-of-view image 140 viewed from the first viewpoint 101 is displayed in a reduced form.
  • the field image 160 viewed from the second viewpoint 102 is displayed on the left side of the screen 106
  • the field of view image 140 viewed from the first viewpoint 101 is displayed on the right side of the screen 106.
  • the vicinity of the position designation portion at the first viewpoint 101 may be enlarged and displayed, or a transition video from the first viewpoint 101 to the second viewpoint 102 may be used. If playback control such as repetition is enabled for the transition video, the state of the transition can be easily grasped, which is effective in improving the accuracy of position designation.
  • the state designated by the second viewpoint 102 may be synthesized with the field-of-view image 140 of the first viewpoint 101 and reflected.
  • in this case, an image of the virtual object as seen from the angle of the first viewpoint 101 is synthesized into the view image 140 from the first viewpoint 101. This makes it possible to confirm how the object looks from the first viewpoint 101 without having to return to the first viewpoint 101.
  • as a method of displaying the virtual guide, if processing is performed so that the virtual guide 150 is not displayed where it falls in the shadow of a real object when viewed from the current viewpoint (so-called occlusion processing), the positional relationship in the depth direction with respect to the real objects becomes easy to understand and position specification becomes easy. However, if the physical objects are densely packed, the virtual guide 150 becomes difficult to see, and when the position of a point is specified from the second viewpoint 102, a line-shaped virtual guide may be thin and hard to see. To address this, the virtual guide 150 may be displayed thicker, blinked, displayed in front of the physical objects that shield it, or these may be combined.
  • the original display of the virtual guide 150 and the one displayed in front of the real object may be combined and displayed alternately, or a combination of the thick display and the original thin display may be used.
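The occlusion processing mentioned above can be sketched as a per-sample depth test: a point on the virtual guide is occluded when the measured scene depth along its ray is smaller than its distance from the viewpoint. The `scene_depth_at` callback is a hypothetical stand-in for data from the ranging sensor, not an interface defined in this disclosure.

```python
import numpy as np

def classify_guide_samples(guide_points, scene_depth_at, viewpoint):
    """For each sampled point on the virtual guide, decide whether a real
    object lies in front of it when seen from `viewpoint`.
    `scene_depth_at(p)` is assumed to return the measured depth of the
    real scene along the ray toward p."""
    visible, occluded = [], []
    for p in guide_points:
        d_guide = np.linalg.norm(p - viewpoint)
        if scene_depth_at(p) < d_guide:   # a real surface is closer
            occluded.append(p)            # hide, or render highlighted
        else:
            visible.append(p)
    return visible, occluded

# Three guide samples along +z; a flat wall assumed at depth 2.5
samples = [np.array([0., 0., 1.]), np.array([0., 0., 2.]),
           np.array([0., 0., 3.])]
visible, occluded = classify_guide_samples(samples, lambda p: 2.5,
                                           np.zeros(3))
```

The occluded samples are the ones a renderer would either suppress or deliberately draw thicker, blinking, or in front of the shielding object.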
  • the field of view image photographed by the other camera may be used as the view image 160 at the second viewpoint 102 to specify the spatial coordinate position.
  • Other cameras include, for example, surveillance cameras, cameras used by other people on site, and the like. Needless to say, the field-of-view image must be associated with the spatial coordinate data. Also, satellite photographs and map data may be used as long as they have spatial coordinate data.
  • a virtual space in which physical objects are virtually arranged can be created from the three-dimensional data created using the distance measuring sensor 1041, and a virtual object can be arranged by designating three-dimensional coordinates in that virtual space. The second viewpoint 102 may then be obtained by rotating the virtual space. That is, by rotating the three-dimensional data of the objects in the virtual space, the position to be specified can be designated from a different viewpoint without actually moving from the first viewpoint 101 to the second viewpoint 102.
  • a mark may be used to identify a candidate for specification. For example, as shown in FIG. 14A, when a plurality of physical objects 1401, 1402, 1403, 1404, and 1405 exist on a virtual guide 1400, tags 1411, 1412, 1413, 1414, and 1415 are displayed for the virtual guide 1400 and the physical objects 1401 to 1405 as shown in FIG. 14B, and by selecting a tag, the tagged physical object is specified. As a result, even when a plurality of physical objects are densely packed on the virtual guide 1400 and it is difficult to select the designated object, the position can be designated by a tag selection operation, and the desired physical object can be pointed at accurately and easily.
  • the coordinate system of the target space in which the virtual guide is set is assumed to be the world coordinate system.
  • the target space may be a real space or a virtual space.
  • the viewpoint may or may not be the same as the position of the actual virtual guide display. In any case, it is assumed that an image seen from a viewpoint used as a reference for the display is displayed on the virtual guide display device.
  • the orientation of the local coordinate system with respect to the world coordinate system is represented by the rotation matrix R.
  • the coordinate axis direction rotated by R becomes the coordinate axis direction of the local coordinate system.
  • the origin of the local coordinate system, that is, the viewpoint is represented by O, which is the position coordinate in the world coordinate system.
  • U is the position coordinate in the local coordinate system corresponding to the position coordinate X in the world coordinate system, and the relationship is expressed by the following equation (1).
  • U = R⁻¹(X − O)   (1)
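Equation (1) can be checked numerically: for a rotation matrix, the inverse equals the transpose, so U = Rᵀ(X − O). The 90° example rotation and the coordinates below are illustrative only.

```python
import numpy as np

# Equation (1): U = R^{-1} (X - O).  For a rotation matrix, R^{-1} = R^T.
def world_to_local(X, R, O):
    """Transform world coordinate X into the local frame with
    orientation R and origin (viewpoint) O."""
    return R.T @ (X - O)

# 90-degree rotation about the z-axis: local x-axis points along world y
Rz = np.array([[0., -1., 0.],
               [1.,  0., 0.],
               [0.,  0., 1.]])
U = world_to_local(np.array([1., 2., 3.]), Rz, np.array([1., 1., 1.]))
```

Here X − O = (0, 1, 2); the world y-offset maps onto the local x-axis, giving U = (1, 0, 2).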
  • the rotation matrix R and the origin O of the local coordinate system are updated as the viewpoint moves after initial setting.
  • first, consider the straight line in the world coordinate system that serves as a reference for drawing the virtual guide (hereinafter simply referred to as the reference line).
  • the direction of the virtual guide viewed from the first viewpoint in FIG. 2 is determined by designating point 141 .
  • the direction is determined as the direction vector N1 in the local coordinate system (hereinafter referred to as the first local coordinate system) corresponding to the first viewpoint.
  • the rotation matrix representing the orientation of the first local coordinate system at this time is R1, and the direction vector W1 of the reference line in the world coordinate system is given by the following equation (2).
  • W1 = R1N1   (2)
  • the reference line in the world coordinate system can be expressed as a straight line passing through the point O1 and extending in the same direction as the direction vector W1.
  • the reference line of the virtual guide 150 in the local coordinate system corresponding to the second viewpoint 102 (hereinafter referred to as the second local coordinate system) can be configured.
  • R2 be the rotation matrix representing the orientation of the second local coordinate system in the world coordinate system
  • O2 be the position of the second viewpoint 102 in the world coordinate system.
  • a point X on the reference line is expressed by the following equation (3) using a real parameter k, and substituting equation (3) into equation (1) gives the corresponding point U2 in the second local coordinate system as equation (4).
  • X = O1 + kW1   (3)
  • U2 = R2⁻¹(O1 − O2 + kW1)   (4)
  • the range of the real number parameter k is the range required for drawing on the display screen on which the virtual guide is displayed. Exceptionally, when the direction of the virtual guide is perpendicular to the display screen, only the single point R2⁻¹(O1 − O2) of the virtual guide is displayed.
  • in the second virtual guide display device (tablet terminal 110) as well, the reference line can be constructed in the same way using the world-coordinate position of the viewpoint used as the display reference and the orientation of its local coordinate system with respect to the world coordinate system.
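The reference-line construction of equations (2) and (4) can be sampled numerically for drawing, as sketched below. The identity orientations, positions, and parameter values in the example are illustrative assumptions.

```python
import numpy as np

def guide_in_second_view(R1, O1, N1, R2, O2, ks):
    """Sample the reference line in the second local coordinate system.
    Eq. (2): W1 = R1 N1;  eq. (3): X(k) = O1 + k W1;
    eq. (4): U2(k) = R2^{-1} (O1 - O2 + k W1), with R2^{-1} = R2^T."""
    W1 = R1 @ N1                                     # equation (2)
    return [R2.T @ (O1 - O2 + k * W1) for k in ks]   # equation (4)

# Both viewpoints axis-aligned; guide points along world z from the origin,
# second viewpoint offset by +1 along world x
pts2 = guide_in_second_view(np.eye(3), np.zeros(3), np.array([0., 0., 1.]),
                            np.eye(3), np.array([1., 0., 0.]),
                            ks=[0.0, 1.0])
```

With identical orientations the transform reduces to a translation, so the sampled guide points sit at x = −1 in the second local frame, one unit apart along z.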
  • the present invention is not limited to the above-described embodiments, and includes various modifications.
  • the above-described embodiments have been described in detail in order to explain the present invention in an easy-to-understand manner, and are not necessarily limited to those having all the described configurations.
  • part of the configuration of one embodiment can be replaced with the configuration of another embodiment, and the configuration of another embodiment can be added to the configuration of one embodiment.
  • part or all of the above configurations, functions, processing units, processing means, etc. may be realized by hardware, for example, by designing them as integrated circuits.
  • each of the above configurations, functions, and the like may be realized by software by the processor 1010 interpreting and executing the program 1021 for realizing each function.
  • Information such as programs, tables, and files that implement each function may be stored in the memory 1020, in a recording device such as a hard disk or SSD (Solid State Drive), on a recording medium such as an IC card, SD card, or DVD, or in a device on a communication network.
  • control lines and information lines indicate what is considered necessary for explanation, and not all control lines and information lines are necessarily indicated on the product. In practice, it may be considered that almost all configurations are interconnected.
  • 10: first user 20: second user 100: HMD 101: first viewpoint 102: second viewpoint 103: third viewpoint 104, 801: camera 105: arrow 106: screen 110: tablet terminal 120: server 140, 160, 180: view images 150, 151a, 1102, 1104, 1202, 1400: virtual guides 170, 194, 1201, 1204: virtual objects 191, 192: designated positions 193: tree 1010: processor 1011: position designation processing unit 1012: virtual guide generation display processing unit 1013: virtual object generation processing unit 1020: memory 1021: program 1023: application program 1024: information data 1025: coordinate position information 1026: virtual guide information 1027: virtual object information 1028: view image information 1032: left-eye line-of-sight detection sensor 1033: right-eye line-of-sight detection sensor 1034: display 1035: operation input device 1036: microphone 1037: speaker 1038: vibrator 1039: communication I/F 1040: sensor group 1041: ranging sensor 1042: acceleration sensor


Abstract

A virtual guide display device uses sensor output from a direction sensor that detects the direction the virtual guide display device faces in the world coordinate system and from a position sensor that detects the position of the virtual guide display device in the world coordinate system as a basis for generating a virtual guide extending in the same direction as a first viewpoint from the virtual guide display device through a position specified in a first field-of-view image viewed from the first viewpoint, superimposes the virtual guide on a second field-of-view image viewed from a second viewpoint different from the first viewpoint and displays the result on a display, receives an operation specifying a position on the virtual guide displayed on the display, converts the specified position into a position in the world coordinate system, and outputs the result.
PCT/JP2021/004802 2021-02-09 2021-02-09 Virtual guide display device, virtual guide display system, and virtual guide display method WO2022172335A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/JP2021/004802 WO2022172335A1 (fr) 2021-02-09 2021-02-09 Virtual guide display device, virtual guide display system, and virtual guide display method


Publications (1)

Publication Number Publication Date
WO2022172335A1 true WO2022172335A1 (fr) 2022-08-18

Family

ID=82838473

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2021/004802 WO2022172335A1 (fr) 2021-02-09 2021-02-09 Virtual guide display device, virtual guide display system, and virtual guide display method

Country Status (1)

Country Link
WO (1) WO2022172335A1 (fr)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH08212390A (ja) * 1995-02-08 1996-08-20 Canon Inc Image processing method and apparatus
JP2015207219A (ja) * 2014-04-22 2015-11-19 Fujitsu Ltd Display device, position specifying program, and position specifying method
JP2016184295A (ja) * 2015-03-26 2016-10-20 Fujitsu Ltd Display control method, display control program, and information processing device
JP2018049629A (ja) * 2017-10-10 2018-03-29 Colopl Inc Method and device for assisting input in a virtual space, and program causing a computer to execute the method
US20180300952A1 * 2017-04-17 2018-10-18 Microsoft Technology Licensing, Llc Multi-Step Placement of Virtual Objects


Similar Documents

Publication Publication Date Title
JP7268692B2 (ja) Information processing device, control method, and program
US9401050B2 (en) Recalibration of a flexible mixed reality device
JP6780642B2 (ja) Information processing device, information processing method, and program
US20180018792A1 (en) Method and system for representing and interacting with augmented reality content
US7830334B2 (en) Image displaying method and apparatus
JP5843340B2 (ja) Three-dimensional environment sharing system and three-dimensional environment sharing method
US20070035563A1 (en) Augmented reality spatial interaction and navigational system
JP6618681B2 (ja) Information processing device, control method and program therefor, and information processing system
US20140198017A1 (en) Wearable Behavior-Based Vision System
JPWO2014016987A1 (ja) Three-dimensional user interface device and three-dimensional operation method
WO2022006116A1 (fr) Augmented reality eyewear with speech bubbles and translation
TWI453462B (zh) Virtual telescope system of intelligent electronic device and method thereof
JP2012108842A (ja) Display system, display processing device, display method, and display program
KR20120017783A (ko) Method and apparatus for displaying location information in augmented reality
JP2016122392A (ja) Information processing device, information processing system, control method therefor, and program
JP2005174021A (ja) Information presentation method and device
EP4172681A1 (fr) Augmented reality eyewear with 3D costumes
CN113498531A (zh) Head-mounted information processing device and head-mounted display system
JP2006252468A (ja) Image processing method and image processing device
Schmalstieg et al. Augmented reality as a medium for cartography
JP2014071277A (ja) Head-mounted display, method for operating the same, and program
KR20190048810A (ko) Apparatus and method for providing augmented reality content
WO2022172335A1 (fr) Virtual guide display device, virtual guide display system, and virtual guide display method
WO2022176450A1 (fr) Information processing device, information processing method, and program
KR101729923B1 (ko) Method for implementing stereoscopic images using a screen image and an augmented reality image, and server and system for executing the same

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
Ref document number: 21925589
Country of ref document: EP
Kind code of ref document: A1

NENP Non-entry into the national phase
Ref country code: DE

122 Ep: pct application non-entry in european phase
Ref document number: 21925589
Country of ref document: EP
Kind code of ref document: A1

NENP Non-entry into the national phase
Ref country code: JP