WO2022201433A1 - Wearable terminal device, program, and display method - Google Patents
Wearable terminal device, program, and display method
- Publication number
- WO2022201433A1 (PCT/JP2021/012570)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- virtual image
- terminal device
- area
- display
- wearable terminal
- Prior art date
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
Definitions
- the present disclosure relates to wearable terminal devices, programs, and display methods.
- Technologies known in this field include VR (virtual reality), MR (mixed reality), and AR (augmented reality).
- A wearable terminal device has a display unit that covers the user's field of vision when worn by the user. By displaying a virtual image and/or a virtual space on this display unit according to the user's position and orientation, a visual effect as if they exist is realized (for example, US Patent Application Publication No. 2019/0087021 and US Patent Application Publication No. 2019/0340822).
- MR is a technology that allows the user to experience a mixed reality in which the real space and the virtual image are fused by displaying a virtual image that appears to exist at a predetermined position in the real space while allowing the user to view the real space.
- VR is a technology that makes the user feel as if he/she is in a virtual space by having the user visually recognize a virtual space instead of the real space that is viewed in MR.
- A virtual image displayed in VR and MR has a predetermined display position in the space where the user is located, and is displayed on the display unit and viewed by the user when that display position is within the user's viewing area.
- a wearable terminal device of the present disclosure is a wearable terminal device that is worn by a user and includes at least one processor.
- The at least one processor causes a display unit to display a virtual image that is positioned in space and has a first surface and a second surface opposite the first surface.
- the virtual image has a first area and a strip-shaped second area smaller than the first area on the first surface, and a third area larger than the second area on the second surface.
- the at least one processor changes the display mode of the virtual image to a predetermined display mode in response to a predetermined operation on the third area.
- The program of the present disclosure causes a computer provided in a wearable terminal device worn and used by a user to execute a process of displaying, on a display unit, a virtual image that is positioned in space and has a first surface and a second surface opposite to the first surface.
- the virtual image has a first area and a strip-shaped second area smaller than the first area on the first surface, and a third area larger than the second area on the second surface.
- the process of displaying the virtual image on the display unit includes a process of changing the display mode of the virtual image to a predetermined display mode according to a predetermined operation on the third area.
- the display method of the present disclosure is a display method in a wearable terminal device worn by a user.
- a virtual image positioned in space and having a first surface and a second surface opposite to the first surface is displayed on the display unit.
- the virtual image has a first area and a strip-shaped second area smaller than the first area on the first surface, and a third area larger than the second area on the second surface.
- the display mode of the virtual image is changed to a predetermined display mode according to a predetermined operation on the third area.
- FIG. 1 is a schematic perspective view showing the configuration of a wearable terminal device according to a first embodiment.
- A diagram showing an example of the visible area visually recognized by a user wearing the wearable terminal device, together with a virtual image; a diagram explaining the visible area.
- FIG. 5 is a flowchart showing the control procedure of the virtual image display processing; diagrams explaining the operation for displaying the image of the front surface on the back surface of a virtual image.
- A schematic diagram showing the configuration of a display system according to a second embodiment.
- A block diagram showing the main functional configuration of the information processing device.
- the wearable terminal device 10 includes a main body 10a, a visor 141 (display member) attached to the main body 10a, and the like.
- the body part 10a is an annular member whose circumference is adjustable.
- Various devices such as a depth sensor 153 and a camera 154 are built inside the main body 10a.
- When the main body 10a is worn on the user's head, the user's field of vision is covered by the visor 141.
- The visor 141 has optical transparency. A user can visually recognize the real space through the visor 141.
- An image such as a virtual image is projected from a laser scanner 142 (see FIG. 4) incorporated in the main body 10a and displayed on the display surface of the visor 141 facing the user's eyes.
- a user visually recognizes the virtual image by the reflected light from the display surface.
- a visual effect as if the virtual image exists in the real space is obtained.
- The user visually recognizes the virtual image 30 at a predetermined position in the space 40, facing a predetermined direction.
- The space 40 is the real space that the user visually recognizes through the visor 141. Since the virtual image 30 is projected onto the visor 141, which has optical transparency, it is visually recognized as a translucent image superimposed on the real space.
- the virtual image 30 is assumed to be a planar window screen.
- the virtual image 30 has a front surface 30A as a first surface and a back surface 30B as a second surface.
- The front surface 30A has a first region R1 and a strip-shaped second region R2 smaller than the first region R1. The rear surface 30B has a third region R3 (a region corresponding to the first region R1) that is larger than the second region R2, and a fourth region R4 corresponding to the second region R2.
- necessary information is displayed on the front surface 30A, and no information is normally displayed on the back surface 30B.
- the second area R2 is an area where the function bar 31 is displayed.
- the function bar 31 is a so-called title bar, but may be, for example, a toolbar, menu bar, scroll bar, language bar, task bar, status bar, or the like.
- the second area R2 is an area where a title (see FIG. 8) related to the display contents of the first area R1 is displayed, and icons such as the window shape change button 32 and the close button 33 are displayed.
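The window layout described above lends itself to a simple hit-testing model. The following is a minimal sketch (not part of the publication; the class name, dimensions, and coordinate convention are assumptions) of mapping a point on either surface of such a window to regions R1/R2 or R3/R4:

```python
from dataclasses import dataclass

@dataclass
class VirtualWindow:
    """Planar window-like virtual image (a sketch; field names are assumed)."""
    width: float        # window width in metres
    height: float       # window height in metres
    bar_height: float   # height of the strip-shaped function bar (region R2/R4)

    def front_region(self, u: float, v: float) -> str:
        """Map a point on the front surface 30A (u, v in window coordinates,
        origin at bottom-left) to the main region R1 or the function-bar strip R2."""
        return "R2" if v >= self.height - self.bar_height else "R1"

    def back_region(self, u: float, v: float) -> str:
        """Map a point on the back surface 30B to R3 (large, mirrors R1)
        or R4 (small, mirrors R2)."""
        return "R4" if v >= self.height - self.bar_height else "R3"

# usage: a 0.8 m x 0.5 m window with a 0.05 m title bar
w = VirtualWindow(0.8, 0.5, 0.05)
assert w.front_region(0.4, 0.48) == "R2"   # inside the function bar
assert w.back_region(0.4, 0.2) == "R3"     # large back region
```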
- the wearable terminal device 10 detects the user's visible area 41 based on the position and orientation of the user in the space 40 (in other words, the position and orientation of the wearable terminal device 10).
- The visible area 41 is the area in the space 40 located in front of the user U wearing the wearable terminal device 10.
- the visual recognition area 41 is an area within a predetermined angular range from the front of the user U in the horizontal direction and the vertical direction.
- The shape of the visible region 41 is defined such that the cross-section obtained by cutting the solid corresponding to the visible region 41 along a plane perpendicular to the front direction of the user U is rectangular. The shape of the visible region 41 may also be determined so that this cross-section is other than rectangular (for example, circular or elliptical).
- The shape of the visible region 41 (for example, the angular ranges in the left-right and up-down directions from the front) can be specified, for example, by the following method.
- the field of view is adjusted (hereinafter referred to as "calibration") according to a predetermined procedure at a predetermined timing such as the initial start-up.
- a range that can be visually recognized by the user is specified, and the virtual image 30 is subsequently displayed within that range.
- The shape of the visible range specified by this calibration can be used as the shape of the visible region 41.
- the calibration is not limited to being performed according to the above-described predetermined procedure, and the calibration may be performed automatically during normal operation of the wearable terminal device 10 .
- During such automatic calibration, a range in which a display is made but to which the user shows no reaction may be regarded as outside the user's field of view, and the field of view (and the shape of the visible area 41) may be adjusted accordingly.
- Conversely, a displayed range to which the user does show a reaction may be regarded as within the user's field of view and used to adjust the field of view (and the shape of the visible area 41).
- the shape of the visible region 41 may be predetermined and fixed at the time of shipment, etc., without being based on the adjustment result of the field of view.
- the shape of the visual recognition area 41 may be determined within the maximum displayable range in terms of the optical design of the display unit 14 .
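A visible-area test of the kind described above can be sketched as follows, assuming the visible region 41 is approximated by a rectangular viewing cone with fixed half-angles; the function name, coordinate conventions, and angle values are illustrative assumptions, not details from the publication:

```python
import math

def in_visible_area(user_pos, yaw, pitch, point,
                    half_h=math.radians(45), half_v=math.radians(30)):
    """Sketch of the visible-area test: True if `point` lies inside a
    rectangular viewing cone around the user's front direction.
    Assumed conventions: x/y ground plane, z up, yaw about z, pitch about y;
    the half-angles stand in for the calibrated field-of-view shape."""
    dx, dy, dz = (point[i] - user_pos[i] for i in range(3))
    # world -> head frame: undo yaw, then undo pitch
    fx = math.cos(yaw) * dx + math.sin(yaw) * dy
    fy = -math.sin(yaw) * dx + math.cos(yaw) * dy
    gx = math.cos(pitch) * fx + math.sin(pitch) * dz
    gz = -math.sin(pitch) * fx + math.cos(pitch) * dz
    if gx <= 0:                       # behind the user
        return False
    return (abs(math.atan2(fy, gx)) <= half_h and
            abs(math.atan2(gz, gx)) <= half_v)

# a virtual image 2 m straight ahead of a user at the origin is visible
print(in_visible_area((0, 0, 0), 0.0, 0.0, (2, 0, 0)))   # True
```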
- the virtual image 30 is generated in a state in which the display position and orientation in the space 40 are determined according to the user's predetermined operation.
- The wearable terminal device 10 causes the visor 141 to project and display those of the generated virtual images 30 whose display positions are determined to be inside the visible region 41.
- the visible area 41 is indicated by a dashed line.
- The display position and orientation of the virtual image 30 on the visor 141 are updated in real time according to changes in the user's visible area 41. That is, the display position and orientation of the virtual image 30 change in response to changes in the visible area 41 so that the user perceives the virtual image 30 as being positioned in the space 40 at the set position and orientation. For example, when the user moves from the front side to the back side of the virtual image 30, the shape (angle) of the displayed virtual image 30 gradually changes according to this movement. Further, when the user goes around to the back side of the virtual image 30 and then faces its direction, the back surface 30B of the virtual image 30 is displayed so that the back surface 30B can be visually recognized.
- A virtual image 30 whose display position has moved outside the visible area 41 is no longer displayed, and when there is a virtual image 30 whose display position has come inside the visible area 41, that virtual image 30 is newly displayed.
- When the user holds his/her hand (or finger) forward, the direction in which the hand is extended is detected by the wearable terminal device 10, and a virtual line 51 extending in that direction and a pointer 52 are displayed on the display surface of the visor 141 and visually recognized by the user. The pointer 52 is displayed at the intersection of the virtual line 51 and the virtual image 30. If the virtual line 51 does not intersect the virtual image 30, the pointer 52 may be displayed at the intersection of the virtual line 51 and a wall surface of the space 40 or the like.
- Alternatively, the display of the virtual line 51 may be omitted, and the pointer 52 may be displayed directly at a position corresponding to the position of the user's fingertip (see FIG. 7).
- the direction of the virtual line 51 and the position of the pointer 52 can be adjusted by changing the direction in which the user extends his hand.
- When the user performs a predetermined gesture while the pointer 52 is positioned on a predetermined operation target included in the virtual image 30 (for example, the function bar 31, the window shape change button 32, or the close button 33), the gesture is detected by the wearable terminal device 10, and the corresponding operation can be performed on that operation target.
- For example, the virtual image 30 can be closed (deleted) by performing a gesture of selecting the operation target (for example, a gesture of pinching the fingertips) while the pointer 52 is aligned with the close button 33.
- The virtual image 30 can also be moved in the depth direction and in the left-right direction. Operations on the virtual image 30 are not limited to these.
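The placement of the pointer 52 at the intersection of the virtual line 51 and the virtual image 30 amounts to a ray-plane intersection. A hedged sketch (names and conventions assumed, not taken from the publication):

```python
import numpy as np

def pointer_position(hand_pos, hand_dir, plane_point, plane_normal):
    """Place pointer 52 where the virtual line 51 (a ray from the hand along
    hand_dir) meets the plane containing virtual image 30. Returns the hit
    point, or None if the ray is parallel to or points away from the plane.
    A sketch; vectors are 3-element arrays, hand_dir need not be unit length."""
    d = np.asarray(hand_dir, dtype=float)
    n = np.asarray(plane_normal, dtype=float)
    denom = d @ n
    if abs(denom) < 1e-9:            # ray parallel to the image plane
        return None
    t = ((np.asarray(plane_point) - np.asarray(hand_pos)) @ n) / denom
    if t < 0:                        # intersection behind the hand
        return None
    return np.asarray(hand_pos) + t * d

# usage: hand at origin pointing +x, image plane at x = 2 facing the user
print(pointer_position((0, 0, 0), (1, 0, 0), (2, 0, 0), (-1, 0, 0)))  # [2. 0. 0.]
```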
- In this way, the wearable terminal device 10 of the present embodiment realizes a visual effect as if the virtual image 30 exists in the real space, receives the user's operation on the virtual image 30, and reflects it in the display of the virtual image 30. That is, the wearable terminal device 10 of this embodiment provides MR.
- The wearable terminal device 10 includes a CPU 11 (Central Processing Unit), a RAM 12 (Random Access Memory), a storage unit 13, a display unit 14, a sensor unit 15, a communication unit 16, and the like, and these units are connected by a bus 17. Each unit shown in FIG. 4, except for the visor 141 of the display unit 14, is built into the main body 10a and operates with power supplied from a battery that is also built into the main body 10a.
- The CPU 11 is a processor that performs various arithmetic processing and controls the operation of each unit of the wearable terminal device 10.
- The CPU 11 performs various control operations by reading and executing a program 131 stored in the storage unit 13.
- the CPU 11 executes, for example, visible area detection processing and display control processing.
- The visible area detection process is a process of detecting the user's visible area 41 in the space 40.
- The display control process is a process of displaying, on the display unit 14, the virtual images 30 positioned inside the visible area 41 among the virtual images 30 positioned in the space 40.
- Although a single CPU 11 is shown in FIG. 4, the device is not limited to this. Two or more processors such as CPUs may be provided, and the processing executed by the CPU 11 of this embodiment may be shared among these processors.
- the RAM 12 provides working memory space to the CPU 11 and stores temporary data.
- The storage unit 13 is a non-transitory recording medium readable by the CPU 11 as a computer.
- the storage unit 13 stores a program 131 executed by the CPU 11, various setting data, and the like.
- the program 131 is stored in the storage unit 13 in the form of computer-readable program code.
- As the storage unit 13, a non-volatile storage device such as an SSD (Solid State Drive) with flash memory is used.
- the data stored in the storage unit 13 includes virtual image data 132 related to the virtual image 30 and the like.
- the virtual image data 132 includes data related to the display content of the virtual image 30 (for example, image data), display position data, orientation data, and the like.
- The display unit 14 has the visor 141, the laser scanner 142, and an optical system that guides the light output from the laser scanner 142 to the display surface of the visor 141.
- the laser scanner 142 irradiates the optical system with pulsed laser light whose on/off is controlled for each pixel according to a control signal from the CPU 11 while scanning in a predetermined direction.
- the laser light incident on the optical system forms a display screen made up of a two-dimensional pixel matrix on the display surface of the visor 141 .
- The scanning method of the laser scanner 142 is not particularly limited; for example, a method of scanning the laser light by operating a mirror with MEMS (Micro Electro Mechanical Systems) can be used.
- the laser scanner 142 has, for example, three light emitting units that emit RGB color laser light.
- the display unit 14 can perform color display by projecting light from these light emitting units onto the visor 141 .
- The sensor unit 15 includes an acceleration sensor 151, an angular velocity sensor 152, a depth sensor 153, a camera 154, an eye tracker 155, and the like. Note that the sensor unit 15 may further include sensors not shown in FIG. 4.
- The acceleration sensor 151 detects acceleration and outputs the detection result to the CPU 11. From the detection results of the acceleration sensor 151, the translational motion of the wearable terminal device 10 in three orthogonal axis directions can be detected.
- The angular velocity sensor 152 detects angular velocity and outputs the detection result to the CPU 11. The rotational motion of the wearable terminal device 10 can be detected from the detection results of the angular velocity sensor 152.
- the depth sensor 153 is an infrared camera that detects the distance to the subject by the ToF (Time of Flight) method, and outputs the distance detection result to the CPU 11.
- The depth sensor 153 is provided on the front surface of the main body 10a so as to capture the visible area 41. By repeatedly performing measurements with the depth sensor 153 each time the user's position and orientation change in the space 40 and synthesizing the results, three-dimensional mapping of the entire space 40 can be performed (that is, its three-dimensional structure can be obtained).
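The synthesis of repeated depth measurements into a map of the space 40 could look roughly like the following sketch, which back-projects a depth image through an assumed pinhole model and transforms the points by the device pose; the intrinsics and the naive point-cloud concatenation are illustrative simplifications, not details from the publication:

```python
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a ToF depth image (metres) into camera-frame 3D points
    using a pinhole model. The intrinsics fx, fy, cx, cy are assumptions;
    the actual sensor model of depth sensor 153 is not given."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]        # drop pixels with no return

def accumulate_map(global_map, pts_cam, R, t):
    """Fuse one scan into the running three-dimensional map of space 40:
    transform camera-frame points by the device pose (R, t) and append.
    A real system would use a voxel grid or TSDF; concatenation is the sketch."""
    return np.vstack([global_map, pts_cam @ R.T + t])

# usage: a flat wall 2 m away seen by a 4x4 depth image
d = np.full((4, 4), 2.0)
m = accumulate_map(np.empty((0, 3)), depth_to_points(d, 2.0, 2.0, 2.0, 2.0),
                   np.eye(3), np.zeros(3))
print(m.shape)   # (16, 3)
```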
- the camera 154 captures an image of the space 40 using a group of RGB imaging elements, acquires color image data as the image capturing result, and outputs the color image data to the CPU 11 .
- The camera 154 is provided on the front surface of the main body 10a so as to photograph the visible area 41.
- The output image from the camera 154 is used to detect the position and orientation of the wearable terminal device 10, and is also transmitted from the communication unit 16 to an external device and used to display the visible area 41 of the user of the wearable terminal device 10 on that external device.
- The eye tracker 155 detects the user's line of sight and outputs the detection result to the CPU 11.
- The line-of-sight detection method is not particularly limited; for example, a method of identifying the object that the user is visually recognizing based on images of the user's eyes can be used.
- a part of the configuration of the eye tracker 155 may be provided on the periphery of the visor 141 or the like.
- the communication unit 16 is a communication module having an antenna, a modulation/demodulation circuit, a signal processing circuit, and the like.
- the communication unit 16 performs data transmission/reception by wireless communication with an external device according to a predetermined communication protocol.
- the CPU 11 performs the following control operations.
- the CPU 11 performs three-dimensional mapping of the space 40 based on the distance data from the depth sensor 153 to the subject.
- the CPU 11 repeats this three-dimensional mapping each time the position and orientation of the user changes, and updates the results each time.
- The CPU 11 performs three-dimensional mapping for each continuous space 40. Therefore, when the user moves between a plurality of rooms partitioned by walls or the like, the CPU 11 recognizes each room as one space 40 and performs three-dimensional mapping separately for each room.
- The CPU 11 detects the user's visible area 41 in the space 40. Specifically, the CPU 11 identifies the position and orientation of the user (that is, of the wearable terminal device 10) in the space 40 based on the detection results from the acceleration sensor 151, the angular velocity sensor 152, the depth sensor 153, the camera 154, and the eye tracker 155, and on the accumulated three-dimensional mapping results. The visible area 41 is then detected (identified) based on the identified position and orientation and the predetermined shape of the visible area 41. In addition, the CPU 11 continuously detects the user's position and orientation in real time and updates the visible area 41 in conjunction with changes in them. Note that the visible area 41 may be detected using the detection results of only some of the acceleration sensor 151, the angular velocity sensor 152, the depth sensor 153, the camera 154, and the eye tracker 155.
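Between mapping updates, the inertial part of this pose tracking can be illustrated by a simple dead-reckoning step. This planar sketch (all names and the integration scheme are assumptions) integrates the angular velocity into yaw and the body-frame acceleration into velocity and position; the real device would fuse this with the camera- and depth-based mapping to bound drift:

```python
import numpy as np

def update_pose(pos, vel, yaw, accel_body, yaw_rate, dt):
    """One dead-reckoning step from the IMU (acceleration sensor 151 and
    angular velocity sensor 152): integrate yaw from the angular rate and
    position from body-frame acceleration. A planar, drift-prone sketch."""
    yaw_new = yaw + yaw_rate * dt
    c, s = np.cos(yaw_new), np.sin(yaw_new)
    accel_world = np.array([c * accel_body[0] - s * accel_body[1],
                            s * accel_body[0] + c * accel_body[1]])
    vel_new = vel + accel_world * dt
    pos_new = pos + vel_new * dt
    return pos_new, vel_new, yaw_new

# walking forward at constant acceleration for one 0.1 s step
p, v, y = update_pose(np.zeros(2), np.zeros(2), 0.0,
                      np.array([0.5, 0.0]), 0.0, 0.1)
print(p, v, y)   # [0.005 0.   ] [0.05 0.  ] 0.0
```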
- The CPU 11 generates virtual image data 132 related to a virtual image 30 according to the user's operation. That is, when the CPU 11 detects a predetermined operation (gesture) instructing generation of a virtual image 30, it identifies the display content (for example, image data), display position, and orientation of the virtual image, and generates virtual image data 132 including data representing these.
- The CPU 11 causes the display unit 14 to display the virtual images 30 whose display positions are determined to be inside the visible area 41. Specifically, the CPU 11 identifies those virtual images 30 based on the display position information included in the virtual image data 132, and generates image data of the display screen to be displayed on the display unit 14 based on the positional relationship between the visible area 41 at that time and the display positions of the identified virtual images 30.
- the CPU 11 causes the laser scanner 142 to perform a scanning operation based on this image data, and forms a display screen including the specified virtual image 30 on the display surface of the visor 141 .
- In this way, the CPU 11 displays the virtual image 30 on the display surface of the visor 141 such that the virtual image 30 is visually recognized in the space 40 viewed through the visor 141.
- The CPU 11 updates the display contents of the display unit 14 in real time in accordance with the movement of the user (changes in the visible area 41). If the device is set to retain the virtual image data 132 even when the wearable terminal device 10 is powered off, the existing virtual image data 132 is read the next time the wearable terminal device 10 is activated, and any virtual image 30 positioned inside the visible area 41 is displayed on the display unit 14.
- the virtual image data 132 may be generated based on the instruction data acquired from the external device via the communication unit 16, and the virtual image 30 may be displayed based on the virtual image data 132.
- Alternatively, the virtual image data 132 itself may be acquired from an external device via the communication unit 16, and the virtual image 30 may be displayed based on that virtual image data 132.
- For example, the image from the camera 154 of the wearable terminal device 10 may be displayed on an external device operated by a remote instructor, an instruction to display a virtual image 30 may be received from that external device, and the instructed virtual image 30 may be displayed on the display unit 14 of the wearable terminal device 10.
- In this way, a virtual image 30 indicating work content can be displayed near a work target, and the remote instructor can instruct the user of the wearable terminal device 10 to perform the work.
- The CPU 11 detects the position and orientation of the user's hand (and/or fingers) based on the images captured by the depth sensor 153 and the camera 154, and displays the virtual line 51 extending in the detected direction and the pointer 52 on the display unit 14.
- The CPU 11 also detects gestures of the user's hand (and/or fingers) based on the images captured by the depth sensor 153 and the camera 154, and executes processing according to the content of the detected gesture and the position of the pointer 52 at that time.
- A technique is disclosed in which the front surface 30A and the back surface 30B of the virtual image 30 are flipped by performing a predetermined gesture while the pointer 52 is aligned with the fourth region R4 on the back surface 30B of the virtual image 30 (the region corresponding to the second region R2 on the front surface 30A) (see FIG. 6).
- However, the fourth region R4 is small, making it difficult to align the pointer 52 with the fourth region R4.
- In particular, when the virtual image 30 is located deep within the visible area 41, the displayed virtual image 30 becomes small, so the above problem becomes conspicuous.
- the CPU 11 of the wearable terminal device 10 of the present embodiment displays the image of the front surface 30A on the back surface 30B of the virtual image 30 in response to a predetermined operation on the third region R3 of the back surface 30B of the virtual image 30.
- This facilitates the operation for displaying the image of the front surface 30A on the back surface 30B of the virtual image 30.
- An example of the operation for displaying the image of the front surface 30A on the back surface 30B of the virtual image 30 will be described below with reference to FIGS. 5 to 15.
- The virtual image display processing of FIG. 5 includes at least the feature of displaying the image of the front surface 30A on the back surface 30B of the virtual image 30 when a predetermined operation is performed on the third region R3 of the back surface 30B of the virtual image 30.
- the CPU 11 detects the visible area 41 based on the user's position and orientation (step S101).
- the CPU 11 determines whether or not there is a virtual image 30 whose display position is determined inside the detected visible area 41 (step S102).
- When it is determined in step S102 that there is no virtual image 30 whose display position is determined to be inside the detected visible area 41 (step S102; NO), the CPU 11 advances the process to step S109.
- When it is determined in step S102 that there is a virtual image 30 whose display position is determined to be inside the detected visible area 41 (step S102; YES), the CPU 11 displays that virtual image 30 on the display unit 14 (step S103).
- Next, the CPU 11 determines whether or not the virtual images 30 displayed on the display unit 14 include a virtual image 30 facing the back surface 30B toward the user (step S104).
- When it is determined in step S104 that there is no virtual image 30 facing the back surface 30B among the virtual images 30 displayed on the display unit 14 (step S104; NO), the CPU 11 advances the process to step S109.
- When it is determined in step S104 that the virtual images 30 displayed on the display unit 14 include a virtual image 30 facing the back surface 30B (step S104; YES), the CPU 11 determines whether or not a predetermined operation has been performed on that virtual image 30 (step S105).
- The predetermined operation is performed, for example, as follows. With the pointer 52 displayed at a position corresponding to the position of the user's fingertip, as shown in FIG. 7, the user performs a gesture of selecting the third region R3 (for example, a gesture of bending a finger) while the pointer 52 is aligned with the third region R3, and then performs a gesture of moving the hand (for example, moving it left or right).
- The predetermined operation may instead be the series of operations ending with the gesture of selecting the third region R3.
- When such an operation is detected, the image of the front surface 30A is displayed on the back surface 30B of the virtual image 30.
- the gesture for selecting the third region R3 is not limited to the gesture of bending the fingers, and may be, for example, a gesture of pinching the fingertips or a gesture of making a hand into a fist.
- The gesture of moving the hand is not limited to a gesture of moving the hand left or right; it may be, for example, a gesture of turning the hand over (supination).
- the gesture of moving the hand may be a flipping motion that imitates the motion of turning over a piece of paper. This turning operation will be described in detail below.
- For example, when the moving distance of the hand accompanying the flipping motion reaches a predetermined distance or more, the CPU 11 determines that the predetermined operation has been performed on the virtual image 30 (step S105; YES) and, as shown in FIG. 15, displays the image of the front surface 30A on the back surface 30B of the virtual image 30 (step S106; described later).
- When the moving distance of the hand accompanying the flipping motion has not reached the predetermined distance, the CPU 11 determines that the predetermined operation has not been performed on the virtual image 30 (step S105; NO).
- Until the moving distance of the hand accompanying the turning motion reaches the predetermined distance, the CPU 11 displays the virtual image 30, as shown in FIGS. 12 to 14, in a state in which a part of the back surface 30B corresponding to the moving distance, moving direction, and start position of the turning motion is turned over and the corresponding part of the front surface 30A appears.
- FIGS. 12 and 14 show a state in which the icon 34 at the upper left of the front surface 30A (see FIG. 15) appears as a result of the turning motion.
- If the turning motion is released before the moving distance of the hand reaches the predetermined distance, the CPU 11 displays the virtual image 30 in a state in which the entire back surface 30B, as it was before the turning motion was performed, appears.
- Alternatively, when the moving speed of the hand accompanying the flipping motion is equal to or higher than a predetermined speed, the CPU 11 may determine that the predetermined operation has been performed on the virtual image 30 (step S105; YES), and when the moving speed has not reached the predetermined speed, it may determine that the predetermined operation has not been performed on the virtual image 30 (step S105; NO). Further, when a moving image is being played back in the first region R1 of the front surface 30A of the virtual image 30, the CPU 11 may stop the playback of the moving image while the turning motion is being performed. This prevents the moving image played back in the first region R1 of the front surface 30A from advancing while the virtual image 30 is being turned over.
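The distance and speed thresholds and the partial-turn rendering described above can be condensed into a small state evaluation. In this sketch the threshold values are invented, and combining the distance and speed gates with a logical AND is just one possible reading of the variants described above:

```python
def flip_state(travel, speed, released,
               min_travel=0.25, min_speed=0.5):
    """Evaluate a page-turn (flipping) gesture on the back surface 30B.
    Thresholds are illustrative assumptions (metres, metres/second).
    Returns (turn_fraction, committed): the fraction in [0, 1] drives the
    partially-turned rendering; committed=True means the front-surface
    image should now be shown on the back surface (step S106)."""
    fraction = min(travel / min_travel, 1.0)
    if released and travel < min_travel:
        return 0.0, False            # snap back: show the whole back surface
    committed = travel >= min_travel and speed >= min_speed
    return fraction, committed

# mid-gesture: half the threshold distance -> half-turned, not committed
print(flip_state(0.125, 0.8, released=False))   # (0.5, False)
# completed gesture
print(flip_state(0.30, 0.8, released=False))    # (1.0, True)
```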
- When it is determined in step S105 that the predetermined operation has not been performed on the virtual image 30 facing the back surface 30B toward the user (step S105; NO), the CPU 11 advances the process to step S109.
- When it is determined in step S105 that the predetermined operation has been performed on the virtual image 30 facing the back surface 30B toward the user (step S105; YES), the CPU 11 displays the image of the front surface 30A on the back surface 30B of the virtual image 30 (step S106). Specifically, as shown in FIGS. 6 and 7, when it is determined that the above-described predetermined operation has been performed on the third region R3 of the virtual image 30, the CPU 11 displays the image of the front surface 30A on the back surface 30B of that virtual image 30.
- the CPU 11 determines whether there is another virtual image 30 facing the back surface 30B (step S107).
- When it is determined in step S107 that there is no other virtual image 30 facing the back surface 30B (step S107; NO), the CPU 11 advances the process to step S109.
- When it is determined in step S107 that there is another virtual image 30 facing the back surface 30B (step S107; YES), the CPU 11 displays the image of the front surface 30A on the back surface 30B of that other virtual image 30 as well (step S108). Specifically, as shown in FIG. 9, when there is a virtual image 30 facing the back surface 30B (the virtual image 30 at the upper left of the visible area 41) in addition to the virtual image 30 that was the target of the above-described predetermined operation, the CPU 11 displays the image of the front surface 30A on the back surface 30B of that virtual image 30 as well.
- the CPU 11 determines whether or not an instruction to end the display operation by the wearable terminal device 10 has been issued (step S109).
- When it is determined that an instruction to end the display operation by the wearable terminal device 10 has not been issued (step S109; NO), the CPU 11 returns the process to step S101 and repeats the subsequent processes.
- When it is determined that an instruction to end the display operation by the wearable terminal device 10 has been issued (step S109; YES), the CPU 11 ends the virtual image display processing.
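The overall control procedure of steps S101 to S109 can be summarized as a loop. The sketch below assumes a hypothetical `device` object; none of its method names come from the publication:

```python
def display_loop(device):
    """Sketch of the control procedure of FIG. 5 (steps S101-S109).
    `device` bundles the detection and display helpers used below."""
    while not device.end_requested():                       # S109
        area = device.detect_visible_area()                 # S101
        images = device.images_inside(area)                 # S102
        if not images:
            continue
        device.display(images)                              # S103
        back_facing = [im for im in images if im.back_toward_user]
        if not back_facing:                                 # S104
            continue
        operated = next((im for im in back_facing
                         if device.flip_operation_detected(im)), None)  # S105
        if operated is None:
            continue
        operated.show_front_on_back()                       # S106
        for im in back_facing:                              # S107, S108:
            if im is not operated:                          # flip the other
                im.show_front_on_back()                     # back-facing images
```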
- the second embodiment differs from the first embodiment in that an external information processing device 20 executes part of the processing that was executed by the CPU 11 of the wearable terminal device 10 in the first embodiment. Differences from the first embodiment will be described below, and descriptions of common points will be omitted.
- The display system 1 includes a wearable terminal device 10 and an information processing device 20 (server) communicatively connected to the wearable terminal device 10. At least part of the communication path between the wearable terminal device 10 and the information processing device 20 may be based on wireless communication.
- the hardware configuration of the wearable terminal device 10 can be the same as that of the first embodiment, but the processor for performing the same processing as the processing performed by the information processing device 20 may be omitted.
- the information processing device 20 includes a CPU 21, a RAM 22, a storage section 23, an operation display section 24, a communication section 25, etc. These sections are connected by a bus 26.
- The CPU 21 is a processor that performs various arithmetic processes and controls the operation of each part of the information processing device 20.
- The CPU 21 performs various control operations by reading and executing the program 231 stored in the storage unit 23.
- The RAM 22 provides a working memory space to the CPU 21 and stores temporary data.
- The storage unit 23 is a non-transitory recording medium readable by the CPU 21 as a computer.
- the storage unit 23 stores a program 231 executed by the CPU 21, various setting data, and the like.
- the program 231 is stored in the storage unit 23 in the form of computer-readable program code.
- As the storage unit 23, a non-volatile storage device such as an SSD with flash memory or an HDD (Hard Disk Drive) is used.
- the operation display unit 24 includes a display device such as a liquid crystal display and an input device such as a mouse and keyboard.
- the operation display unit 24 performs various displays such as the operation status of the display system 1 and processing results on the display device.
- the operation status of the display system 1 may include an image captured in real time by the camera 154 of the wearable terminal device 10 .
- the operation display unit 24 converts a user's input operation to the input device into an operation signal and outputs the operation signal to the CPU 21 .
- the communication unit 25 communicates with the wearable terminal device 10 to transmit and receive data.
- the communication unit 25 receives data including part or all of the detection result by the sensor unit 15 of the wearable terminal device 10, information related to user operations (gestures) detected by the wearable terminal device 10, and the like.
- the communication unit 25 may be capable of communicating with devices other than the wearable terminal device 10 .
- the CPU 21 of the information processing device 20 executes at least part of the processing that was executed by the CPU 11 of the wearable terminal device 10 in the first embodiment.
- the CPU 21 may perform three-dimensional mapping of the space 40 based on detection results from the depth sensor 153 .
- the CPU 21 may detect the user's visual recognition area 41 in the space 40 based on the detection results of each part of the sensor part 15 .
- the CPU 21 may generate the virtual image data 132 related to the virtual image 30 according to the user's operation of the wearable terminal device 10 .
- the CPU 21 may detect the position and orientation of the user's hand (and/or fingers) based on images captured by the depth sensor 153 and the camera 154 .
- the result of the above processing by the CPU 21 is transmitted to the wearable terminal device 10 via the communication section 25.
- The CPU 11 of the wearable terminal device 10 operates each unit (for example, the display unit 14) of the wearable terminal device 10 based on the received processing results. Further, the CPU 21 may transmit a control signal to the wearable terminal device 10 and perform display control of the display unit 14 of the wearable terminal device 10.
- With this configuration, the device configuration of the wearable terminal device 10 can be simplified, and the manufacturing cost can be reduced.
- Further, by using a higher-performance information processing device 20, the speed and accuracy of various processes related to MR can be increased. Therefore, it is possible to improve the accuracy of the three-dimensional mapping of the space 40, the display quality of the display unit 14, and the response speed of the display unit 14 to the user's actions.
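The division of labor in this second embodiment suggests a simple message flow: the wearable streams sensor data, and the server returns display instructions. A sketch with invented message shapes (no wire format is defined in the publication):

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class SensorFrame:
    """One update sent from the wearable's sensor unit 15 to the server.
    All field names are assumptions."""
    accel: Tuple[float, float, float]
    gyro: Tuple[float, float, float]
    depth_frame_id: str
    camera_frame_id: str

@dataclass
class DrawCommand:
    """Display instruction returned by the information processing device 20."""
    virtual_image_id: str
    position: Tuple[float, float, float]
    orientation: Tuple[float, float, float, float]   # quaternion
    show_front_on_back: bool

def server_step(frame: SensorFrame) -> List[DrawCommand]:
    """Placeholder for the offloaded processing (mapping, visible-area
    detection, gesture recognition); here it just returns a canned command."""
    return [DrawCommand("window-1", (2.0, 0.0, 1.5), (0.0, 0.0, 0.0, 1.0), False)]

# the wearable would render whatever comes back
for cmd in server_step(SensorFrame((0, 0, 9.8), (0, 0, 0), "d-001", "c-001")):
    print(cmd)
```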
- Note that the above embodiments are illustrative, and various modifications are possible. For example, in the above embodiments, the visor 141 having optical transparency is used to allow the user to visually recognize the real space, but the present invention is not limited to this.
- a visor 141 having a light shielding property may be used to allow the user to view the image of the space 40 photographed by the camera 154 . That is, the CPU 11 may cause the display unit 14 to display the image of the space 40 captured by the camera 154 and the virtual image 30 superimposed on the image of the space 40 .
- Such a configuration can also realize MR that fuses the virtual image 30 with the real space.
- the wearable terminal device 10 is not limited to having the annular body portion 10a illustrated in FIG. 1, and may have any structure as long as it has a display portion that can be visually recognized by the user when worn. For example, it may be configured to cover the entire head like a helmet. Moreover, like eyeglasses, it may have a frame to be hung on the ear and various devices may be built in the frame.
- the input operation may be accepted by a controller that the user holds in his hand or wears on his body.
- The virtual image 30 may be settable to either a first mode, in which the image of the front surface 30A can be displayed on the back surface 30B based on the predetermined operation on the third region R3, or a second mode, in which the predetermined operation is invalidated and the image of the front surface 30A cannot be displayed on the back surface 30B. When displaying virtual images 30 on the display unit 14, virtual images 30 set to the first mode and virtual images 30 set to the second mode may be displayed in mutually distinguishable modes.
- For example, the back surface 30B of a virtual image 30 set to the first mode is displayed in blue, while the back surface 30B of a virtual image 30 set to the second mode is displayed in red.
- In the above embodiments, the image of the front surface 30A is displayed on the back surface 30B of the virtual image 30 by performing the above-described predetermined operation on the third region R3 of the back surface 30B of the virtual image 30. Conversely, the image of the back surface 30B may be displayed on the front surface 30A of the virtual image 30 by performing the predetermined operation on the first region R1 of the front surface 30A.
- Further, in response to the predetermined operation on the third region R3, the display mode of the virtual image 30 may be changed to another predetermined display mode, such as enlarging (or reducing) the virtual image 30 to a predetermined size or rotating it by a predetermined angle.
- In the above embodiments, an example has been shown in which the virtual image 30 has, on the front surface 30A, the first region R1 as the main region and the strip-shaped second region R2 smaller than the first region R1, and has, on the back surface 30B, the third region R3 corresponding to the first region R1 as a region larger than the second region R2, and in which the image of the front surface 30A is displayed on the back surface 30B in response to the predetermined operation on the third region R3.
- However, the region larger than the second region R2 is not limited to the third region R3.
- For example, the wearable terminal device 10 may display the image of the front surface 30A on the back surface 30B in response to the predetermined operation on the third region R3 together with the fourth region R4 of the back surface 30B (that is, on the entire back surface 30B) as the region larger than the second region R2.
- In this case, the third region R3 and the fourth region R4 need not be distinguished, and the back surface 30B may consist of only one region.
- In the above embodiments, the third region R3, which is the target region of the predetermined operation for displaying the image of the front surface 30A on the back surface 30B, corresponds to the first region R1 and has the same size as the first region R1; however, the third region R3 is not limited to this, and may be smaller than the first region R1 as long as it is larger than the second region R2.
- Alternatively, the fourth region R4 of the back surface 30B, which corresponds to the second region R2 of the front surface 30A, may be made larger than the second region R2, and the image of the front surface 30A may be displayed on the back surface 30B in response to the predetermined operation on that fourth region R4.
- the present disclosure can be used for wearable terminal devices, programs, and display methods.
10 wearable terminal device
10a main body
11 CPU (processor)
12 RAM
13 storage unit
131 program
132 virtual image data
14 display unit
141 visor (display member)
142 laser scanner
15 sensor unit
151 acceleration sensor
152 angular velocity sensor
153 depth sensor
154 camera
155 eye tracker
16 communication unit
17 bus
20 information processing device
21 CPU
22 RAM
23 storage unit
231 program
24 operation display unit
25 communication unit
26 bus
30 virtual image
30A front surface
30B back surface
31 function bar
32 window shape change button
33 close button
34 icon
40 space
41 visible area
51 virtual line
52 pointer
R1 first region
R2 second region
R3 third region
R4 fourth region
U user
Claims (20)
- A wearable terminal device worn and used by a user, comprising:
at least one processor, wherein
the at least one processor causes a display unit to display a virtual image that is positioned in space and has a first surface and a second surface opposite to the first surface,
the virtual image has, on the first surface, a first region and a strip-shaped second region smaller than the first region, and has, on the second surface, a third region larger than the second region, and
the at least one processor changes the display mode of the virtual image to a predetermined display mode in response to a predetermined operation on the third region.
- The wearable terminal device according to claim 1, wherein the display unit includes a display member having optical transparency, and the at least one processor displays the virtual image on a display surface of the display member such that the virtual image is visually recognized in the space viewed through the display member.
- The wearable terminal device according to claim 1, further comprising a camera that photographs the space, wherein the at least one processor causes the display unit to display an image of the space photographed by the camera and the virtual image superimposed on the image of the space.
- The wearable terminal device according to any one of claims 1 to 3, wherein the second region is a region in which a function bar is displayed.
- The wearable terminal device according to any one of claims 1 to 4, wherein the second region is a region in which a title or an icon related to the display content of the first region is displayed.
- The wearable terminal device according to any one of claims 1 to 5, wherein, when a plurality of virtual images on which the predetermined operation is possible are displayed on the display unit, the at least one processor changes the display mode of the other virtual images to the predetermined display mode in response to the predetermined operation on one of the plurality of virtual images.
- The wearable terminal device according to any one of claims 1 to 6, wherein the predetermined operation includes a pointing operation in which the intersection of the virtual image and a virtual line displayed extending in the direction in which the user's hand extends is set as a designated position on the virtual image.
- The wearable terminal device according to any one of claims 1 to 7, wherein the predetermined operation includes a pointing operation in which a location where the position of the user's finger in the real space overlaps the virtual image is set as a designated position on the virtual image.
- The wearable terminal device according to claim 7 or 8, wherein the predetermined operation includes a selection operation of selecting the designated position set by the pointing operation.
- The wearable terminal device according to claim 9, wherein the at least one processor displays the image of the first surface on the second surface of the virtual image in response to the predetermined operation.
- The wearable terminal device according to claim 9, wherein the predetermined operation further includes a predetermined motion of the user's hand in a state in which the pointing operation and the selection operation have been performed, and the at least one processor displays the image of the first surface on the second surface of the virtual image in response to the predetermined operation.
- The wearable terminal device according to claim 11, wherein the predetermined motion of the user's hand is a flipping motion imitating the motion of turning over a piece of paper.
- The wearable terminal device according to claim 12, wherein the at least one processor displays the image of the first surface on the second surface of the virtual image when the moving speed of the user's hand accompanying the flipping motion is equal to or higher than a predetermined speed.
- The wearable terminal device according to claim 12 or 13, wherein the at least one processor displays the image of the first surface on the second surface of the virtual image when the moving distance of the user's hand accompanying the flipping motion is equal to or greater than a predetermined distance.
- The wearable terminal device according to claim 14, wherein, until the moving distance of the user's hand accompanying the flipping motion reaches the predetermined distance, the at least one processor displays the virtual image in a mode in which a part of the second surface corresponding to the moving distance is turned over and a part of the first surface corresponding to that part of the second surface appears.
- The wearable terminal device according to claim 15, wherein, when the flipping motion is released before the moving distance of the user's hand accompanying the flipping motion reaches the predetermined distance, the at least one processor displays the virtual image in a mode in which the entire second surface, as it was before the flipping motion was performed, appears.
- The wearable terminal device according to any one of claims 12 to 16, wherein, when a moving image is being played back in the first region of the virtual image, the at least one processor stops the playback of the moving image while the flipping motion is being performed.
- The wearable terminal device according to any one of claims 1 to 17, wherein the virtual image can be set to either a first mode in which its display mode can be changed to the predetermined display mode or a second mode in which its display mode cannot be changed to the predetermined display mode, and, when displaying virtual images on the display unit, the at least one processor displays virtual images set to the first mode and virtual images set to the second mode in mutually distinguishable modes.
- A program for causing a computer provided in a wearable terminal device worn and used by a user to execute a process of displaying, on a display unit, a virtual image that is positioned in space and has a first surface and a second surface opposite to the first surface, wherein the virtual image has, on the first surface, a first region and a strip-shaped second region smaller than the first region, and has, on the second surface, a third region larger than the second region, and the process of displaying the virtual image on the display unit includes a process of changing the display mode of the virtual image to a predetermined display mode in response to a predetermined operation on the third region.
- A display method in a wearable terminal device worn and used by a user, comprising a display control step of displaying, on a display unit, a virtual image that is positioned in space and has a first surface and a second surface opposite to the first surface, wherein the virtual image has, on the first surface, a first region and a strip-shaped second region smaller than the first region, and has, on the second surface, a third region larger than the second region, and in the display control step, the display mode of the virtual image is changed to a predetermined display mode in response to a predetermined operation on the third region.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/JP2021/012570 WO2022201433A1 (ja) | 2021-03-25 | 2021-03-25 | Wearable terminal device, program, and display method |
JP2023508317A JPWO2022201433A1 (ja) | 2021-03-25 | 2021-03-25 |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/JP2021/012570 WO2022201433A1 (ja) | 2021-03-25 | 2021-03-25 | Wearable terminal device, program, and display method |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2022201433A1 (ja) | 2022-09-29 |
Family
ID=83395387
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2021/012570 WO2022201433A1 (ja) | 2021-03-25 | 2021-03-25 | ウェアラブル端末装置、プログラムおよび表示方法 |
Country Status (2)
Country | Link |
---|---|
JP (1) | JPWO2022201433A1 (ja) |
WO (1) | WO2022201433A1 (ja) |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2016205974A (ja) * | 2015-04-21 | 2016-12-08 | 株式会社ミツトヨ | 測定システムおよびユーザインタフェース装置 |
US20190371279A1 (en) * | 2018-06-05 | 2019-12-05 | Magic Leap, Inc. | Matching content to a spatial 3d environment |
JP2020177482A (ja) * | 2019-04-19 | 2020-10-29 | キヤノンメディカルシステムズ株式会社 | 医用情報処理装置及び医用情報処理方法 |
Also Published As
Publication number | Publication date |
---|---|
JPWO2022201433A1 (ja) | 2022-09-29 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110045816B (zh) | Near-eye display and system | |
CN108780360B (zh) | Virtual reality navigation | |
US8388146B2 (en) | Anamorphic projection device | |
US11755122B2 (en) | Hand gesture-based emojis | |
US20140152558A1 (en) | Direct hologram manipulation using imu | |
KR20150093831A (ko) | Direct interaction system for mixed reality environments | |
US10878285B2 (en) | Methods and systems for shape based training for an object detection algorithm | |
KR20220120649A (ko) | Artificial reality system having a varifocal display of artificial reality content | |
KR20130108643A (ко) | Systems and methods for a gaze and gesture interface | |
US11954268B2 (en) | Augmented reality eyewear 3D painting | |
CN108459702A (zh) | Human-computer interaction method and system based on gesture recognition and visual feedback | |
CN113498531A (zh) | Head-mounted information processing device and head-mounted display system | |
WO2022201433A1 (ja) | Wearable terminal device, program, and display method | |
US20230007227A1 (en) | Augmented reality eyewear with x-ray effect | |
WO2022201430A1 (ja) | Wearable terminal device, program, and display method | |
US11423585B2 (en) | Velocity-based controls | |
WO2023276058A1 (ja) | Wearable terminal device that changes the display position of a partial image | |
WO2022208612A1 (ja) | Wearable terminal device, program, and display method | |
WO2022269888A1 (ja) | Wearable terminal device, program, display method, and virtual image distribution system | |
WO2022208600A1 (ja) | Wearable terminal device, program, and display method | |
WO2023275919A1 (ja) | Wearable terminal device, program, and display method | |
WO2022208595A1 (ja) | Wearable terminal device, program, and notification method | |
WO2024047720A1 (ja) | Virtual image sharing method and virtual image sharing system | |
JP2023168746A (ja) | Information processing device, information processing system, information processing method, and program | |
WO2020081677A9 (en) | Mobile platform as a physical interface for interaction |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 21933041 Country of ref document: EP Kind code of ref document: A1 |
|
ENP | Entry into the national phase |
Ref document number: 2023508317 Country of ref document: JP Kind code of ref document: A |
|
WWE | Wipo information: entry into national phase |
Ref document number: 18551857 Country of ref document: US |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 21933041 Country of ref document: EP Kind code of ref document: A1 |