WO2023242981A1 - Head-mounted display, head-mounted display system, and display method for head-mounted display


Info

Publication number
WO2023242981A1
Authority
WO
WIPO (PCT)
Prior art keywords
head
display
mounted display
image
user
Prior art date
Application number
PCT/JP2022/023903
Other languages
French (fr)
Japanese (ja)
Inventor
伸和 近藤
仁 秋山
康宣 橋本
眞弓 中出
宏司 中森
Original Assignee
マクセル株式会社 (Maxell, Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by マクセル株式会社 (Maxell, Ltd.)
Priority to PCT/JP2022/023903 priority Critical patent/WO2023242981A1/en
Publication of WO2023242981A1 publication Critical patent/WO2023242981A1/en

Classifications

    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63B APPARATUS FOR PHYSICAL TRAINING, GYMNASTICS, SWIMMING, CLIMBING, OR FENCING; BALL GAMES; TRAINING EQUIPMENT
    • A63B69/00 Training appliances or apparatus for special sports
    • G PHYSICS
    • G02 OPTICS
    • G02B OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00 Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/02 Viewing or reading apparatus
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G5/00 Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators

Definitions

  • the present invention relates to a head mounted display, a system using the head mounted display, and a display method in the head mounted display.
  • a head mounted display may be referred to as an HMD.
  • Augmented reality (hereinafter sometimes referred to as AR) is known as a technology that supplements (augments) scenes and images of the real world by overlaying them with digital information. It is used in a variety of fields such as engineering, civil engineering, and retail.
  • an HMD that supports AR is used.
  • This HMD is, for example, a device that is worn on the head and displays augmented reality images (hereinafter sometimes referred to as AR images) on a goggle-like display.
  • this device is equipped with a plurality of sensors such as a camera and a position measurement sensor, a CPU that performs image processing, a battery, and the like.
  • Patent Document 1 discloses a learning support system including a glasses-type device and a control system.
  • the glasses-type device includes a display section and an imaging section (camera) that captures a visual field image of the learner, and is worn by the learner.
  • the control system is connected to the glasses-type device via a network and controls the functions of the glasses-type device.
  • the calculation unit of the control system superimposes a model video, which is a video of the instructor's work movement that serves as a model for the learner's work movement, on the visual field image captured by the camera and displays it on the display unit.
  • the display contents of the model video are dynamically changed according to the characteristics of the work movements of the learners involved.
  • Patent Document 2 discloses a technique in which a model video is displayed on a display of a user terminal together with a user's training video, and the model video and the user's actual training video can be compared on the display.
  • Patent Document 1 and Patent Document 2 do not specifically disclose a technique for changing the display angle.
  • an object of the present invention is to provide a head-mounted display, a head-mounted display system, and a display method for a head-mounted display that allow the user to easily switch to a viewpoint angle from which the difference between the user's own movement and the model movement is easy to see when a self-image and a model image are displayed in a superimposed manner.
  • the head mounted display includes a processor and a display.
  • the processor superimposes and displays a self-image showing the user's movement and a model image showing the model movement on the display from a viewpoint corresponding to the rotation angle of the neck of the user wearing the head-mounted display on the head.
  • the head mounted display system includes a distribution device and a head mounted display.
  • the distribution device distributes a model video showing a model operation.
  • the head mounted display includes a processor and a display.
  • the processor superimposes and displays the user's self-image showing the user's actions and the model image distributed from the distribution device on the display from a viewpoint corresponding to the rotation angle of the neck of the user wearing the head-mounted display on his head.
  • the display method of the head-mounted display is a method using a processor.
  • a self-image showing a user's movement and a model image showing a model movement are superimposed and displayed on a display at a viewpoint corresponding to the rotation angle of the neck of a user wearing a head-mounted display on his head.
  • according to the present invention, there are provided a head-mounted display, a head-mounted display system, and a display method for a head-mounted display that allow the user to easily switch to a viewpoint angle from which the difference between the user's own movement and the model movement is easy to see when a self-image and a model image are displayed in a superimposed manner.
  • FIG. 2 is a block diagram illustrating an example of the hardware configuration and functions of an HMD.
  • FIG. 3 is a diagram for explaining an overview of superimposed display from a viewpoint above the user.
  • FIG. 2 is a diagram for explaining an overview of superimposed display from behind the user.
  • FIG. 6 is a diagram for explaining an example of rotation of an image and an observed image.
  • FIG. 6 is a diagram for explaining an example of rotation of an image and an observed image.
  • 12 is a flowchart for explaining an example of processing related to superimposed display.
  • FIG. 3 is a diagram for explaining an example of initial settings.
  • FIG. 6 is a diagram illustrating an example of setting a viewpoint direction based on a rotation angle of the neck.
  • FIG. 6 is a diagram illustrating an example of setting a viewpoint direction based on a rotation angle of the neck.
  • FIG. 7 is a diagram illustrating an example of a change in the tempo of a model video when the model video is made to follow the tempo of a practitioner.
  • 12 is a flowchart illustrating an example of a process for causing a model video to follow the tempo of a practitioner.
  • It is a diagram showing an example of the configuration of an HMD system.
  • It is a flowchart for explaining an example of processing of an HMD system.
  • It is a figure showing an example of a display of the HMD.
  • according to this embodiment, an HMD 1 is provided in which the viewpoints of the self-image and the model video can be easily changed and displayed. It is therefore expected that the user wearing the HMD 1 can efficiently learn the movements of the model video by moving so as to match the model video while changing the viewpoint as appropriate.
  • the HMD 1 includes a main control section 11, a storage section 21, a sensor section 31, a communication processing section 41, a video processing section 51, an audio processing section 61, and an operation input section 71. These components are connected via a data bus for exchanging data.
  • the HMD 1 also includes a battery (not shown) that serves as a power source.
  • the main control unit 11 functions as a main processor, and is configured using, for example, a CPU (Central Processing Unit).
  • the main control unit 11 may be a main body that executes predetermined processing, and may be configured using another semiconductor device such as a GPU (Graphics Processing Unit), for example.
  • the storage unit 21 is used to store data, and the storage unit 21 includes a program storage unit 22, a data storage unit 23, and a program function unit 24.
  • the program storage section 22 and the data storage section 23 can be configured using, for example, a ROM (Read Only Memory), a flash memory that stores initial setting information, and the like.
  • the program storage unit 22 stores programs used for processing of the HMD 1
  • the data storage unit 23 stores data used for processing of the HMD 1.
  • the program function section 24 is configured using a RAM (Random Access Memory), and the main control section 11 loads a program into the program function section 24 and executes it.
  • the sensor unit 31 can be configured using, for example, a GPS receiving sensor (GPS receiving unit 32) that can be used to acquire position information, a geomagnetic sensor 33, an acceleration sensor 34, a gyro sensor 35, a distance sensor 36 that can detect the distance to an object, a human sensor 37, and the like, and can be used to grasp data such as the condition of the wearer and the positions of surrounding objects.
  • the sensors listed here are merely examples; it is sufficient that the HMD 1 can execute a predetermined process, and the sensor configuration can be changed as appropriate.
  • the communication processing unit 41 is configured to include an interface used for communication, and includes, as an example, a LAN communication unit 42 and a telephone network communication unit 43.
  • the LAN communication unit 42 can be configured using, for example, a wireless LAN interface that is an interface for wireless LAN communication.
  • the telephone network communication unit 43 can be configured using, for example, a telephone network interface that is an interface with a telephone network.
  • the configuration of the communication processing unit 41 described here is only an example, and it is sufficient that the HMD 1 can execute a predetermined process, and can be changed as appropriate.
  • the communication processing unit 41 may be configured using, for example, a short-range communication interface that is an interface for short-range communication. Furthermore, if communication by telephone is not required, the telephone network communication section 43 may be omitted.
  • the video processing unit 51 is used for video processing, and includes an imaging unit 52 and a display unit 53 (display).
  • the imaging unit 52 is configured to acquire real images, and is configured by, for example, a camera.
  • the display unit 53 is configured as an appropriate display that outputs video.
  • the display unit 53 can display an image acquired by a camera, for example.
  • the video processing section 51 includes a virtual mirror image display section 56.
  • the virtual mirror image display section 56 is used to display the own image and the model image in a superimposed manner, and includes an HMD angle detection section 56a, a rotation processing section (3D rotation processing section 56b), and a superimposition display section 56c.
  • the HMD angle detection unit 56a is a program used to detect the angle of the HMD 1 based on data from the sensor unit 31 and the like.
  • the rotation processing unit (3D rotation processing unit 56b) is a program used to perform three-dimensional rotation processing on the self-image and the model image to be displayed in a superimposed manner.
  • the superimposed display section 56c is a program used to display the own image and the model image in a superimposed manner on the display. These programs are stored in the storage section 21 and executed by the main control section 11 as appropriate. Note that the processing of the virtual mirror image display section 56 will be explained in detail later.
  • the audio processing section 61 is used for inputting and outputting audio, and includes an audio input section 62 and an audio output section 63.
  • the audio input section 62 is used for audio input, and can be configured using a microphone.
  • the microphone can be provided as appropriate so that the wearer's voice can be input when the HMD 1 is worn.
  • the audio output unit 63 is used for audio output, and can be configured using a speaker, for example.
  • the speaker can be provided close to the ear of the wearer when the HMD 1 is worn.
  • the operation input unit 71 is configured for the user to input operation details, and can be configured as appropriate using buttons, a touch panel display, etc., as an example.
  • FIG. 2A is a diagram for explaining an overview of superimposed display from a perspective above the user.
  • FIG. 2B is a diagram for explaining an overview of superimposed display from behind the user as a viewpoint.
  • the HMD 1 superimposes a self-image and a model image on the display from a viewpoint corresponding to the orientation of the user's head. Therefore, when the user wearing the HMD 1 rotates his or her neck, the superimposed display switches to the viewpoint corresponding to the angle of rotation of the neck.
  • in this way, a mirror image display is performed. For example, if it is determined that the user's head is turned to the left, the self-image viewed from the left and the model image are displayed superimposed; if it is determined that the user's head is turned to the right, the self-image viewed from the right and the model image are displayed superimposed. Furthermore, if it is determined that the user's head is directed toward the front, the self-image viewed from the front and the model image are displayed in a superimposed manner.
  • the HMD 1 acquires the orientation of the user's head when the main control unit 11 executes the HMD angle detection unit 56a.
  • regarding the HMD angle detection unit 56a, an example of a method for obtaining the orientation of the user's head will now be specifically described.
  • the HMD angle detection unit 56a uses the configuration of the sensor unit 31 to determine the rotation angle of the user's head.
  • the HMD angle detection unit 56a can determine the rotation angle with respect to the front side, for example, by combining the detection results of the geomagnetic sensor 33 that detects direction and the acceleration sensor 34 that detects tilt. Further, the HMD angle detection unit 56a may use, for example, the angular velocity detected by the gyro sensor 35 to determine the rotation angle with respect to the front side.
  • the HMD angle detection unit 56a may determine the rotation angle of the head based on head tracking, for example, based on data from a tracking camera placed in the usage environment.
  • the HMD 1 acquires data from the tracking camera by wireless communication (for example, short-range wireless communication), for example.
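As a minimal sketch of the second detection method mentioned above (integrating the angular velocity from the gyro sensor 35 to obtain the rotation angle relative to the front), the following illustration is not from the patent; the function name, sample format, and sign convention (positive angles to the left of a calibrated front direction) are assumptions.

```python
def integrate_yaw(yaw_rates_dps, dt, initial_yaw=0.0):
    """Integrate gyro yaw-rate samples (deg/s) into a head yaw angle.

    yaw_rates_dps: angular-velocity samples taken every dt seconds.
    The result is wrapped to (-180, 180] so that, e.g., 270 deg of
    accumulated leftward rotation reads as 90 deg to the right.
    """
    yaw = initial_yaw
    for rate in yaw_rates_dps:
        yaw += rate * dt  # simple Euler integration of angular velocity
    return (yaw + 180.0) % 360.0 - 180.0
```

In practice the geomagnetic sensor 33 and acceleration sensor 34 would be combined with this integration to correct drift, as the patent suggests.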
  • FIGS. 3A and 3B are diagrams for explaining examples of rotation of an image and an observed image.
  • the user can check the superimposed display of the self-image and the model image from various viewpoints by appropriately rotating the neck and changing the direction of the head.
  • for example, a user who performs a certain movement confirms, with the head turned to the front, that the difference between the self-image seen from the front and the model image is small, and then turns his or her head while maintaining the posture.
  • in response, the rotation processing unit (3D rotation processing unit 56b) rotates the self-image and the model image so that they are displayed from the viewpoint corresponding to the new orientation of the head.
  • in this way, the user can easily check the difference from the model video from various viewpoints simply by rotating his or her head, and can therefore learn the movements efficiently.
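The rotation applied by the 3D rotation processing unit 56b can be illustrated as rotating joint coordinates about the vertical axis. This is a sketch under assumed conventions (y is the vertical axis, points are (x, y, z) tuples), not the patent's implementation.

```python
import math

def rotate_about_vertical(points, angle_deg):
    """Rotate 3-D points about the vertical y-axis by angle_deg.

    Both the self-image and the model image would be rotated by the
    neck rotation angle before being superimposed on the display.
    """
    a = math.radians(angle_deg)
    c, s = math.cos(a), math.sin(a)
    # standard rotation about y: x' = c*x + s*z, z' = -s*x + c*z
    return [(c * x + s * z, y, -s * x + c * z) for (x, y, z) in points]
```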
  • FIG. 4 is a flowchart for explaining an example of processing related to superimposed display.
  • a user who starts practicing first makes initial settings to specify a viewpoint angle to be observed and a virtual mirror surface to be displayed as a set (S401).
  • initial settings will be described with reference to FIG. 5.
  • the user selects the viewpoint of the object to be mirror image displayed from among the predetermined viewpoint directions.
  • the front side and both left and right viewpoints are turned ON and are subject to mirror image display, and the left and right rear viewpoints are turned OFF and are not subject to mirror image display.
  • the predetermined viewpoint directions are shown as front (0°), 90° left, 90° right, 135° left, and 135° right. The angle may be changed as appropriate.
  • a setting may be made in which the viewpoint from above is the subject of mirror image display.
  • the user makes settings to associate the rotation angle of the neck with the angle of the viewpoint for displaying the mirror image.
  • This setting will be explained with reference to FIG. 5B.
  • the rotation angle is 0° when the user is pointing his or her head straight ahead, and a mirror image of the front viewpoint is displayed when the rotation angle is between 60° to the left and 60° to the right.
  • settings are made to display a mirror image of the left viewpoint when the rotation angle is between 60° and 110° to the left, and to display a mirror image of the right viewpoint when the rotation angle is between 60° and 110° to the right.
  • a mirror image is displayed corresponding to the rotation angle of the user's neck.
  • settings may also be made to display the left rear viewpoint and the right rear viewpoint as mirror images; however, settings related to viewpoint directions that are turned OFF and are not subject to mirror image display may be omitted.
  • in addition, settings regarding the direction of the viewpoint from above may be made based on the rotation angle in the vertical direction.
  • in this way, the angle is set so that the model image is displayed superimposed on the self-image as seen from the front of the user's head (that is, from a third-person perspective viewing the user's face from the front).
  • for example, settings may be made such that the front viewpoint is displayed when the orientation of the HMD 1 is near the front, the left viewpoint is displayed when the orientation of the HMD 1 is between 45° and 135° to the left, and the right viewpoint is displayed when the orientation of the HMD 1 is between 45° and 135° to the right.
  • in setting these angles, the number of viewpoints that can be targets may be taken into consideration. That is, by narrowing the angular ranges and increasing the number of selectable viewpoints, the viewpoint from which the user's head is viewed from the front may be set more appropriately.
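The association between neck rotation angle and displayed viewpoint can be sketched as a lookup. The ranges below follow the example of FIG. 5B (front within 60°, left/right between 60° and 110°); the function name, the sign convention (positive angles to the left), and the `enabled` parameter modeling the ON/OFF settings are assumptions for illustration.

```python
def select_viewpoint(neck_angle_deg, enabled=("front", "left", "right")):
    """Map a neck rotation angle to the viewpoint to mirror-display.

    Returns None when the angle falls in no configured range or the
    matched viewpoint is turned OFF (not in `enabled`).
    """
    if abs(neck_angle_deg) <= 60:
        vp = "front"
    elif 60 < neck_angle_deg <= 110:
        vp = "left"
    elif -110 <= neck_angle_deg < -60:
        vp = "right"
    else:
        vp = None  # e.g. rear viewpoints turned OFF in the initial settings
    return vp if vp in enabled else None
```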
  • the data initialized in S401 is stored in the storage unit 21.
  • the main control unit 11 then performs processing by referring to the initialized data.
  • next, the HMD 1 (specifically, the main control unit 11) plays back the model video and first displays it superimposed with the self-image from the front viewpoint in a fixed coordinate system on the display unit 53 (S403).
  • the main control unit 11 detects the orientation of the user's head (in other words, the rotation angle of the neck) using the HMD angle detection unit 56a (S404). The main control unit 11 also determines whether the detected motion is the original motion, that is, whether or not a predetermined operation command has been performed (S405).
  • specifically, the main control unit 11 determines whether the user's motion detected in S404 relates to a motion registered in the storage unit 21 in advance. With this, when a gesture is registered as an operation command in advance, the operation command and the original motion can be distinguished and processed. In making this determination, the main control unit 11 can use, for example, data of the self-image. In S405, if it is determined that the user's motion is not the original motion (that is, it is an operation command), the process returns to S404 (S405-N). On the other hand, if the main control unit 11 determines that the user's motion is the original motion (that is, not an operation command), it performs the process of S406 (S405-Y).
  • the main control unit 11 executes the rotation processing unit 56b to perform rotation processing on the superimposed image, and displays a mirror image in the set direction (S406). That is, the main control unit 11 rotates the self-image and the model image to perform superimposed display of the viewpoint direction corresponding to the orientation of the user's head. At this time, the main control unit 11 performs the process by referring to the data initialized in S401. During the user's practice, the HMD 1 repeats the processes from S404 to S406 in real time to perform superimposed display in an appropriate viewpoint direction desired by the user.
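One iteration of the S404 to S406 loop can be sketched with hypothetical callables standing in for the units described above: `detect_angle` for the HMD angle detection unit 56a, `is_operation_command` for the S405 check against registered gestures, and `rotate_and_display` for the rotation and superimposed display of S406. All three names are illustrative, not from the patent.

```python
def superimpose_loop_step(detect_angle, is_operation_command, rotate_and_display):
    """Run one S404-S406 iteration; return True if display was updated."""
    angle = detect_angle()            # S404: detect the neck rotation angle
    if is_operation_command(angle):   # S405-N: an operation command, not the
        return False                  #         original motion, so skip S406
    rotate_and_display(angle)         # S406: mirror display in the set direction
    return True
```

During practice, the HMD 1 would repeat this step in real time.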
  • the model video can be stored in the storage unit 21 in advance, and the main control unit 11 can perform processing using the model video stored in the storage unit 21.
  • the model video may be stored in the storage unit 21 at the initial setting timing, for example.
  • the main control unit 11 may acquire the model video via the communication processing unit 41 and process it.
  • the self-image is generated based on data from a sensing device worn by the user or a camera that photographs the user.
  • the photographing camera is appropriately placed in the environment in which the user practices.
  • the HMD 1 can use the communication processing unit 41 to acquire data acquired by a sensing device or a photographing camera through wireless communication such as short-range wireless communication.
  • the type of sensing device includes a known sensor that can measure a user's motion.
  • the self-image and the model image can be three-dimensional skeleton images, and can be generated using an appropriate program.
  • This skeletal image can be generated, for example, by representing joints with nodes and constructing edges between the nodes.
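The joints-as-nodes, edges-between-nodes construction can be sketched as a small graph builder. The joint names and the dict/list representation are illustrative assumptions, not the patent's data format.

```python
def build_skeleton(joints, bones):
    """Build a skeleton graph: joints become nodes, bones become edges.

    joints: dict of joint name -> (x, y, z) position.
    bones:  list of (parent, child) joint-name pairs; pairs referring
            to unknown joints are dropped.
    """
    nodes = dict(joints)
    edges = [(p, c) for (p, c) in bones if p in nodes and c in nodes]
    return {"nodes": nodes, "edges": edges}
```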
  • the self-image and the model image may be real images. Further, superimposed display may be performed in which one video is a real video and the other video is a skeleton video.
  • the HMD 1 may perform a process of estimating the entire skeletal image by performing appropriate estimation processing based on data from a sensing device or a photographing camera. For example, in a case where the user's movements are captured only in a limited manner due to the arrangement of sensing devices and photographing cameras, the HMD 1 may generate the entire skeletal image through estimation processing based on the acquired data and display the generated skeletal image.
  • horizontal reversal processing may be performed on the own image and the model image that are displayed in a superimposed manner.
  • as described above, according to this embodiment, an HMD 1 is provided that superimposes and displays, on a display, a self-image showing the user's movement and a model image showing the model movement from a viewpoint corresponding to the rotation angle of the user's neck.
  • the HMD 1 of this embodiment may be configured to be able to switch between a predetermined viewpoint display mode and a tracking display mode.
  • the predetermined viewpoint display mode is a display mode in which a self-image and a model image from a predetermined viewpoint are displayed in a superimposed manner according to the rotation angle of the user's neck, as described above. Therefore, for example, if the front and the left and right viewpoints are set to ON in the initial settings in S401, in the predetermined viewpoint display mode a mirror image display of any one of the front viewpoint, the left viewpoint, and the right viewpoint is performed according to the rotation angle of the user's neck.
  • the tracking display mode is a display mode that performs superimposed display in which the viewpoints of the self-image and the model image are continuously changed in accordance with the rotation of the user's neck. Therefore, in the tracking display mode, regardless of the contents of the initial settings in S401, a mirror image display that follows the rotation angle of the user's neck is performed, that is, a superimposed display from a viewpoint viewing the user's head from the front. For example, when the user rotates his or her head from the front toward the left, a mirror image is displayed from a viewpoint that continuously follows the angle of rotation of the user's neck, regardless of the initial settings.
  • the HMD 1 may be configured to be switchable between a predetermined viewpoint display mode and a tracking display mode, for example, by a user operation via the operation input unit 71. Furthermore, the HMD 1 may be configured to switch between the predetermined viewpoint display mode and the tracking display mode by recognizing a predetermined gesture for switching the display mode.
  • information regarding gestures recognized as operation commands is stored in advance in the storage unit 21, as an example.
  • the HMD 1 uses information about the self-image obtained from the sensing device or the camera and the information stored in the storage unit 21 to execute a gesture that is an operation command. recognize.
  • the HMD 1 of this embodiment may be configured to be able to switch between a playback display mode and a fixed display mode.
  • the reproduction display mode is a display mode in which a model video is reproduced and displayed.
  • the fixed display mode is a mode in which the model video is displayed in a fixed manner. While practicing in the playback display mode, the user can switch to the fixed display mode at an appropriate time (for example, when performing an action that he/she feels unfamiliar with), fix the model image, and then rotate his or her head to take the appropriate viewpoint. By displaying the image in a superimposed manner, it is possible to easily check the difference from the model image at this timing. Note that switching between the playback display mode and the fixed display mode may be performed by using the operation input section 71 or by using gestures, as in the above description.
  • the HMD 1 of this embodiment may perform superimposed display regarding a plurality of viewpoints simultaneously.
  • for example, a symmetrical display may be performed: when the user rotates his or her head to the left, in addition to the display corresponding to that rotation, a display may also be performed as if the head had been rotated by the same angle to the right. That is, when the user rotates his or her head in one of the left or right directions, the HMD 1 superimposes and displays on the display the self-image and the model image from the viewpoint corresponding to that direction, and may further superimpose and display the self-image and the model image from the viewpoint in the opposite direction, assuming the head had been rotated by the same amount in the other direction. Note that each superimposed display is placed at a different position on the display.
  • for example, the HMD 1 may superimpose and display a self-image and a model video from a front viewpoint on the display, and may further superimpose and display a self-image and a model video from a rear viewpoint on the display.
  • each superimposed display is placed at a different position on the display.
  • the HMD 1 may be configured to be able to switch between the respective display modes using the same operation input unit or gesture as described above.
  • the HMD 1 of the present embodiment may be configured so that the areas in which superimposed display is performed on the display can be set; the areas may be set, for example, by a user operation using the operation input unit 71 at the time of initial setting.
  • an area for displaying a superimposed viewpoint on the left side of the front may be set on the left side of the display, and an area for superimposing displaying a viewpoint on the right side of the front may be set on the right side of the display.
  • respective areas may be set for the left and right displays.
  • areas for superimposed display of the front viewpoint and the upper viewpoint may be similarly set on the display, and these areas may be set, for example, at the same positions as the areas for superimposed display of the left and right viewpoints.
  • the HMD 1 may be configured to be able to set a portion of the model video to be played back at a slower tempo.
  • the portion to be played back at a slower tempo may be set as appropriate, but as an example, it may be set using the operation input unit 71 or gestures, similar to the above explanation.
  • the HMD 1 may be configured to automatically set the portion where the tempo is slowed down using an evaluation model that evaluates the degree of learning of the model image.
  • the evaluation model can be generated by machine learning using the content practiced by the user and the content of the model, for example, as a model for evaluating whether the content of the model is reproduced.
  • the portion to be played back at a slower tempo may be set, for example, at the initial setting timing in S401, or may be set during practice.
  • the model video may follow the tempo of the practitioner (user). That is, the model image may be played back in accordance with the tempo of the user's movements.
  • the tempo of the model video in this part slows down to follow the user's tempo.
  • the portions indicated by B are easy portions or previously learned portions, and as the user practices at a faster tempo, the tempo of the model video follows suit and becomes faster.
  • the part indicated by C is a complicated part, and as the user practices in slow motion, the tempo of the model video follows and slows down.
  • the user can practice efficiently by rotating his or her head and checking the superimposed display from a viewpoint from which the difference from the model video is easy to see. In this way, by making the model video follow the user's tempo, efficient practice is possible. Next, this process will be explained in detail with reference to FIG. 6B.
  • the HMD 1 determines whether the practitioner has started a practice movement (S601).
  • the HMD 1 determines the start of the practitioner's practice movement by, for example, comparing the practitioner's movement with the movement in the model image (that is, by determining whether the movement matches the movement in the model image).
  • when the HMD 1 detects the start of a practice motion, it detects the start position of the practitioner (that is, the position corresponding to the start of practice in the model video) and the tempo of the practitioner.
  • the HMD 1 activates the model video and outputs the model video from the position corresponding to the start of practice (S602).
  • the HMD 1 monitors the tempo of the practitioner who is practicing (S603). Then, the HMD 1 determines a change in the tempo of the practitioner (S604), and when the tempo of the practitioner changes, changes the tempo of the model video to match the tempo of the practitioner (S605). By repeating the processes of S603 to S605, the HMD 1 causes the tempo of the model video to appropriately follow the tempo of the practitioner.
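The monitoring-and-update loop of S603 to S605 can be sketched as follows. This is a minimal illustrative sketch, not the embodiment's implementation: the function name, the threshold, and the assumption that the practitioner's tempo arrives as a stream of numeric observations (e.g., beats per second) are all hypothetical.

```python
def follow_tempo(practitioner_tempos, initial_tempo, threshold=0.05):
    """Make the model-video playback tempo follow the practitioner.

    practitioner_tempos: tempo observations monitored while the
    practitioner practices (S603), e.g. in beats per second.
    Returns the playback tempo used for the model video at each step.
    """
    playback_tempo = initial_tempo
    history = []
    for observed in practitioner_tempos:
        # S604: determine whether the practitioner's tempo has changed
        # (small fluctuations below the threshold are ignored).
        if abs(observed - playback_tempo) > threshold * playback_tempo:
            # S605: change the model-video tempo to match the practitioner.
            playback_tempo = observed
        history.append(playback_tempo)
    return history
```

For example, `follow_tempo([1.0, 1.0, 0.7, 0.7, 1.2], 1.0)` slows the playback tempo to 0.7 when the practitioner slows down and raises it again to 1.2 when the practitioner speeds up, matching the following behavior described for the portions B and C.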
  • the HMD 1 can be configured to be switchable by using the operation input unit 71 or by making a gesture, as described above.
  • the HMD 1 of the present embodiment may, when the difference between the superimposed self-image and model image is equal to or greater than a threshold, perform processing to notify the user that a difference has arisen between the self-image and the model image at the currently displayed viewpoint. By recognizing this notification, the user can easily confirm that the difference from the model video is large.
  • the manner of notification is not particularly limited, and as an example, notification may be performed by audio output using an audio output unit. Further, a notification to that effect may be displayed on the display.
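As an illustration of such a notification process, the sketch below assumes that the self-image and the model image have each been reduced to a list of 2D keypoints and that their difference is measured as a mean keypoint distance; the function names, the keypoint representation, and the distance measure are assumptions, not part of the embodiment.

```python
import math

def pose_difference(self_pose, model_pose):
    """Mean Euclidean distance between corresponding keypoints of the
    user's pose and the model pose (both lists of (x, y) tuples)."""
    dists = [math.dist(p, q) for p, q in zip(self_pose, model_pose)]
    return sum(dists) / len(dists)

def check_and_notify(self_pose, model_pose, threshold, notify):
    """If the difference is at or above the threshold, invoke the given
    notify callback (e.g. audio output via the audio output unit, or an
    on-screen message) and report whether a notification was issued."""
    diff = pose_difference(self_pose, model_pose)
    if diff >= threshold:
        notify(f"Difference from the model exceeds the threshold: {diff:.2f}")
        return True
    return False
```

The `notify` callback abstracts over the notification channels mentioned above (audio output or a message on the display).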
  • a second embodiment will be described with reference to FIGS. 7-8. Functions similar to those in other embodiments are denoted by the same reference numerals, and descriptions similar to those in other embodiments may be omitted.
  • a head mounted display system (sometimes referred to as an HMD system 703) including a distribution device 701 and an HMD 702 will be described.
  • the distribution device 701 is a device that distributes a model video. As shown in FIG. 7, in this embodiment, the distribution device 701 is a server, and the server acquires a model video shot at a remote location via a network (NW in FIG. 7). Then, the server performs a service of distributing the model video to the user side, and the HMD 702 displays the model video obtained from the server.
  • the HMD 702 may be configured to have the same functions as in the first embodiment, and can perform the same superimposed display as in the first embodiment. Further, the HMD 702 can perform superimposed display of self-images and model images from various viewpoints in the same manner as in the first embodiment.
  • a smartphone acquires a model video distributed from a distribution device 701, and the HMD 702 acquires the model video via the smartphone. Furthermore, in the present embodiment, the HMD 702 can generate a self-image of the user by using the camera of the smartphone as a camera that photographs the user's movements.
  • the HMD system 703 can be changed as appropriate.
  • although an example in which the server is the distribution device 701 has been described, an HMD system 703 in which the distribution device 701 is a smartphone may also be provided. That is, the smartphone may acquire the model video via appropriate communication and distribute the model video.
  • an HMD system 703 may be provided in which the smartphone stores a model video and the HMD 702 acquires the model video stored in the smartphone.
  • an HMD system 703 may be provided in which a model video is directly delivered to the HMD 702 from a delivery device 701 (for example, a server) without using a smartphone.
  • the distribution device 701 may acquire the own video and the model video, generate data to be displayed in a superimposed manner, and distribute the generated data.
  • the HMD 702 may perform superimposed display based on this data.
  • in response to the start of practice, the distribution device 701 (in this example, the server) converts the self-video and the model video into 3D and generates the data to be displayed in a superimposed manner (superimposition data of the self-video and the model video) (S801). Then, the generated data is continuously transferred to the HMD 702 (S802). On the HMD 702 side, the same processing as described above (that is, the content explained using FIG. 4) is performed (S803 to S806).
  • any other appropriate device may be used as the distribution device 701.
  • the programs used in each process example may be independent programs, or a plurality of programs may constitute one application program. Furthermore, the order in which each process is performed may be changed.
  • Some or all of the functions of the present invention described above may be realized by hardware, for example, by designing an integrated circuit.
  • the functions may be realized in software by having a microprocessor unit, CPU, etc. interpret and execute operating programs for realizing the respective functions.
  • the scope of software implementation is not limited, and hardware and software may be used together.
  • a part or all of each function may be realized by a server. Note that the server only needs to be able to execute the functions in cooperation with other components via communication; it may be, for example, a local server, a cloud server, an edge server, or a network service, and its form does not matter.
  • information such as programs, tables, and files that realize each function may be stored in a memory, in a recording device such as a hard disk or an SSD (Solid State Drive), or on a recording medium such as an IC card, SD card, or DVD. It may also be stored in a device on a communication network.
  • Appropriate information indicating the viewpoint being superimposed may be displayed on the display. For example, when superimposing from a viewpoint of 90 degrees to the left, text information such as "90 degrees to the left" may be displayed on the display together with the superimposed display.
  • the HMD (1, 702) may be configured to be able to switch between a display mode in which the self-image and the model image from the viewpoint with the largest difference are superimposed and displayed regardless of the rotation of the user's neck, and a display mode in which the self-image and the model image are superimposed and displayed from a viewpoint corresponding to the rotation of the user's neck.
  • the HMD (1, 702) may be configured to be able to switch between these display modes using the operation input unit 71 or gestures, as described above.
  • the HMD (1, 702) may be of a glasses type or a goggle type.
  • the HMD (1, 702) may be configured such that a smartphone can be appropriately mounted, the display screen of the smartphone is used as the display (display unit 53), and the out-camera of the smartphone (that is, the camera installed on the opposite side from the display screen) is used as the camera (imaging unit 52).
  • the HMD (1, 702) may have a configuration that can display on the display an image in which the model video and the self-video are linked (synchronized), as shown in the drawings.
  • the HMD (1, 702) may be configured to be able to switch between this display and the superimposed display; for example, as explained above, the switching may be performed using the operation input unit 71 or gestures.
  • Appropriate operations such as switching the display mode may be performed by voice input using the voice input unit 62 and appropriate voice recognition.
  • the voice data used for voice recognition is stored in advance in the storage unit 21, and the main control unit 11 performs voice recognition by referring to this stored voice data.
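As a simple illustration of this lookup, the sketch below matches a recognized utterance against a command vocabulary stored in advance; the vocabulary, the action names, and the function name are hypothetical, not part of the embodiment.

```python
# Hypothetical command vocabulary, standing in for the voice data
# stored in advance in the storage unit 21.
STORED_COMMANDS = {
    "switch display": "toggle_display_mode",
    "slow down": "decrease_model_tempo",
}

def recognize_command(recognized_text):
    """Return the action name for a recognized utterance, or None if
    the utterance does not match any stored command."""
    return STORED_COMMANDS.get(recognized_text.strip().lower())
```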
  • 1: HMD (head mounted display)
  • 11: main control unit (processor)
  • 53: display unit (display)
  • 56: virtual mirror image display unit
  • 56a: HMD angle detection unit
  • 56b: 3D rotation processing unit
  • 56c: superimposition display unit

Abstract

This head-mounted display comprises a processor and a display. The processor displays actual video, which shows an action of a user, and an example video, which shows an example of an action, in a superimposed manner on the display, from a viewpoint corresponding to the angle of rotation of the neck of the user who is wearing the head-mounted display on the head.

Description

Head-mounted display, head-mounted display system, and display method of head-mounted display
The present invention relates to a head mounted display (HMD), a system using the head mounted display, and a display method in the head mounted display. Note that, hereinafter, the head mounted display may be referred to as an HMD.
Augmented reality (hereinafter sometimes referred to as AR) is known as a technology that supplements (augments) scenes and images of the real world by overlaying digital information on them, and is used in a variety of fields such as games, camera apps, architecture and civil engineering, and retail. To experience AR, as an example, an HMD that supports AR is used. This HMD is, for example, a device that is worn on the head and displays augmented reality images (hereinafter sometimes referred to as AR images) on a goggle-like display. As an example, this device is equipped with a plurality of sensors such as a camera and a position measurement sensor, a CPU that performs image processing, a battery, and the like.
This type of technology is sometimes used to support users' skill acquisition and learning. Patent Document 1 discloses a learning support system including a glasses-type device and a control system. The glasses-type device includes a display unit and an imaging unit (camera) that captures the learner's visual-field video, and is worn by the learner. The control system is connected to the glasses-type device via a network and controls the functions of the glasses-type device. The calculation unit of the control system superimposes a model video, which is a video of an instructor's work movement serving as a model for the learner's work movement, on the visual-field video captured by the camera, displays it on the display unit, and dynamically changes the displayed content of the model video according to the characteristics of the learner's work movement contained in the visual-field video.
This type of technology is also sometimes used for training. Patent Document 2 discloses a technique in which a model video is displayed on the display of a user terminal together with the user's training video, so that the model video and the user's actual training video can be compared on the display.
Patent Document 1: Japanese Patent Application Publication No. 2020-144233
Patent Document 2: Japanese Patent Application Publication No. 2020-195573
However, particularly for motor skills involving the whole body, when practicing while superimposing the model and the self-video, cases can arise in which the difference between the model and one's own movement (the correction points) is difficult to observe in real time from the front or from a viewpoint or angle set in advance to be observable. Here, it is considered effective to change the display angle appropriately, but Patent Document 1 and Patent Document 2 are not considered to specifically disclose a technique for changing the display angle.
Therefore, an object of the present invention is to provide a head-mounted display, a head-mounted display system, and a display method for a head-mounted display that allow the user, by a simple method, to switch to an angle at which the difference between the model and the user's own movement is easy to see when the self-video and the model video are displayed in a superimposed manner.
According to a first aspect of the present invention, the following head mounted display is provided. That is, the head mounted display includes a processor and a display. The processor superimposes and displays, on the display, a self-video showing the user's movement and a model video showing a model movement, from a viewpoint corresponding to the rotation angle of the neck of the user wearing the head-mounted display on the head.
According to a second aspect of the present invention, the following head mounted display system is provided. That is, the head mounted display system includes a distribution device and a head mounted display. The distribution device distributes a model video showing a model movement. The head mounted display includes a processor and a display. The processor superimposes and displays, on the display, a self-video showing the user's movement and the model video distributed from the distribution device, from a viewpoint corresponding to the rotation angle of the neck of the user wearing the head-mounted display on the head.
According to a third aspect of the present invention, the following display method for a head mounted display is provided. That is, the display method for a head mounted display is a method performed using a processor. In this method, a self-video showing the user's movement and a model video showing a model movement are superimposed and displayed on a display from a viewpoint corresponding to the rotation angle of the neck of the user wearing the head-mounted display on the head.
According to the present invention, there are provided a head-mounted display, a head-mounted display system, and a display method for a head-mounted display that allow the user, by a simple method, to switch to an angle at which the difference between the model and the user's own movement is easy to see when the self-video and the model video are displayed in a superimposed manner.
Brief Description of the Drawings
A block diagram showing an example of the hardware configuration and functions of the HMD.
A diagram for explaining an overview of the superimposed display from a viewpoint above the user.
A diagram for explaining an overview of the superimposed display from a viewpoint behind the user.
Diagrams for explaining an example of the rotation of the videos and the observed video.
A flowchart for explaining an example of the processing related to the superimposed display.
A diagram for explaining an example of the initial settings.
Diagrams showing an example of setting the viewpoint direction based on the rotation angle of the neck.
A diagram showing an example of the change in the tempo of the model video when the model video is made to follow the practitioner's tempo.
A flowchart for explaining an example of the process of making the model video follow the practitioner's tempo.
A diagram showing an example of the configuration of the HMD system.
A flowchart for explaining an example of the processing of the HMD system.
A diagram showing an example of the display of the HMD.
Examples of embodiments of the present invention will be described below with reference to the drawings. The same components are denoted by the same reference numerals throughout the figures, and redundant explanation may be omitted. According to the first embodiment, there is provided an HMD 1 that, when superimposing a self-video showing the user's movement and a model video showing a model movement as augmented reality, can easily change the viewpoint from which the self-video and the model video are displayed. Therefore, by appropriately changing the viewpoint and moving so that the self-video matches the model video, the user wearing the HMD 1 can be expected to learn the movement of the model video efficiently.
<First embodiment>
The first embodiment will be described with reference to FIGS. 1 to 6. First, an example of the hardware configuration of the HMD will be described with reference to FIG. 1. As shown in FIG. 1, the HMD 1 includes a main control unit 11, a storage unit 21, a sensor unit 31, a communication processing unit 41, a video processing unit 51, an audio processing unit 61, and an operation input unit 71. These components are connected via a data bus for exchanging data. The HMD 1 also includes a battery (not shown) that serves as a power source.
The main control unit 11 (processor) functions as the main processor and is configured using, as an example, a CPU (Central Processing Unit). Note that the main control unit 11 only needs to be an entity that executes predetermined processing, and may be configured using another semiconductor device such as a GPU (Graphics Processing Unit), for example.
The storage unit 21 is used to store data, and includes a program storage unit 22, a data storage unit 23, and a program function unit 24. The program storage unit 22 and the data storage unit 23 can be configured using, for example, a ROM (Read-Only Memory) or a flash memory that stores initial setting information and the like. The program storage unit 22 stores programs used for the processing of the HMD 1, and the data storage unit 23 stores data used for the processing of the HMD 1. The program function unit 24 is configured using a RAM (Random Access Memory), and the main control unit 11 loads a program into the program function unit 24 and executes it.
The sensor unit 31 can be configured using, as an example, a GPS reception sensor (GPS reception unit 32) that can be used to acquire position information, a geomagnetic sensor 33, an acceleration sensor 34, a gyro sensor 35, a distance measurement sensor 36 that can detect the distance to an object, a human presence sensor 37, and the like, and can be used to grasp data such as the wearer's state and the positions of surrounding objects. However, the sensors listed here are merely examples; it suffices that the predetermined processing can be executed, and the listed sensors may be omitted as appropriate, or other types of sensors may be included.
The communication processing unit 41 is configured to include interfaces used for communication and has, as an example, a LAN communication unit 42 and a telephone network communication unit 43. The LAN communication unit 42 can be configured using, for example, a wireless LAN interface for wireless LAN communication. The telephone network communication unit 43 can be configured using, for example, a telephone network interface, which is an interface with a telephone network. However, the configuration of the communication processing unit 41 described here is an example and can be changed as appropriate as long as the HMD 1 can execute the predetermined processing. The communication processing unit 41 may be configured using, for example, a short-range communication interface. Further, when communication by telephone is unnecessary, the telephone network communication unit 43 may be omitted.
The video processing unit 51 is used for video processing and includes an imaging unit 52 and a display unit 53 (display). The imaging unit 52 is configured to acquire real video and is configured by, for example, a camera. The display unit 53 is configured as an appropriate display that outputs video. The display unit 53 can, as an example, display images acquired by the camera.
The video processing unit 51 also includes a virtual mirror image display unit 56. The virtual mirror image display unit 56 is used for the superimposed display of the self-video and the model video, and includes an HMD angle detection unit 56a, a rotation processing unit, and a superimposition display unit 56c. The HMD angle detection unit 56a is a program used to detect the angle of the HMD 1 based on data from the sensor unit 31 and the like. The rotation processing unit (3D rotation processing unit 56b) is a program used to perform three-dimensional rotation processing on the self-video and the model video to be displayed in a superimposed manner. The superimposition display unit 56c is a program used to superimpose the self-video and the model video on the display. These programs are stored in the storage unit 21 and executed by the main control unit 11 as appropriate. The processing of the virtual mirror image display unit 56 will be described in detail later.
The audio processing unit 61 is used for audio input and output, and includes an audio input unit 62 and an audio output unit 63. The audio input unit 62 is used for audio input and can be configured using a microphone. The microphone can be provided as appropriate so that the wearer's voice is input when the HMD 1 is worn. The audio output unit 63 is used for audio output and can be configured using, for example, a speaker. The speaker can, as an example, be provided so as to be close to the wearer's ear when the HMD 1 is worn.
The operation input unit 71 is a component through which the user inputs operations, and can be configured as appropriate using, for example, buttons or a touch panel display.
Next, the video seen by the user wearing the HMD will be described with reference to FIG. 2. FIG. 2A is a diagram for explaining an overview of the superimposed display from a viewpoint above the user. FIG. 2B is a diagram for explaining an overview of the superimposed display from a viewpoint behind the user.
As shown in FIGS. 2A and 2B, the HMD 1 superimposes, on the display, the self-video and the model video from a viewpoint corresponding to the orientation of the user's head. Therefore, when the user wearing the HMD 1 rotates his or her neck, the superimposed display is performed from a viewpoint corresponding to the rotation angle of the neck.
Here, when the user rotates the neck in the left-right direction and turns the head to the left or right, and also when the user turns the head toward the front, a mirror-image display is performed. For example, when it is determined that the user's head is turned to the left, the self-video and the model video are superimposed and displayed from a viewpoint from the left; when it is determined that the user's head is turned to the right, they are superimposed and displayed from a viewpoint from the right. Further, when it is determined that the user's head is directed toward the front, the self-video and the model video are superimposed and displayed from a viewpoint from the front.
Further, when the user rotates the neck in the vertical direction and faces upward, the self-video and the model video are superimposed and displayed from a viewpoint from above (in this example, directly above).
The HMD 1 acquires the orientation of the user's head through the execution of the HMD angle detection unit 56a by the main control unit 11. Here, examples of methods for acquiring the orientation of the user's head will be described specifically.
As an example, the HMD angle detection unit 56a uses the configuration of the sensor unit 31 to obtain the rotation angle of the user's head. The HMD angle detection unit 56a can obtain the rotation angle relative to the front side by, for example, combining the detection results of the geomagnetic sensor 33, which detects direction, and the acceleration sensor 34, which detects tilt. Alternatively, the HMD angle detection unit 56a may obtain the rotation angle relative to the front side using, for example, the angular velocity detected by the gyro sensor 35.
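The two estimation approaches above can be sketched as follows. This is a minimal sketch in which the function names, the units (degrees), and the convention that the front direction has been calibrated beforehand are assumptions for illustration.

```python
def yaw_from_heading(current_heading_deg, front_heading_deg):
    """Head rotation angle relative to the calibrated front direction,
    from a compass heading (e.g. the geomagnetic sensor 33, with tilt
    compensated using the acceleration sensor 34). The result is
    wrapped to (-180, 180]: positive = right, negative = left."""
    diff = (current_heading_deg - front_heading_deg) % 360.0
    return diff - 360.0 if diff > 180.0 else diff

def yaw_from_gyro(angular_velocities_dps, dt):
    """Alternative estimate: integrate the yaw-axis angular velocity of
    the gyro sensor 35 (degrees per second) over steps of dt seconds."""
    return sum(w * dt for w in angular_velocities_dps)
```

In practice the gyro estimate drifts over time, which is one reason a heading-based estimate (or a fusion of both) might be preferred.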
The HMD angle detection unit 56a may also obtain the rotation angle of the head based on head tracking, for example based on data from a tracking camera placed in the usage environment. In this case, the HMD 1 acquires the data from the tracking camera by, for example, wireless communication (e.g., short-range wireless communication).
Note that the methods of obtaining the rotation angle described here are examples, and other known methods (for example, other known head tracking techniques) may be used.
Next, the difference between the superimposed self-video and model video will be described with reference to FIG. 3. FIGS. 3A and 3B are diagrams for explaining an example of the rotation of the videos and the observed video.
As shown in FIGS. 3A and 3B, the user can check the superimposed display of the self-video and the model video from various viewpoints by appropriately rotating the neck and changing the orientation of the head. In the example of FIG. 3, a user performing a certain movement can confirm, by facing the front, that the difference between the self-video and the model video seen from the front is small, and can then confirm, by rotating the neck to face the left while maintaining the posture, that the difference between the self-video and the model video seen from the left is large. That is, in this example, as shown in FIG. 3A, the HMD 1 executes the rotation processing unit (3D rotation processing unit 56b) to set a vertical rotation axis, rotates the self-video and the model video about this rotation axis, and superimposes the self-video and the model video as seen from the left. As a result, as indicated by the dotted line in FIG. 3B, although the large difference could not be confirmed from the front viewpoint, the side observation makes it easy to confirm that the body is leaning forward compared to the model video.
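The viewpoint change described above amounts to rotating the 3D data of both videos about a vertical axis and then projecting it onto the viewing plane. The sketch below illustrates this for 3D keypoints; the axis convention (y up, z depth) and the function names are assumptions for illustration.

```python
import math

def rotate_about_vertical(points, angle_deg):
    """Rotate 3D keypoints (x, y, z) about the vertical (y) axis, as
    the 3D rotation processing unit 56b might do to obtain a side
    viewpoint. Positive angles follow the right-hand rule about +y."""
    a = math.radians(angle_deg)
    c, s = math.cos(a), math.sin(a)
    return [(c * x + s * z, y, -s * x + c * z) for x, y, z in points]

def project_to_view(points):
    """Orthographic projection onto the viewing plane (drop depth z)."""
    return [(x, y) for x, y, _ in points]
```

A 90° rotation turns depth displacement into horizontal displacement, which is why a forward lean that is invisible from the front becomes easy to see from the side.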
 Accordingly, the user can check the difference from the model image from various viewpoints by the simple operation of turning the head, and can therefore easily confirm mastery of the model motion and learn it efficiently.
 Next, the processing performed for the superimposed display will be described in detail with reference to FIG. 4. FIG. 4 is a flowchart for explaining an example of the processing related to the superimposed display.
 As shown in FIG. 4, a user (practitioner) who starts practicing first performs initial settings that specify, as a set, the viewpoint angles to be observed and the virtual mirror surfaces to be displayed (S401). An example of the initial settings will now be described with reference to FIG. 5.
 As shown in FIG. 5A, in the initial settings the user selects, from among predetermined viewpoint directions, the viewpoints to be mirror-displayed. In this example, the front viewpoint and the left and right viewpoints are set to ON and are subject to mirror display, while the left-rear and right-rear viewpoints are set to OFF and are not. In this example the predetermined viewpoint directions are the front (0°), 90° left, 90° right, 135° left, and 135° right, but the directions are not limited to these; the number and angles of selectable viewpoints may be changed as appropriate. Although not illustrated, a setting in which a viewpoint from above is subject to mirror display may also be made.
 Further, in the initial settings, the user associates neck rotation angles with the viewpoint angles used for mirror display. This setting will be described with reference to FIG. 5B. In this example, in a fixed coordinate system with the rotation angle defined as 0° when the user faces straight ahead, the front viewpoint is mirror-displayed when the rotation angle is between 60° left and 60° right, the left viewpoint when the rotation angle is between 60° left and 110° left, and the right viewpoint when the rotation angle is between 60° right and 110° right. Then, while the user practices with the model image, the mirror display corresponding to the rotation angle of the user's neck is performed.
 In this example, settings are also made so that the left-rear viewpoint is mirror-displayed when the rotation angle is between 110° left and 150° left, and the right-rear viewpoint when the rotation angle is between 110° right and 150° right; however, settings for viewpoint directions that are set to OFF and not subject to mirror display may be omitted. Although omitted in this example, when a viewpoint from above is made subject to mirror display, a setting for the upward viewpoint direction may be made based on the vertical rotation angle.
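 The angle-to-viewpoint association of FIGS. 5A and 5B can be sketched as a small lookup, as below. This is only an illustrative sketch under assumed conventions (signed degrees, negative for left, positive for right; the range table and names are not taken from the patent):

```python
# Hypothetical sketch of the S401 mapping from neck rotation angle to viewpoint.
# Each entry: (viewpoint name, lower bound, upper bound, ON/OFF for mirror display).
VIEW_RANGES = [
    ("front",       -60.0,   60.0, True),   # ON: mirror-displayed
    ("left",       -110.0,  -60.0, True),
    ("right",        60.0,  110.0, True),
    ("left-rear",  -150.0, -110.0, False),  # OFF: not mirror-displayed
    ("right-rear",  110.0,  150.0, False),
]

def viewpoint_for(angle_deg):
    """Return the viewpoint to mirror-display for a neck rotation angle, or None."""
    for name, lo, hi, enabled in VIEW_RANGES:
        if lo <= angle_deg <= hi:
            return name if enabled else None
    return None
```

A boundary angle such as 60° falls in the first matching range, so the order of the table resolves ties.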
 In the initial settings, angles may also be set so that the self-image and the model image are superimposed from a viewpoint that views the user's head from the front (that is, a third-person viewpoint facing the user). For example, the front viewpoint may be displayed when the orientation of the HMD 1 is between 45° left and 45° right, and the left viewpoint when the orientation is between 45° left and 135° left. Similarly, the right viewpoint may be displayed when the orientation of the HMD 1 is between 45° right and 135° right. The number of selectable viewpoints may also be taken into account; as an example, the HMD 1 may set viewpoints at 20° intervals (for example, 10° left to 10° right, 10° left to 30° left, and so on) to set the angle of the viewpoint that views the user's head from the front. That is, by narrowing the angular ranges while increasing the number of selectable viewpoints, the viewpoint that views the user's head from the front may be set more precisely.
 When turning the head, there may be cases in which the user cannot easily switch to the desired viewpoint direction. Therefore, as shown in FIG. 5C, where θ1 is the orientation angle of the HMD 1 and θ2 is the viewpoint angle used for mirror display, a setting based on the relationship θ1 × α = θ2 may be made. Here, α can be an appropriate value (coefficient) larger than 1. With this relationship, the neck rotation needed to switch to the desired viewpoint direction is reduced, and the viewpoint direction can be switched easily.
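 The amplified relationship θ1 × α = θ2 of FIG. 5C can be sketched as follows; the value of α and the clamping limit are assumptions for illustration, not values from the patent:

```python
# Sketch of the amplified mapping theta1 * alpha = theta2 from FIG. 5C.
# alpha > 1 means a small head turn selects a larger viewpoint angle.
def display_viewpoint_angle(head_angle_deg, alpha=1.5, limit_deg=180.0):
    """Map the HMD orientation theta1 to the mirror-display viewpoint angle theta2."""
    theta2 = head_angle_deg * alpha
    # Clamp so the amplified angle stays within a meaningful range.
    return max(-limit_deg, min(limit_deg, theta2))
```

With α = 1.5, for example, a 60° head turn selects the 90° (side) viewpoint.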
 The data initialized in S401 is stored in the storage unit 21, and the main control unit 11 performs processing by referring to the initialized data.
 When the initial settings are completed and the user starts practicing (S402), the HMD 1 (specifically, the main control unit 11) plays back the model image and first superimposes, on the display unit 53, the self-image and the model image from the straight-ahead viewpoint in the fixed coordinate system (S403).
 The main control unit 11 then detects the orientation of the user's head (in other words, the rotation angle of the neck) using the HMD angle detection unit 56a (S404). The main control unit 11 also determines whether the detection results from an intrinsic practice motion (S405); that is, it determines whether the detection results from a predetermined motion.
 Specifically, in S405 the main control unit 11 determines whether the user's motion detected in S404 corresponds to a motion registered in advance in the storage unit 21. Thus, for example, when gestures serving as operation commands are registered in advance, the operation commands and intrinsic practice motions can be distinguished and processed. In making this determination, the main control unit 11 can use, for example, the data of the self-image. If it is determined in S405 that the user's motion is not an intrinsic practice motion (that is, the motion is an operation command), the process returns to S404 (S405-N). On the other hand, if the main control unit 11 determines that the user's motion is an intrinsic practice motion (that is, not an operation command), it performs the processing of S406 (S405-Y).
 The main control unit 11 executes the rotation processing unit 56b to apply rotation processing to the superimposed image and performs the mirror display in the set direction (S406). That is, the main control unit 11 rotates the self-image and the model image to superimpose them from the viewpoint direction corresponding to the orientation of the user's head. At this time, the main control unit 11 refers to the data initialized in S401. During the user's practice, the HMD 1 repeats the processing of S404 to S406 in real time, thereby superimposing the images from the appropriate viewpoint direction desired by the user.
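 The S404-S406 loop of FIG. 4 can be sketched as below. This is a hypothetical outline: the angle detector, the command-gesture check, and the renderer are stand-ins for the HMD angle detection unit 56a, the S405 determination, and the 3D rotation processing unit 56b, not the patented implementation:

```python
# Sketch of the real-time S404-S406 loop of FIG. 4.
def practice_loop(detect_angle, is_command_gesture, render_superimposed, frames):
    """Repeat: detect head angle (S404), skip operation-command gestures (S405),
    then rotate and superimpose the self/model images (S406)."""
    shown = []
    for _ in range(frames):
        angle = detect_angle()                    # S404
        if is_command_gesture(angle):             # S405-N: operation command
            continue                              # return to S404
        shown.append(render_superimposed(angle))  # S405-Y -> S406
    return shown
```

The stand-in callables make the control flow testable without sensor hardware.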
 As an example, the model image can be stored in the storage unit 21 in advance, and the main control unit 11 can perform processing using the stored model image. The model image may be stored in the storage unit 21, for example, at the time of the initial settings. Alternatively, the main control unit 11 may acquire the model image via the communication processing unit 41 and process it.
 The self-image is generated, for example, based on data from a sensing device worn by the user or from a camera that photographs the user. The camera is placed as appropriate in the environment in which the user practices. As an example, the HMD 1 can use the communication processing unit 41 to acquire the data obtained by the sensing device or camera via wireless communication such as short-range wireless communication. Known sensors capable of measuring the user's motion can be used as the sensing device.
 The self-image and the model image can be three-dimensional skeleton images and can be generated with an appropriate program. A skeleton image can be generated, for example, by representing joints as nodes and constructing edges between the nodes. Alternatively, the self-image and the model image may be real images, or a superimposed display may be performed in which one image is a real image and the other is a skeleton image.
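 A skeleton built from joint nodes and connecting edges, as described above, might be represented as follows. The joint set, coordinates, and edge list are purely illustrative assumptions:

```python
# Minimal sketch of a skeleton: joints as named 3-D nodes, bones as edges.
joints = {
    "head":       (0.0, 1.7, 0.0),
    "neck":       (0.0, 1.5, 0.0),
    "shoulder_l": (-0.2, 1.4, 0.0),
    "shoulder_r": (0.2, 1.4, 0.0),
    "hip":        (0.0, 1.0, 0.0),
}
edges = [("head", "neck"), ("neck", "shoulder_l"),
         ("neck", "shoulder_r"), ("neck", "hip")]

def bone_vectors(joints, edges):
    """Return the 3-D displacement vector of each bone (edge) in the skeleton."""
    return {
        (a, b): tuple(jb - ja for ja, jb in zip(joints[a], joints[b]))
        for a, b in edges
    }
```

Rotating every node about a shared vertical axis would then yield the side-view skeleton used for the mirror display.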
 The HMD 1 may also estimate the entire skeleton image by performing appropriate estimation processing based on the data from the sensing device or camera. For example, when the user's motion can be captured only partially because of the placement of the sensing devices or cameras, the HMD 1 may generate the entire skeleton image by estimation processing based on the acquired data and display the generated skeleton image.
 Left-right reversal processing may also be applied to the superimposed self-image and model image.
 As described above, according to the present embodiment, there is provided an HMD 1 that superimposes a self-image showing the user's motion and a model image showing a model motion, displaying on its display the self-image and the model image from a viewpoint corresponding to the neck rotation angle of the user wearing the HMD 1.
 (1) Furthermore, the HMD 1 of the present embodiment may be configured to switch between a predetermined-viewpoint display mode and a tracking display mode. The predetermined-viewpoint display mode is, as described above, a display mode in which the self-image and the model image from a predetermined viewpoint are superimposed according to the rotation angle of the user's neck. Thus, for example, when the front, left, and right viewpoints are set to ON in the initial settings of S401 as described above, the predetermined-viewpoint display mode performs the mirror display for exactly one of the front, left, and right viewpoints according to the rotation angle of the user's head.
 On the other hand, the tracking display mode is a display mode in which the viewpoints of the superimposed self-image and model image change continuously, following the rotation of the user's neck. Therefore, in the tracking display mode, regardless of the contents of the initial settings in S401, a mirror display that follows the rotation angle of the user's neck is performed, superimposing the images from a viewpoint that views the user's head from the front. For example, when the user turns the head from the front toward the left, a mirror display is performed from a viewpoint whose angle follows the rotation angle of the user's neck, regardless of the initial settings.
 Switching between the predetermined-viewpoint display mode and the tracking display mode can be performed by an appropriate method. The HMD 1 may be configured, for example, so that the two modes are switched by a user operation via the operation input unit 71. The HMD 1 may also be configured to switch between the modes by recognizing a predetermined gesture for switching the display mode. Information on the gestures recognized as operation commands is, as an example, stored in advance in the storage unit 21. Then, when switching the display mode, the HMD 1 recognizes a gesture serving as an operation command by using, for example, information on the self-image acquired from the sensing device or camera together with the information stored in the storage unit 21.
 (2) Furthermore, the HMD 1 of the present embodiment may be configured to switch between a playback display mode and a fixed display mode. The playback display mode is a display mode in which the model image is played back and displayed, whereas the fixed display mode is a mode in which the model image is frozen and displayed. While practicing in the playback display mode, the user can switch to the fixed display mode at an appropriate moment (for example, at a motion the user feels unfamiliar with) to freeze the model image, and then turn the head to superimpose the images from an appropriate viewpoint, thereby easily checking the difference from the model image at that moment. As above, switching between the playback display mode and the fixed display mode may be performed using the operation input unit 71 or using a gesture.
 (3) Furthermore, the HMD 1 of the present embodiment may superimpose images for a plurality of viewpoints simultaneously. As one example, a left-right symmetric display may be performed: when the user turns the head to the left, in addition to the display corresponding to that rotation, the display that would result if the user turned the head by the same angle to the right may also be performed. That is, when the user turns the head in one of the left and right directions, the HMD 1 may superimpose on the display the self-image and the model image from the viewpoint corresponding to that direction, and additionally superimpose the self-image and the model image from the opposite viewpoint, assuming the head were turned by the same amount in the other direction. The two superimposed displays are placed at different positions on the display.
 As another example of superimposed display for a plurality of viewpoints, the HMD 1 may superimpose the self-image and the model image from the front viewpoint on the display and additionally superimpose the self-image and the model image from the rear viewpoint. Again, the superimposed displays are placed at different positions on the display.
 By performing a left-right symmetric superimposed display, or superimposed displays of the front and rear viewpoints, in this way, the differences between the self-image and the model image from the respective viewpoints can be checked simultaneously. The display mode that superimposes images for one viewpoint and the display mode that simultaneously superimposes images for a plurality of viewpoints may be switchable; the HMD 1 may be configured so that these display modes can be switched using the operation input unit or gestures as described above.
 (4) The HMD 1 of the present embodiment may be configured so that the areas used for superimposed display on the display can be set; the areas may be set, for example, by a user operation using the operation input unit 71 at the time of the initial settings. As an example, an area for superimposed display of viewpoints to the left of the front may be set on the left side of the display, and an area for viewpoints to the right of the front on the right side of the display. When the HMD 1 is of a glasses type, the respective areas may be set on the left and right displays. Areas for superimposed display of the front viewpoint and the upward viewpoint may be set on the display in the same manner; these areas may be set, for example, at the same positions as the areas for the left and right viewpoints.
 (5) Furthermore, according to the HMD 1 of the present embodiment, part of the model image can be played back with its tempo (BPM) changed. Changing the tempo will now be described.
 When the user practices along with the model image, learning may be accelerated by slowing the tempo of the model image for parts the user finds difficult or has not yet mastered. The HMD 1 may therefore be configured so that the parts of the model image to be played back at a slower tempo can be set.
 The parts to be played back at a slower tempo may be set as appropriate; as an example, they can be set using the operation input unit 71 or gestures, as described above. The HMD 1 may also be configured to automatically set the parts where the tempo is slowed by using an evaluation model that evaluates the degree of mastery of the model image. The evaluation model, which evaluates whether the content of the model is being reproduced, can be generated, for example, by machine learning using the content practiced by the user and the content of the model. The parts to be slowed may be set, for example, at the time of the initial settings in S401, or during practice.
 As shown in FIG. 6A, the model image may also follow the tempo of the practitioner (user); that is, the model image may be played back to match the tempo at which the user moves. In FIG. 6A, in the part indicated by A the user stands still to check the posture, so the tempo of the model image in this part follows the user's tempo and slows down. The part indicated by B is an easy or already-mastered part, and when the user practices at a faster tempo, the tempo of the model image follows and speeds up. The part indicated by C is a complex part, and when the user practices in slow motion, the tempo of the model image follows and slows down. In the parts indicated by A and C, the user can practice efficiently by turning the head and checking the superimposed display from a viewpoint where the difference from the model image is easy to see. Making the model image follow the user's tempo in this way enables efficient practice. This processing will now be described in detail with reference to FIG. 6B.
 First, the HMD 1 (specifically, the main control unit 11) determines whether the practitioner has started a practice motion (S601). As an example, the HMD 1 determines the start of the practice motion by comparing the motion with that of the model image (that is, by determining whether the practitioner performed a motion matching the model image). When the HMD 1 detects the start of the practice motion, it detects the practitioner's start position (that is, the position in the model image corresponding to the start of practice) and the practitioner's tempo. The HMD 1 then starts the model image and outputs it from the position corresponding to the start of practice (S602).
 The HMD 1 monitors the tempo of the practitioner (S603). The HMD 1 then determines whether the practitioner's tempo has changed (S604), and when it has, changes the tempo of the model image to match the practitioner's tempo (S605). By repeating the processing of S603 to S605, the HMD 1 makes the tempo of the model image appropriately follow that of the practitioner.
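 The S603-S605 loop of FIG. 6B can be sketched as below; the change-detection threshold is an assumption added for illustration, not a value from the patent:

```python
# Sketch of the S603-S605 tempo-following loop: the model-image tempo is
# re-matched to the practitioner whenever a tempo change is detected.
def follow_tempo(practitioner_bpms, initial_model_bpm, threshold=1.0):
    """Return the model-image BPM after each monitoring step."""
    model_bpm = initial_model_bpm
    history = []
    for bpm in practitioner_bpms:              # S603: monitor the practitioner
        if abs(bpm - model_bpm) >= threshold:  # S604: has the tempo changed?
            model_bpm = bpm                    # S605: match the practitioner
        history.append(model_bpm)
    return history
```

The threshold keeps small measurement jitter from constantly retiming the model playback.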
 Playback of the model image without changing the tempo, playback that changes the tempo of the set parts, and playback that follows the practitioner's tempo can be switched as appropriate. For example, the HMD 1 can be configured so that the switching is performed at the start of practice using the operation input unit 71 or a gesture, as described above.
 (6) Furthermore, when the difference between the superimposed self-image and model image is equal to or greater than a threshold, the HMD 1 of the present embodiment may perform processing to notify the user that a difference has occurred between the self-image and the model image at the superimposed viewpoint. By recognizing this notification, the user can easily confirm that the difference from the model image is large. The form of the notification is not particularly limited; as an example, an audio notification may be given using the audio output unit, or a message to that effect may be shown on the display.
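 The threshold check described above might look like the following sketch. The distance metric (mean 3-D joint distance) and the threshold value are assumptions for illustration; the patent does not specify how the difference is computed:

```python
import math

# Sketch of the (6) difference check: compare corresponding joints of the
# superimposed self-image and model image against a threshold.
def difference_exceeds(self_joints, model_joints, threshold):
    """Return True when the mean 3-D joint distance is at or above threshold."""
    dists = [
        math.dist(self_joints[name], model_joints[name])
        for name in self_joints
    ]
    return sum(dists) / len(dists) >= threshold
```

When this returns True, the HMD would trigger the audio or on-screen notification.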
 <Second embodiment>
 Next, a second embodiment will be described with reference to FIGS. 7 and 8. Functions similar to those of the other embodiments are given the same reference numerals, and descriptions duplicating those of the other embodiments may be omitted. The second embodiment describes a head-mounted display system (sometimes referred to as the HMD system 703) comprising a distribution device 701 and an HMD 702.
 The distribution device 701 is a device that distributes the model image. As shown in FIG. 7, in the present embodiment the distribution device 701 is a server, and the server acquires, via a network (NW in FIG. 7), a model image shot at a remote location. The server then provides a service of distributing the model image to the user side, and the HMD 702 displays the model image acquired from the server.
 In the present embodiment, the HMD 702 may be configured to have the same functions as in the first embodiment and can perform the same superimposed display. The HMD 702 can also superimpose the self-image and the model image from various viewpoints in the same manner as in the first embodiment.
 In the present embodiment, a smartphone acquires the model image distributed from the distribution device 701, and the HMD 702 acquires the model image via the smartphone. In the present embodiment, the HMD 702 can also generate the user's self-image by using the camera of the smartphone as the camera that photographs the user's motion.
 The HMD system 703 can be modified as appropriate. In the description above, the server serves as the distribution device 701, but an HMD system 703 in which the distribution device 701 is a smartphone may also be provided. That is, the smartphone may acquire the model image via appropriate communication and distribute it. An HMD system 703 may also be provided in which the smartphone stores the model image and the HMD 702 acquires the stored model image from the smartphone. Furthermore, an HMD system 703 may be provided in which the smartphone is omitted and the model image is distributed directly from the distribution device 701 (for example, a server) to the HMD 702.
 The distribution device 701 may also acquire the self-image and the model image, generate the data for superimposed display, and distribute the generated data; the HMD 702 may then perform the superimposed display based on this data. In the example of FIG. 8, when practice starts, the distribution device 701 (in this example, a server) converts the self-image and the model image to 3D and generates the data for superimposed display (the self-image and the model image combined) (S801). The generated data is continuously transferred to the HMD 702 (S802). On the HMD 702 side, the same processing as described above (that is, the processing described with reference to FIG. 4) is performed (S803 to S806).
 配信装置701として、サーバやスマホを例として説明したが、これ以外の適宜の装置が配信装置701とされてもよい。 Although a server and a smartphone have been described as examples of the distribution device 701, any other appropriate device may be used as the distribution device 701.
 以上、本発明の実施形態について説明したが、言うまでもなく、本発明の技術を実現する構成は上記実施形態に限られるものではなく、様々な変形例が考えられる。例えば、前述した実施の形態は、本発明を分かり易く説明するために詳細に説明したものであり、必ずしも説明した全ての構成を備えるものに限定されるものではない。また、ある実施形態の構成の一部を他の実施形態の構成と置き換えることが可能であり、また、ある実施形態の構成に他の実施形態の構成を加えることも可能である。これらは全て本発明の範疇に属するものである。また、文中や図中に現れる数値やメッセージ等もあくまでも一例であり、異なるものを用いても本発明の効果を損なうことはない。 Although the embodiments of the present invention have been described above, it goes without saying that the configuration for realizing the technology of the present invention is not limited to the above embodiments, and various modifications are possible. For example, the embodiments described above have been described in detail to explain the present invention in an easy-to-understand manner, and the present invention is not necessarily limited to having all the configurations described. Furthermore, it is possible to replace a part of the configuration of one embodiment with the configuration of another embodiment, and it is also possible to add the configuration of another embodiment to the configuration of one embodiment. All of these belong to the scope of the present invention. Further, the numerical values, messages, etc. that appear in the text and figures are merely examples, and the effects of the present invention will not be impaired even if different values are used.
 所定の処理を実行することができればよく、例えば、各処理例で用いられるプログラムは、それぞれ独立したプログラムでもよく、複数のプログラムが一つのアプリケーションプログラムを構成していてもよい。また、各処理を行う順番を入れ替えて実行するようにしてもよい。 It is sufficient that a predetermined process can be executed. For example, the programs used in each process example may be independent programs, or a plurality of programs may constitute one application program. Furthermore, the order in which each process is performed may be changed.
 前述した本発明の機能等は、それらの一部または全部を、例えば集積回路で設計する等によりハードウェアで実現してもよい。また、マイクロプロセッサユニット、CPU等がそれぞれの機能等を実現する動作プログラムを解釈して実行することによりソフトウェアで実現してもよい。また、ソフトウェアの実装範囲を限定するものでなく、ハードウェアとソフトウェアを併用してもよい。また、各機能の一部または全部をサーバで実現してもよい。なお、サーバは、通信を介して他の構成部分と連携し機能の実行が出来ればよく、例えば、ローカルサーバ、クラウドサーバ、エッジサーバ、ネットサービス等であり、その形態は問わない。各機能を実現するプログラム、テーブル、ファイル等の情報は、メモリや、ハードディスク、SSD(Solid State Drive)等の記録装置、または、ICカード、SDカード、DVD等の記録媒体に格納されてもよいし、通信網上の装置に格納されてもよい。 Some or all of the functions of the present invention described above may be realized in hardware, for example by designing an integrated circuit. Alternatively, they may be realized in software by having a microprocessor unit, CPU, or the like interpret and execute operating programs that realize the respective functions. The scope of software implementation is not limited, and hardware and software may be used together. Part or all of each function may also be realized by a server. The server only needs to be able to execute the functions in cooperation with other components via communication; it may be, for example, a local server, a cloud server, an edge server, or a network service, and its form does not matter. Information such as the programs, tables, and files that realize each function may be stored in a memory, in a recording device such as a hard disk or SSD (Solid State Drive), on a recording medium such as an IC card, SD card, or DVD, or in a device on a communication network.
 ディスプレイには、重畳表示する視点を示す適宜の情報が表示されてもよい。例えば、左90°に関する視点の重畳表示をする場合には、この重畳表示に併せて「左90°」との文字情報がディスプレイ上に表示されてもよい。 Appropriate information indicating the viewpoint to be superimposed may be displayed on the display. For example, when superimposing a viewpoint regarding 90 degrees to the left, text information such as "90 degrees to the left" may be displayed on the display together with this superimposed display.
 HMD(1、702)は、ユーザの首の回転に関わらず、差分が最大の視点の自映像と手本映像を重畳表示する表示モードと、実施形態で説明されたように、ユーザの首の回転に応じた視点の自映像と手本映像を重畳表示する表示モードと、を切替えることができるように構成されてもよい。ここで、HMD(1、702)は、上記で説明したように、操作入力部71やジェスチャーにより、これらの表示モードを切替え可能に構成されてもよい。 The HMD (1, 702) may be configured to be able to switch between a display mode in which the self-image and the model image are superimposed from the viewpoint with the largest difference, regardless of the rotation of the user's neck, and a display mode in which, as described in the embodiment, the self-image and the model image are superimposed from a viewpoint corresponding to the rotation of the user's neck. Here, as described above, the HMD (1, 702) may be configured so that these display modes can be switched using the operation input unit 71 or gestures.
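A minimal sketch of switching between these two display modes; the mode names and the `difference_by_viewpoint` interface are assumptions made for illustration, not part of the disclosure:

```python
FOLLOW_NECK = "follow_neck"        # viewpoint follows the neck rotation
MAX_DIFFERENCE = "max_difference"  # viewpoint with the largest difference

def choose_viewpoint(mode, neck_angle_deg, difference_by_viewpoint):
    # Pick the viewpoint angle used for the superimposed display.
    # `difference_by_viewpoint` maps candidate viewpoint angles to the
    # measured difference between the self-image and the model image.
    if mode == FOLLOW_NECK:
        return neck_angle_deg
    if mode == MAX_DIFFERENCE:
        return max(difference_by_viewpoint, key=difference_by_viewpoint.get)
    raise ValueError(f"unknown display mode: {mode}")
```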
 HMD(1、702)は、メガネ型であってもよいし、ゴーグル型であってもよい。また、HMD(1、702)は、スマホを適宜に装着可能であり、スマホの表示画面をディスプレイ(表示部53)とし、スマホのアウトカメラ(すなわち、表示画面とは異なる反対側の面に設けられるカメラ)をカメラ(撮像部52)として用いることができる構成であってもよい。 The HMD (1, 702) may be of a glasses type or a goggle type. The HMD (1, 702) may also be configured so that a smartphone can be mounted on it as appropriate, with the smartphone's display screen used as the display (display unit 53) and the smartphone's out-camera (that is, the camera provided on the side opposite to the display screen) used as the camera (imaging unit 52).
 HMD(1、702)は、重畳表示以外に、図9に示すような手本映像と自映像を連動(同期)した映像をディスプレイに表示することができる構成であってもよい。ここで、HMD(1、702)は、この表示と重畳表示を切替えることができるように構成されてもよく、一例として、上記で説明したように、操作入力部71やジェスチャーにより切替え可能に構成されてもよい。 In addition to the superimposed display, the HMD (1, 702) may be configured to display on the display an image in which the model image and the self-image are linked (synchronized) as shown in FIG. 9. Here, the HMD (1, 702) may be configured to be able to switch between this display and the superimposed display; as one example, as described above, switching may be performed using the operation input unit 71 or gestures.
 表示モードの切替えなどの適宜の操作は、音声入力部62を用いた音声入力、および、適宜の音声認識により、行われてもよい。この場合、音声認識に用いる音声データは、記憶部21に予め記憶され、主制御部11は、記憶部21に予め記憶された音声データを参照して音声認識を行う。 Appropriate operations such as switching the display mode may be performed by voice input using the voice input unit 62 and appropriate voice recognition. In this case, the voice data used for voice recognition is stored in advance in the storage section 21, and the main control section 11 performs voice recognition by referring to the voice data stored in advance in the storage section 21.
1  HMD(ヘッドマウントディスプレイ)
11 主制御部(プロセッサ)
53 表示部(ディスプレイ)
56 仮想鏡像表示部
56a HMD角度検出部
56b 3D回転処理部
56c 重畳表示部
1 HMD (head mounted display)
11 Main control unit (processor)
53 Display section (display)
56 Virtual mirror image display section
56a HMD angle detection section
56b 3D rotation processing section
56c Superimposed display section

Claims (15)

  1.  プロセッサと、ディスプレイと、を備えるヘッドマウントディスプレイであって、
     前記プロセッサは、
     ユーザの動作を示す自映像と手本の動作を示す手本映像を、前記ヘッドマウントディスプレイを頭部に装着したユーザの首の回転角度に応じた視点で前記ディスプレイに重畳表示する、
    ことを特徴とするヘッドマウントディスプレイ。
    A head mounted display comprising a processor and a display,
    The processor includes:
    superimposing and displaying a self-image showing a user's movement and a model image showing a model movement on the display from a viewpoint corresponding to a rotation angle of the neck of the user wearing the head-mounted display on the head;
    A head-mounted display characterized by:
  2.  請求項1に記載のヘッドマウントディスプレイであって、
     前記プロセッサは、
     前記ユーザが左右方向に首を回転するときに、前記ユーザの頭部を前方から見る視点の前記自映像と前記手本映像を、前記ディスプレイに重畳表示する、
    ことを特徴とするヘッドマウントディスプレイ。
    The head mounted display according to claim 1,
    The processor includes:
    When the user rotates his or her head in a left-right direction, the self-image and the model image from a front view of the user's head are displayed in a superimposed manner on the display.
    A head-mounted display characterized by:
  3.  請求項2に記載のヘッドマウントディスプレイであって、
     前記プロセッサは、
     前記ユーザが上下方向に首を回転するときに、前記ユーザを上方から見る視点の前記自映像と前記手本映像を、前記ディスプレイに重畳表示する、
    ことを特徴とするヘッドマウントディスプレイ。
    The head mounted display according to claim 2,
    The processor includes:
    When the user rotates his or her head in a vertical direction, the self-image and the model image from a viewpoint viewing the user from above are displayed in a superimposed manner on the display.
    A head-mounted display characterized by:
  4.  請求項1に記載のヘッドマウントディスプレイであって、
     記憶部を備え、
     前記記憶部は、
     前記ユーザの首の左右方向の回転角度に応じた前記視点の角度を示すデータを記憶し、
     前記プロセッサは、
     前記データを参照して、前記ユーザの首の回転角度に応じた視点の前記自映像と前記手本映像の重畳表示を行う、
    ことを特徴とするヘッドマウントディスプレイ。
    The head mounted display according to claim 1,
    Equipped with a storage section,
    The storage unit includes:
    storing data indicating the angle of the viewpoint according to the rotation angle of the user's neck in the left-right direction;
    The processor includes:
    superimposing and displaying the self-image and the model image at a viewpoint according to the rotation angle of the user's neck with reference to the data;
    A head-mounted display characterized by:
  5.  請求項4に記載のヘッドマウントディスプレイであって、
     前記データは、
     前記ユーザの首の回転角度をθ1として、前記視点の方向の角度をθ2として、1よりも大きい係数をαとするときに、θ1×α=θ2の関係に基づいて設定される、
    ことを特徴とするヘッドマウントディスプレイ。
    The head mounted display according to claim 4,
    The data is
    Set based on the relationship θ1×α=θ2, where θ1 is the rotation angle of the user's neck, θ2 is the angle in the direction of the viewpoint, and α is a coefficient larger than 1.
    A head-mounted display characterized by:
  6.  請求項1に記載のヘッドマウントディスプレイであって、
     前記ヘッドマウントディスプレイは、
     前記ユーザの首の回転に追従して前記自映像と前記手本映像の視点が変更される重畳表示を行う表示モードである追従表示モードと、
     前記ユーザの首の回転角度に応じて所定の視点の前記自映像と前記手本映像を重畳表示する表示モードである所定視点表示モードと、を切替可能である、
    ことを特徴とするヘッドマウントディスプレイ。
    The head mounted display according to claim 1,
    The head-mounted display is capable of switching between:
    a tracking display mode, which is a display mode performing superimposed display in which the viewpoints of the self-image and the model image change following the rotation of the user's neck; and
    a predetermined viewpoint display mode, which is a display mode in which the self-image and the model image from a predetermined viewpoint are superimposed and displayed according to the rotation angle of the user's neck,
    A head-mounted display characterized by:
  7.  請求項1に記載のヘッドマウントディスプレイであって、
     前記プロセッサは、
     前記手本映像の一部のテンポを他の部分よりも遅く再生し、
     前記手本映像のテンポを遅くする部分は、
     ユーザにより設定される、あるいは、前記手本映像の習得度合いを評価する評価モデルを前記プロセッサが用いることにより設定される、
    ことを特徴とするヘッドマウントディスプレイ。
    The head mounted display according to claim 1,
    The processor includes:
    plays back a part of the model video at a slower tempo than the other parts, and
    the part of the model video whose tempo is slowed is
    set by the user, or is set by the processor using an evaluation model that evaluates the degree of mastery of the model video,
    A head-mounted display characterized by:
  8.  請求項1に記載のヘッドマウントディスプレイであって、
     前記プロセッサは、
     前記ユーザが動作するテンポに合わせて前記手本画像を再生する、
    ことを特徴とするヘッドマウントディスプレイ。
    The head mounted display according to claim 1,
    The processor includes:
    reproducing the model image in accordance with the tempo at which the user moves;
    A head-mounted display characterized by:
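Playing the model video in time with the user's own tempo, as in this claim, might reduce to a playback-rate ratio; measuring the user's motion period (for example, from the captured self-video) is assumed to happen elsewhere:

```python
def playback_rate(user_period_s, model_period_s):
    # Rate at which to play the model video so that its tempo matches
    # the tempo at which the user is actually moving.  A rate below
    # 1.0 slows the model video down to the user's pace.
    if user_period_s <= 0:
        raise ValueError("user period must be positive")
    return model_period_s / user_period_s
```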
  9.  請求項1に記載のヘッドマウントディスプレイであって、
     前記プロセッサは、
     前記ユーザが左右方向の一方に首を回転するときに、前記回転の方向に対応する視点の前記自映像と前記手本映像を、第1の重畳表示として前記ディスプレイに重畳表示し、
     前記ユーザが同じ分だけ他方に首を回転したと仮定した場合における前記回転の方向に対応する視点の前記自映像と前記手本映像を、第2の重畳表示として前記ディスプレイに重畳表示し、
     前記プロセッサは、
     前記の第1の重畳表示と、前記の第2の重畳表示と、を前記ディスプレイ上の異なる位置に行う、
    ことを特徴とするヘッドマウントディスプレイ。
    The head mounted display according to claim 1,
    The processor includes:
    When the user rotates his or her head in one of the left and right directions, the self-image and the model image from a viewpoint corresponding to the direction of rotation are displayed in a superimposed manner on the display as a first superimposed display;
    Displaying the self-image and the model image from a viewpoint corresponding to the direction of rotation in a superimposed manner on the display as a second superimposed display, assuming that the user rotates his or her head in the other direction by the same amount;
    The processor includes:
    performing the first superimposed display and the second superimposed display at different positions on the display;
    A head-mounted display characterized by:
  10.  請求項9に記載のヘッドマウントディスプレイであって、
     前記プロセッサは、
     前記ユーザが左右方向の一方に首を回転するときの重畳表示をする第1エリアと、
     前記ユーザが左右方向の他方に首を回転するときの重畳表示をする第2エリアと、を前記ディスプレイ上に設定する、
    ことを特徴とするヘッドマウントディスプレイ。
    The head mounted display according to claim 9,
    The processor includes:
    sets, on the display, a first area in which the superimposed display is performed when the user rotates his or her head in one of the left and right directions, and
    a second area in which the superimposed display is performed when the user rotates his or her head in the other of the left and right directions,
    A head-mounted display characterized by:
  11.  請求項1に記載のヘッドマウントディスプレイであって、
     前記プロセッサは、
     重畳表示している自映像と手本映像の差分が閾値以上である場合に、差分が発生している旨の通知を行う、
    ことを特徴とするヘッドマウントディスプレイ。
    The head mounted display according to claim 1,
    The processor includes:
    notifies, when the difference between the superimposed self-image and model image is equal to or greater than a threshold, that a difference has occurred,
    A head-mounted display characterized by:
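The threshold check of this claim can be sketched as follows; representing each image by joint positions and using the mean joint distance as the difference measure are illustrative assumptions:

```python
import math

def pose_difference(self_joints, model_joints):
    # Mean Euclidean distance between corresponding joint positions.
    dists = [math.dist(s, m) for s, m in zip(self_joints, model_joints)]
    return sum(dists) / len(dists)

def check_and_notify(self_joints, model_joints, threshold, notify):
    # Notify that a difference has occurred when the difference between
    # the superimposed self-image and model image reaches the threshold.
    diff = pose_difference(self_joints, model_joints)
    if diff >= threshold:
        notify(diff)
    return diff
```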
  12.  請求項1に記載のヘッドマウントディスプレイであって、
     前記ヘッドマウントディスプレイは、
     前記手本映像を再生して表示する表示モードである再生表示モードと、
     前記手本映像を固定して表示する表示モードである固定表示モードと、を切替可能である、
    ことを特徴とするヘッドマウントディスプレイ。
    The head mounted display according to claim 1,
    The head-mounted display is capable of switching between:
    a playback display mode, which is a display mode in which the model video is played back and displayed; and
    a fixed display mode, which is a display mode in which the model video is displayed in a fixed manner,
    A head-mounted display characterized by:
  13.  配信装置と、
     ヘッドマウントディスプレイと、
    を備え、
     前記配信装置は、
     手本の動作を示す手本映像を配信し、
     前記ヘッドマウントディスプレイは、
     プロセッサと、
     ディスプレイと、
    を備え、
     前記プロセッサは、
     ユーザの動作を示す自映像と前記配信装置から配信される前記手本映像を、前記ヘッドマウントディスプレイを頭部に装着したユーザの首の回転角度に応じた視点で前記ディスプレイに重畳表示する、
    ことを特徴とするヘッドマウントディスプレイシステム。
    a distribution device;
    head mounted display,
    Equipped with
    The distribution device includes:
    Distributing model videos showing model actions,
    The head mounted display is
    a processor;
    display and
    Equipped with
    The processor includes:
    superimposing and displaying a self-image showing the user's actions and the model image distributed from the distribution device on the display at a viewpoint corresponding to a rotation angle of the neck of the user wearing the head-mounted display on the head;
    A head-mounted display system characterized by:
  14.  プロセッサを用いて行うヘッドマウントディスプレイの表示方法であって、
    ユーザの動作を示す自映像と手本の動作を示す手本映像を、前記ヘッドマウントディスプレイを頭部に装着したユーザの首の回転角度に応じた視点でディスプレイに重畳表示する、
    ことを特徴とするヘッドマウントディスプレイの表示方法。
    A display method for a head-mounted display using a processor, the method comprising:
    superimposing and displaying a self-image showing the user's movement and a model image showing the model movement on a display at a viewpoint corresponding to a rotation angle of the neck of the user wearing the head-mounted display on the head;
    A display method for a head-mounted display characterized by:
  15.  請求項14に記載のヘッドマウントディスプレイの表示方法をプロセッサに実行させるプログラム。
    A program that causes a processor to execute the display method for a head-mounted display according to claim 14.
PCT/JP2022/023903 2022-06-15 2022-06-15 Head-mounted display, head-mounted display system, and display method for head-mounted display WO2023242981A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/JP2022/023903 WO2023242981A1 (en) 2022-06-15 2022-06-15 Head-mounted display, head-mounted display system, and display method for head-mounted display


Publications (1)

Publication Number Publication Date
WO2023242981A1 true WO2023242981A1 (en) 2023-12-21

Family

ID=89192464


Country Status (1)

Country Link
WO (1) WO2023242981A1 (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2015186531A (en) * 2014-03-26 2015-10-29 国立大学法人 東京大学 Action information processing device and program
JP2016047219A (en) * 2014-08-28 2016-04-07 学校法人立命館 Body motion training support system
JP2017136142A (en) * 2016-02-02 2017-08-10 セイコーエプソン株式会社 Information terminal, motion evaluation system, motion evaluation method, motion evaluation program, and recording medium
JP2017158184A (en) * 2016-02-29 2017-09-07 シナノケンシ株式会社 Optional viewpoint video composite display system
JP2019012965A (en) * 2017-06-30 2019-01-24 富士通株式会社 Video control method, video control device, and video control program
JP2019187501A (en) * 2018-04-18 2019-10-31 美津濃株式会社 Swing analysis system and swing analysis method
JP2020107092A (en) * 2018-12-27 2020-07-09 株式会社コロプラ Program, method and information processing apparatus
WO2020179128A1 (en) * 2019-03-06 2020-09-10 株式会社日立製作所 Learning assist system, learning assist device, and program



Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22946801

Country of ref document: EP

Kind code of ref document: A1