WO2017051595A1 - Information processing device, method, and program - Google Patents

Information processing device, method, and program

Info

Publication number
WO2017051595A1
Authority
WO
WIPO (PCT)
Prior art keywords
information
adjustment
line of sight
display
Prior art date
Application number
PCT/JP2016/070725
Other languages
English (en)
Japanese (ja)
Inventor
翼 塚原
木村 淳
Original Assignee
ソニー株式会社
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by ソニー株式会社
Publication of WO2017051595A1

Classifications

    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/02Viewing or reading apparatus
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G5/00Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G5/00Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
    • G09G5/36Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators characterised by the display of a graphic pattern, e.g. using an all-points-addressable [APA] memory
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G5/00Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
    • G09G5/36Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators characterised by the display of a graphic pattern, e.g. using an all-points-addressable [APA] memory
    • G09G5/38Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators characterised by the display of a graphic pattern, e.g. using an all-points-addressable [APA] memory with means for controlling the display position
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/64Constructional details of receivers, e.g. cabinets or dust covers

Definitions

  • the present disclosure relates to an information processing apparatus, an information processing method, and a program.
  • In the prior art, a calibration image having marker points at the four corners and the center of a display screen is displayed on a head mounted display (HMD), and calibration is described as being performed by the user aligning his or her line of sight with those marker points at the start of use.
  • According to the present disclosure, there is provided an information processing apparatus including: a line-of-sight information acquisition unit that acquires line-of-sight information when display information is viewed; and an adjustment execution unit that adjusts the positional relationship of the line-of-sight information with respect to the display information, the adjustment execution unit being capable of executing a first adjustment that adjusts the line-of-sight information with respect to the display information and a second adjustment that adjusts the line-of-sight information with respect to the display information based on a smaller amount of information than the first adjustment.
  • The adjustment execution unit may be capable of executing the first adjustment, which adjusts the direction of the line of sight with respect to a plurality of reference positions on the display screen corresponding to the display information, and the second adjustment, which adjusts the direction of the line of sight with respect to fewer reference positions on the display screen than the first adjustment.
  • The apparatus may further include an image processing unit that generates an image in which a marker indicating the reference position is displayed on the display screen, and an offset amount acquisition unit that acquires the offset amount between the reference position and the line-of-sight information acquired while the direction of the line of sight faces the reference position; the adjustment execution unit may perform the first or second adjustment based on the offset amount.
  • the mode selection unit may determine whether to perform the first adjustment or the second adjustment based on information about a user or information about display content.
  • the mode selection unit may determine whether to perform the first adjustment or the second adjustment based on a user action recognition result.
  • the mode selection unit may determine whether to perform the first adjustment or the second adjustment based on display content.
  • the image processing unit may display the marker at the center of the display screen.
  • the image processing unit may display the marker so that the size of the marker decreases with the passage of time at the center of the display screen.
  • the image processing unit may display a countdown display together with the marker.
  • the image processing unit may display the marker in an area of the display screen that is frequently used by the application displayed on the display screen.
  • the image processing unit may display the marker as a 3D display at a depth position frequently used by the application displayed on the display screen.
  • the marker may be displayed when the application displayed on the display screen is started, while the application is running, or when the application is terminated.
  • the image processing unit may display the markers at least in the center and at the four corners of the display screen.
  • a sensor adjustment execution unit that adjusts the reference position on the display screen and the reference direction obtained by the position sensor during execution of the second adjustment may be provided.
  • the information processing apparatus may be mounted on a head mounted display that provides the display information.
  • There are also provided an information processing method and a program that include acquiring line-of-sight information when display information is viewed and adjusting the line-of-sight information with respect to the display information, by a first adjustment or by a second adjustment based on a smaller amount of information than the first adjustment.
  • FIG. 13 is a schematic diagram showing an example in which a virtual object rendered with a three-dimensional representation is displayed in 3D when calibration using the second image is performed.
  • FIG. 14 is a schematic diagram showing the images that the left and right display units display, and that the user's left and right eyes visually recognize, in the example of FIG. 13.
  • FIG. 18 is a flowchart showing the processing performed by the HMD.
  • the line-of-sight adjustment apparatus 1000, together with the glasses 900, constitutes a head mounted display (hereinafter referred to as HMD (Head Mounted Display)) 1100.
  • the line-of-sight adjustment apparatus 1000 includes a main body unit 100, a display unit 200, a line-of-sight camera 300, a front camera 400, a sound acquisition unit (microphone) 500, an operation input unit (operation switch) 600, and a 9-axis sensor 650 (not shown in FIG. 1).
  • the 9-axis sensor 650 is built in the main body 100.
  • the main body 100 incorporates a control board for controlling the operation of the line-of-sight adjustment apparatus 1000.
  • the main body unit 100 includes a control board having a CPU (Central Processing Unit), a RAM (Random Access Memory), and the like, and is connected to the line-of-sight camera 300, the front camera 400, and the sound acquisition unit 500 via signal lines or the like.
  • the display unit 200 is provided in a portion of the lens 902 of the glasses 900 and displays images such as text and figures in a transparent or translucent state, so that an AR virtual object (GUI) can be displayed superimposed on the real-space landscape. That is, the HMD 1100 is realized as a transmissive HMD.
  • the display unit 200 projects the image light provided from the main body unit 100 toward the outside of the line-of-sight adjustment apparatus 1000, that is, the eyes of the user wearing the HMD 1100.
  • note that the HMD 1100 need not be a transmissive type; in that case the display unit may be provided separately from the lens 902 and provide an image to the user's pupil.
  • the line-of-sight camera 300 is provided at the nose portion of the glasses 900.
  • the line-of-sight camera 300 includes an image sensor such as a CMOS sensor, captures an image of the user's pupil, and detects the direction of the user's line of sight.
  • the front camera 400 has an image sensor such as a CMOS sensor, captures an external field image in front of the glasses 900, and acquires image data representing the external field image.
  • the line-of-sight adjustment apparatus 1000 can obtain the distance to each object existing in the real space based on the principle of triangulation.
  • Image data representing the external image acquired by the front camera 400 is sent to the main body 100.
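  • As a concrete illustration of the triangulation mentioned above, the following minimal sketch computes depth from stereo disparity. It assumes a calibrated, rectified stereo pair of front cameras (the text mentions triangulation but does not detail the camera arrangement); the function name and example values are illustrative, not taken from the patent.

```python
def depth_from_disparity(disparity_px: float, focal_length_px: float, baseline_m: float) -> float:
    """Triangulated distance Z = f * B / d for a rectified stereo pair."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a visible point")
    return focal_length_px * baseline_m / disparity_px

# Example: f = 800 px, baseline = 6 cm, disparity = 12 px -> Z = 4.0 m
print(depth_from_disparity(12.0, 800.0, 0.06))
```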
  • the sound acquisition unit 500 acquires voices uttered by the user, external sounds, and the like as voice data.
  • the user can cause the line-of-sight adjustment apparatus 1000 to perform a desired operation by uttering a command by voice.
  • the operation input unit 600 includes a switch that can be operated by a user to input a desired command.
  • the user can cause the line-of-sight adjustment apparatus 1000 to perform a desired operation by operating the operation input unit 600.
  • the contents of the operation input from the operation input unit 600 can be displayed on the display unit 200.
  • with the above-described configuration, the HMD 1100 superimposes a virtual object or the like displayed by the display unit 200 on the real space that the user visually recognizes through the lens 902 of the glasses 900, so that so-called augmented reality (AR) can be realized.
  • FIG. 2 is a schematic diagram showing an image obtained when the user visually recognizes the front with the HMD 1100.
  • FIG. 2 shows an image visually recognized by the user when using a cycling application described later.
  • an icon 952 that is a virtual object is superimposed and displayed on an image of the real space (road surface 950 of the road).
  • in a device such as a personal computer or a smartphone, selecting an icon requires an operation such as moving a cursor with a mouse and clicking the icon, or tapping the icon on a touch panel.
  • in the HMD 1100, by contrast, when the user directs the line of sight to the displayed icon 952, the line-of-sight camera 300 detects that the direction of the line of sight is aimed at the icon 952, and when the sound acquisition unit 500 then acquires voice data, the icon 952 at which the line of sight is directed can be selected.
  • a predetermined gesture may be performed using a hand with the line of sight directed to the displayed icon 952.
  • the gesture is detected by the front camera 400, and the icon 952 with a line of sight can be selected using the detection of the gesture as a trigger.
  • the user can also select the icon 952 by a time-out, that is, by directing his or her line of sight at the displayed icon 952 for a predetermined time.
  • the icon 952 can be selected by performing a predetermined operation from the operation input unit 600 in a state where the line of sight is directed to the displayed icon 952.
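  • As a hypothetical sketch of the dwell (time-out) selection just described, the logic below selects an icon once the gaze has stayed inside its bounds for a fixed duration. The class and parameter names are illustrative, and the 1.5 s threshold is an assumption, not a value from the patent.

```python
import time

class DwellSelector:
    def __init__(self, dwell_seconds: float = 1.5) -> None:
        self.dwell_seconds = dwell_seconds
        self._enter_time = None  # when the gaze entered the icon bounds

    def update(self, gaze_xy, icon_bounds) -> bool:
        """Return True once the gaze has dwelled on the icon long enough.

        gaze_xy: (x, y) gaze point; icon_bounds: (x0, y0, x1, y1) rectangle.
        """
        x, y = gaze_xy
        x0, y0, x1, y1 = icon_bounds
        inside = x0 <= x <= x1 and y0 <= y <= y1
        now = time.monotonic()
        if not inside:
            self._enter_time = None  # gaze left the icon: reset the timer
            return False
        if self._enter_time is None:
            self._enter_time = now
        return (now - self._enter_time) >= self.dwell_seconds
```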
  • the type of virtual object displayed by the display unit 200 differs depending on the application used by the HMD 1100.
  • Examples of such applications include games and exercise applications used during exercise such as jogging and cycling.
  • Since the display unit 200 is provided on the left and right sides, corresponding to the left and right lenses 902, a 3D virtual object can be displayed (stereoscopic display, or 3D display) by giving parallax to the left and right images.
  • the 3D virtual object can be displayed in correspondence with the depth position of the object in the real space.
  • a 3D virtual object (such as a cup) can be displayed on a desk that exists in real space. Thereby, the user can visually recognize an image as if a cup is actually placed on a desk in real space.
  • FIG. 3 is a block diagram illustrating a configuration of the main body 100 of the line-of-sight adjustment apparatus 1000.
  • the main body unit 100 has a CPU, and the configuration shown in FIG. 3 can be realized by the CPU and a program (software) that causes the CPU to function.
  • the main body 100 includes an image processing unit 102, an offset amount detection unit 104, a line-of-sight position detection unit 108, a sensor adjustment execution unit 110, and an image memory 112.
  • the line-of-sight position detection unit 108 includes a line-of-sight information acquisition unit 108a and an adjustment execution unit 108b.
  • the line-of-sight position detection unit 108 obtains pupil position coordinates (line-of-sight information) from the line-of-sight camera 300.
  • the adjustment execution unit 108b can switch between and execute a first calibration and a second calibration, described later.
  • the image memory 112 stores information related to applications used by the user in the HMD 1100. Based on the image information acquired from the image memory 112, the image processing unit 102 generates an image to be projected from the display unit 200 to the outside (user's eyes). As a result, the icon 952 described above can be displayed superimposed on an object in real space.
  • when a game application is used, an image such as a game character is generated by the image processing unit 102 and projected to the user's eyes.
  • when a cycling application is used, images such as the traveling speed, the traveling distance, and navigation information are generated by the image processing unit 102 and projected to the user's eyes.
  • the image processing unit 102 receives an external image in front of the glasses 900 captured by the front camera 400.
  • the external image captured by the front camera 400 is subjected to image processing in the image processing unit 102 and is projected to the outside (user's eyes) from the display unit 200 as necessary.
  • the image processing unit 102 can generate a grid indicating the position in the depth direction of the object in the external image based on the external image captured by the front camera 400.
  • the generated grid is displayed so as to be superimposed on an object in real space that the user sees through the lens 902. Thereby, the user can visually recognize an image in which a grid is superimposed on an object in real space viewed with the eyes, and can instantaneously obtain information such as the distance to the object.
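  • The following sketch illustrates one way such a depth grid could be derived. It assumes a depth map (in meters) is available from the front camera; the 0.5 m bin size and the function name are illustrative assumptions, not values from the patent.

```python
def depth_grid_labels(depth_map_m, bin_m: float = 0.5):
    """Quantize each depth sample to the nearest grid-plane distance in meters."""
    return [[round(d / bin_m) * bin_m for d in row] for row in depth_map_m]

print(depth_grid_labels([[1.2, 1.4], [3.9, 0.6]]))
# [[1.0, 1.5], [4.0, 0.5]]
```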
  • the line-of-sight position detection unit 108 can detect the position coordinates of the pupil based on the image of the user's pupil captured by the line-of-sight camera 300 and detect the direction of the line of sight of the user. As a result, the line-of-sight adjustment apparatus 1000 can perform processing based on the correspondence between the direction of the line of sight and the position of the image of the application.
  • the direction of the user's line of sight detected by the line-of-sight position detection unit 108 is input to the image processing unit 102.
  • since the line-of-sight position detection unit 108 detects the direction of the user's line of sight, operations based on the direction of the line of sight, as described above, can be performed.
  • the image processing unit 102 can detect a state in which the user's line of sight is facing a specific character displayed as a virtual object while the user is using a game application.
  • the gesture is detected by the front camera 400, and a certain action can be performed on the character using the detection of the gesture as a trigger.
  • the character when a user performs a predetermined gesture while the line of sight is facing a specific character, the character can be moved according to the movement direction of the line of sight or the movement direction of the gesture.
  • the image processing unit 102 generates an image in which the position of the character is moved, and the display unit 200 provides the generated image to the user's pupil.
  • the line-of-sight adjustment apparatus 1000 can perform processing according to the direction of the user's line of sight.
  • the reference position of the direction of the line of sight needs to be set correctly.
  • the line-of-sight position detecting unit 108 determines the reference point of the position coordinate of the eye image by performing calibration.
  • the image processing unit 102 generates a calibration image, and the generated calibration image is projected to the outside (user's eyes) by the display unit 200.
  • the first image (hereinafter also referred to as a first calibration image) is an image shown in FIG. 4 as an example, and is provided to the user when the use of the HMD 1100 is started, for example.
  • FIG. 4 is an image visually recognized by the user wearing the HMD 1100, and shows a display by the display unit 200, and an image of the real space is superimposed on the display by the display unit 200.
  • in the first calibration (hereinafter also referred to as strong calibration), a plurality of calibration marker points are set in the display screen of the display unit 200, and high-precision calibration is performed over a relatively long time.
  • the first image includes marker points 700, 702, 704, 706, and 708 at the four corners and the center of the display screen.
  • Calibration is performed by the user aligning the line of sight with the marker point 700 at the center of the first image and operating the operation input unit 600. Further, instead of using the operation input unit 600, the front camera 400 detects a predetermined gesture performed by the user using a hand or the like, so that calibration can be performed.
  • the calibration of the line-of-sight position is performed based on the image of the user's pupil captured by the line-of-sight camera 300.
  • the user's pupil image captured by the line-of-sight camera 300 is sent to the line-of-sight position detection unit 108.
  • the line-of-sight position detection unit 108 detects the position coordinates of the pupil based on the image of the user's pupil captured by the line-of-sight camera 300, and detects the direction of the line of sight of the user.
  • the position coordinates of the pupil are sent to the image processing unit 102.
  • FIG. 5 is a schematic diagram for explaining the calibration of the line-of-sight position, and shows an image captured by the line-of-sight camera 300.
  • calibration is performed while the user looks toward the center marker point 700 shown in FIG. 4. The position of the image of the pupil 800 at that time is taken as a reference position (hereinafter referred to as the center reference position), indicated by a dotted line in FIG. 5, and the calibration aligns the center reference position with the center of the screen 750 of the image captured by the line-of-sight camera 300.
  • thereafter, as shown in FIG. 5, the image processing unit 102 detects the position coordinates (X, Y) of the image of the moved pupil 800, with the position of the pupil 800 at the time of calibration as the origin O. The direction of the line of sight can thereby be detected.
  • similarly, the reference position of the image of the pupil 800 at the time of calibration with the line of sight facing the upper left marker point 702 (hereinafter referred to as the upper left reference position) is recorded, and the position coordinates of the subsequently moved image of the pupil 800 are detected with the upper left reference position as the origin.
  • in the same manner for the remaining marker points, the shift of the pupil image from each reference position is followed in accordance with the actual movement of the pupil, and the position coordinates of the moved pupil image are detected using each reference position as the origin.
  • which of the reference positions corresponding to the plurality of marker points 700, 702, 704, 706, and 708 is used as the origin is determined by selecting the reference position closest to the detected direction of the pupil.
  • at the time of calibration, an announcement such as "Please press the operation button while looking at the center marker point" or "Please press the operation button while looking at the marker point in the upper left" is generated by the image processing unit 102 and displayed on the display unit 200.
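  • The strong calibration above can be summarized in code. This is a minimal sketch under stated assumptions: pupil positions are 2D points in the eye-camera image, one reference position is recorded per marker point, and the nearest reference is chosen as the origin. All names are illustrative, not from the patent.

```python
import math

class StrongCalibration:
    def __init__(self) -> None:
        # marker name -> pupil position recorded while the user fixated that marker
        self.reference_positions: dict = {}

    def record(self, marker: str, pupil_xy) -> None:
        """Store the pupil position captured while the line of sight faces `marker`."""
        self.reference_positions[marker] = pupil_xy

    def gaze_offset(self, pupil_xy):
        """Pick the closest reference position as the origin and return the
        pupil displacement (X, Y) from it, as in FIG. 5."""
        marker, origin = min(
            self.reference_positions.items(),
            key=lambda kv: math.dist(kv[1], pupil_xy),
        )
        return marker, (pupil_xy[0] - origin[0], pupil_xy[1] - origin[1])

cal = StrongCalibration()
cal.record("center", (320.0, 240.0))
cal.record("upper_left", (280.0, 200.0))
print(cal.gaze_offset((285.0, 204.0)))  # -> ('upper_left', (5.0, 4.0))
```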
  • on the other hand, after calibration the HMD 1100 may shift downward under its own weight (hereinafter referred to as "hanging deviation").
  • when a hanging deviation occurs, the position of the pupil 800 cannot be accurately detected, and the direction of the line of sight cannot be determined correctly. Therefore, if such a shift occurs, it is necessary either to perform calibration again or for the user to correct the shift by hand.
  • FIGS. 6 and 7 are schematic diagrams for explaining a state in which the relative positional relationship between the HMD 1100 and the user's face is deviated when a misalignment occurs.
  • in each of FIGS. 6 and 7, the upper row shows the positional relationship between the user's head and the HMD 1100, and the lower row shows the screen 750 of the image captured by the line-of-sight camera 300.
  • FIG. 6 shows a case where there is no misalignment.
  • the pupil 800 is positioned at the origin O (the above-described center reference position) at the center of the screen.
  • with the calibration using the first image, the position coordinates of the image of the pupil 800 can be detected with high accuracy, but the calibration takes a relatively long time. If calibration using the first image were performed while the user is using an application, the user experience provided by the application would be impaired. Therefore, after the HMD 1100 is put into use, there are limits to performing calibration with the first image during application use. Likewise, if the user corrects the misalignment by hand while using an application, the user experience is similarly impaired.
  • for this reason, the present embodiment also uses a second calibration image, with which calibration can be performed more easily than with the first image.
  • the calibration using the second image is executed based on a smaller amount of information than the calibration using the first image.
  • in the second image, an animation or gesture that leads the user to actively look at the center of the screen is employed. The second image is then provided, for example, when the application is started, when the application is terminated, during execution of the application, or when the user performs a gesture, and calibration is carried out using the second image at that timing.
  • FIG. 8 is a schematic diagram showing an example of the second image, and shows an image visually recognized by the user wearing the HMD 1100.
  • a second image for calibration is displayed when the home screen of the application appears.
  • a virtual object (icon) 850 that is noticed by the user is displayed at the top of the screen, and the virtual object 850 is moved to the center.
  • by moving the virtual object 850 from the upper part of the screen to the center of the screen and bringing it to rest there, the user can be made to watch the virtual object 850 at the center of the screen. The line-of-sight position calibration is then performed at the timing when the object 850 stops at the center of the screen.
  • the calibration using the second image has fewer reference marker points than the calibration using the first image, so the amount of information at the time of calibration can be smaller than with the first image.
  • moreover, whereas in the calibration using the first image each marker point may be calibrated a plurality of times, in the calibration using the second image a single marker point is used, so the amount of information at the time of calibration can be reduced further.
  • the calibration with the second image also requires gazing for a shorter time, which likewise reduces the amount of information. In any of these respects, the amount of information at the time of calibration using the second image can be made smaller than that of the calibration using the first image.
  • FIG. 9 is a schematic diagram for explaining the calibration by the second image when a shift has occurred, and shows the screen 750 of the image captured by the line-of-sight camera 300.
  • in FIG. 9, the user's line of sight is directed toward the object 850, but because of the shift, before calibration the pupil 800 appears displaced above the origin O of the screen 750. Calibration is therefore performed taking into account the offset amount D caused by the shift, so that the position of the pupil 800 in the state of viewing the object 850 is positioned at the origin O of the screen 750.
  • when the direction of the line of sight is detected after this calibration, the position of the origin O has been adjusted by the offset amount D, so the direction of the line of sight can be obtained accurately. The user therefore does not need to calibrate intentionally, and since the device does not abruptly request calibration, an excellent user experience with the HMD 1100 can be provided.
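  • A minimal sketch of this single-point correction follows, assuming pupil positions are 2D points in the eye-camera image and that the user is gazing at the centered object 850 when the sample is taken. Function and variable names are illustrative.

```python
def weak_calibrate(origin_xy, pupil_xy_gazing_center):
    """Shift the origin O by the offset amount D observed while the user
    gazes at the centered calibration object (cf. FIG. 9)."""
    dx = pupil_xy_gazing_center[0] - origin_xy[0]
    dy = pupil_xy_gazing_center[1] - origin_xy[1]
    offset_d = (dx, dy)
    new_origin = (origin_xy[0] + dx, origin_xy[1] + dy)
    return new_origin, offset_d

origin, d = weak_calibrate((320.0, 240.0), (320.0, 228.0))
print(origin, d)  # (320.0, 228.0) (0.0, -12.0): the origin follows the slipped pupil
```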
  • FIG. 10 to 12 are schematic diagrams showing other examples of the second image for calibration, and show images visually recognized by the user wearing the HMD 1100.
  • also in the example illustrated in FIG. 10, the virtual object 860 provided by the display unit 200 and the external-world image (the user's hand) viewed by the user are superimposed.
  • in FIG. 10, the virtual object 860 is displayed in response to the user's gesture. That is, when the user raises a hand, the front camera 400 recognizes the gesture and the virtual object 860 is displayed. Since the user then pays attention to the virtual object 860 displayed at the center of the screen, the gaze position is calibrated at that timing.
  • FIG. 11 shows a screen immediately after an application is selected from the home screen.
  • the game title is displayed at the center of the screen, and the gaze position is calibrated at the timing when the direction of the user's line of sight is aligned with the virtual object 865 (the game title).
  • by combining this with the 9-axis sensor described later, the game can be operated using the line of sight and head movement without a sense of incongruity.
  • in the example of FIG. 12, the virtual object 870 (the characters "GAME OVER") is displayed at the center of the screen at the timing when the application is terminated, so that calibration is performed by guiding the user's line of sight to the center of the display screen.
  • examples of the timing when the application ends include the timing when the game ends and the timing when the user performs an end operation.
  • in addition, the countdown display 875 is shown together with the characters "GAME OVER", so that the user's attention can be focused on the countdown value.
  • FIG. 13 shows an example in which a virtual object 880 rendered with a three-dimensional representation is displayed in 3D when calibration using the second image is performed.
  • FIG. 14 shows the images displayed on the left and right display units 200 in the example of FIG. 13 and visually recognized by the user's left and right eyes.
  • FIG. 15 and FIG. 16 are schematic diagrams showing still another example of the second image.
  • various images can be used as the second image.
  • in the example of FIG. 15, a circle 890, which is a virtual object, is displayed at the center of the screen, and the diameter of the circle 890 is reduced as time passes.
  • because the user's attention follows the shrinking circle, the line of sight can be reliably drawn to the center of the screen.
  • a countdown display may also be shown at the center of the circle 890 as time passes.
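  • As a hypothetical sketch of such a shrinking marker, the function below returns the circle radius and a countdown string for a given elapsed time. The duration, radii, and names are illustrative assumptions, not values from the patent.

```python
import math

def circle_marker_state(elapsed_s: float, total_s: float = 3.0,
                        start_radius: float = 200.0, end_radius: float = 10.0):
    """Return (radius, countdown_text) for the given elapsed time."""
    t = min(max(elapsed_s / total_s, 0.0), 1.0)         # progress clamped to [0, 1]
    radius = start_radius + (end_radius - start_radius) * t
    countdown = max(0, math.ceil(total_s - elapsed_s))  # 3, 2, 1, 0
    return radius, str(countdown)

for s in (0.0, 1.0, 2.0, 3.0):
    print(circle_marker_state(s))
```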
  • FIG. 16 shows an example in which a virtual object 892 is displayed at the right end of the screen in order to make the user gaze at the right end of the screen as yet another example of the second image.
  • for example, in an application in which the user frequently gazes at the right edge of the screen, the calibration virtual object 892 is displayed at that right edge.
  • the display position of the second calibration image may be changed according to the type of application.
  • calibration can also be performed according to the result of user action recognition. For example, the 9-axis sensor 650 determines whether the user is running, walking, or stopped. While running, misalignment is considered likely to occur, so strong calibration is performed, or weak calibration is performed frequently. On the other hand, while the user is stopped, misalignment is unlikely to occur, so weak calibration is performed and its frequency is reduced. Further, when the 9-axis sensor 650 determines that the user is lying down (sleeping), misalignment is considered likely to occur, so strong calibration is performed, or weak calibration is performed frequently.
  • when the HMD 1100 includes sensors that detect heart rate, body temperature, sweating, and the like, calibration according to the detection values of these sensors can also be performed. For example, when the heart rate is high, it can be determined that the user is agitated and likely to look at the center of the screen; in this case, the second image for calibration is displayed at the center of the screen.
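  • The following sketch combines the activity-recognition and biosensor policies above into one illustrative mode selector. The thresholds, activity labels, and return convention are assumptions for illustration, not values given in the patent.

```python
def choose_calibration(activity: str, heart_rate_bpm=None):
    """Return (mode, frequent): mode is 'strong' or 'weak'."""
    if activity in ("running", "lying_down"):
        # misalignment likely: strong calibration, or weak but frequent
        return "strong", True
    if heart_rate_bpm is not None and heart_rate_bpm > 120:
        # an agitated user tends to look at the screen center:
        # show the second (weak) calibration image there
        return "weak", True
    if activity == "stopped":
        return "weak", False  # misalignment unlikely: weak and infrequent
    return "weak", True       # e.g. walking

print(choose_calibration("running"))        # ('strong', True)
print(choose_calibration("stopped", 70.0))  # ('weak', False)
```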
  • the position coordinates of the pupil 800 detected by the line-of-sight information acquisition unit 108a of the line-of-sight position detection unit 108 are sent to the offset amount detection unit 104.
  • the image processing unit 102 generates a second image for calibration, and sends the position coordinates of the virtual object 850 to the offset amount detection unit 104.
  • the offset amount detection unit 104 calculates the offset amount D shown in FIG. 9 from the position coordinates of the pupil 800 and the position coordinates of the virtual object 850.
  • the calculated offset amount D is sent to the line-of-sight position detection unit 108.
  • the adjustment execution unit 108b of the line-of-sight position detection unit 108 calculates the position coordinates of the pupil considering the offset amount D with respect to the detected position coordinates of the pupil 800, and executes calibration.
  • the pupil position coordinates calibrated by the adjustment execution unit 108b are sent to the image processing unit 102.
  • the image processing unit 102 receives commands from the user and executes processing according to them. For example, as described with reference to FIG. 2, when the icon 952 is selected based on the line-of-sight direction and voice, the icon 952 is selected if the line-of-sight direction obtained from the position coordinates of the pupil 800, taking the offset amount D into account, matches the position coordinates of the icon 952 and a voice command is acquired from the sound acquisition unit 500. After the icon is selected, the image processing unit 102 generates the image reflecting the selection of the icon 952 and displays it on the display unit 200.
  • the 9-axis sensor 650 comprises a 3-axis gyro sensor that detects angular velocities (rotational speeds) about the X, Y, and Z axes, a 3-axis acceleration sensor that detects acceleration in the X, Y, and Z directions, and a 3-axis geomagnetic sensor that detects geomagnetism to obtain the absolute directions of the X, Y, and Z axes. With the 9-axis sensor 650, the position and movement of the HMD 1100 can be detected.
  • FIG. 17 is a schematic diagram showing how the displayed virtual objects 860, 862, 864, 866, and 868 move to the left and right when the user wearing the HMD 1100 moves the face to the left and right.
  • initially, the virtual object 860 located at the center is displayed at the center of the display screen.
  • when the user turns the face to the right, the virtual object 864 positioned to the right of the virtual object 860 is displayed at the center of the display screen, the virtual object 860 moves to the left side of the display screen, and the virtual object 868 to the right of the now-central virtual object 864 appears on the right side of the display screen.
  • conversely, when the user turns the face to the left, the virtual object 862 positioned to the left of the virtual object 860 is displayed at the center of the display screen, the virtual object 860 moves to the right side of the display screen, and the virtual object 866 to the left of the now-central virtual object 862 appears on the left side of the display screen.
  • in this example, the virtual object 850 for calibration is displayed at the center of the display screen, and the direction of the line of sight is calibrated; at the same time, the 9-axis sensor 650 is calibrated.
  • the five virtual objects 860, 862, 864, 866, and 868 arranged in the left-right direction are superimposed on the calibration virtual object 850, and the line-of-sight position is calibrated while the virtual object 850 is displayed at the center of the display screen as described above.
  • for the 9-axis sensor 650, the initial position (origin) of the sensor with respect to horizontal movement of the HMD 1100 is calibrated so that the virtual object 860 located at the center of the five virtual objects 860, 862, 864, 866, and 868 is positioned at the center of the display screen. As a result, with the direction of the line of sight facing the front direction at the time of calibration, the object 860 located at the center can be displayed at the center of the display screen.
  • when the 9-axis sensor 650 is calibrated, the absolute direction references of the X, Y, and Z axes detected by the 9-axis sensor 650 from geomagnetism are sent to the image processing unit 102.
  • among these, the reference position with respect to the angular position around the vertical axis (Y axis) is sent to the offset amount detection unit 104.
  • the image processing unit 102 generates a second image for calibration, and sends the position coordinates of the virtual object 850 to the offset amount detection unit 104.
  • the offset amount detection unit 104 calculates the difference (offset amount) between the reference position with respect to the angular position around the vertical axis obtained by the 9-axis sensor 650 and the position coordinates of the virtual object 850.
  • the calculated offset amount is sent to the sensor adjustment execution unit 110 that calibrates the 9-axis sensor 650.
  • the sensor adjustment execution unit 110 then calibrates the reference position of the 9-axis sensor 650 with respect to the angular position around the vertical axis, taking the offset amount into account.
  • as a result, the reference position around the vertical axis detected by the 9-axis sensor 650 from geomagnetism coincides with the position coordinates of the virtual object 850. Therefore, as shown in FIG. 17, the object 860 located at the center can be displayed at the center of the display screen with the direction of the line of sight facing the front direction at the time of calibration.
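  • A minimal sketch of this yaw-reference calibration follows: the absolute yaw reported by the sensor while the user gazes at the centered object 850 becomes the new origin for head rotation. Class and method names are illustrative, not from the patent.

```python
class YawReference:
    def __init__(self) -> None:
        self.reference_yaw_deg = 0.0

    def calibrate(self, sensor_yaw_deg: float) -> None:
        """Take the current absolute yaw (from geomagnetism) as the origin."""
        self.reference_yaw_deg = sensor_yaw_deg

    def head_yaw(self, sensor_yaw_deg: float) -> float:
        """Yaw relative to the calibrated front direction, wrapped to [-180, 180)."""
        return (sensor_yaw_deg - self.reference_yaw_deg + 180.0) % 360.0 - 180.0

ref = YawReference()
ref.calibrate(95.0)         # user gazes at object 850 while facing "front"
print(ref.head_yaw(110.0))  # 15.0 -> scroll the object carousel accordingly
```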
  • FIG. 18 is a flowchart showing processing performed by the line-of-sight adjustment apparatus 1000.
  • FIGS. 19 to 25 are schematic diagrams illustrating real space objects that the user visually recognizes through the lens 902 and virtual objects that the user visually recognizes by display on the display unit 200 when the cycling application is used.
  • in step S10 of FIG. 18, it is determined whether or not the HMD 1100 is attached.
  • when the operation input unit 600 is operated and the power of the HMD 1100 is turned on, it is determined that the HMD 1100 is attached. If it is determined that the HMD 1100 is attached, the process proceeds to step S12.
  • in step S12, calibration using the first image (strong calibration) is performed.
  • FIG. 19 is a schematic diagram illustrating calibration using the first image. As shown in FIG. 19, the user is instructed to look directly at explicit points.
  • in addition to the marker points of FIG. 4, marker points 710 and 712 are added as explicit points for the user to look at directly.
  • in this way, calibration between the line-of-sight direction and the position of the user interface (UI) is performed.
  • FIG. 20 shows the marker points 700, 702, 704, 706, 708, 710, and 712 used when performing the calibration shown in FIG. 19.
  • FIG. 21 shows a case where the five marker points 700, 702, 704, 706, and 708 shown in FIG. 4 are used.
  • FIG. 22 shows an example in which marker points 714 and 716 are added in addition to the marker points 700, 702, 704, 706, 708, 710, and 712 shown in FIG. 20.
  • in step S14, it is determined whether or not the cycling application has been activated. If the cycling application has been activated, the process proceeds to step S16; otherwise, the process waits at step S14.
  • when the cycling application is activated, a plurality of virtual objects are displayed as shown in FIG. 23.
  • among them, a virtual object (map 960) and a virtual object (pacemaker 962) are shown.
  • the user can select either the virtual object (map 960) or the virtual object (pacemaker 962). If either virtual object is selected, it is determined in step S14 that the application has been activated.
  • in step S16, calibration using the second image is performed while the application is running. For example, if the user selects the virtual object (pacemaker 962), the virtual object (pacemaker 962) moves to the center of the screen, as shown in FIG. 24, and a countdown of a few seconds is displayed. At this timing, weak calibration is performed on the assumption that the user's line of sight is looking at the center.
  • in step S18, it is determined whether or not forced calibration is to be performed; if so, the process proceeds to step S20. For example, forced calibration is performed when, in the icon selection using the line of sight described with reference to FIG. 2, an icon is not selected even though the user gazes at it; when a large force is clearly applied to the HMD 1100 from the outside; or when the line-of-sight camera 300 or the 9-axis sensor 650 detects that the positions of the HMD 1100 and the eyeball are misaligned. In such cases, since the initial calibration is assumed to have shifted greatly during application use, calibration using the first image (strong calibration) is performed in step S20.
  • in step S22, it is determined whether or not the application has been terminated. If the application has been terminated, the process proceeds to step S24.
  • in step S24, calibration using the second image (weak calibration) is performed at the end of the application.
  • at this time, the virtual object 964 shown in FIG. 25 is displayed; by drawing the user's attention to the center of the screen, the user's line of sight is gathered there, and the accuracy of gaze detection is maintained by performing weak calibration.
  • if it is determined in step S22 that the application has not been terminated, the process returns to step S16.
  • in step S26, it is determined whether or not the user has removed the HMD 1100. If it is determined that the user has removed the HMD 1100, the process ends. On the other hand, if it is determined that the user has not removed the HMD 1100, the process returns to step S14. Whether the user has removed the HMD 1100 can be determined, for example, by the operation input unit 600 being operated and the power of the HMD 1100 being turned off. As described above, once the HMD 1100 is worn, an application is activated and display content corresponding to the application is displayed until the application is terminated. Along this sequence of display content, in this embodiment, calibration using the second image is performed after the application is started and after it is finished (steps S16 and S24), and, when forced calibration is required, calibration using the first image is performed (step S20).
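  • The flow of FIG. 18 can be condensed into the following sketch. The predicate methods on `hmd` are placeholders for the checks described above, and the loop structure is a simplification (for instance, weak calibration during execution is folded into the application loop).

```python
def run_hmd_session(hmd):
    while not hmd.is_worn():                     # S10: wait until the HMD is put on
        pass
    hmd.strong_calibration()                     # S12: first image (strong)
    while not hmd.is_removed():                  # S26: loop until the HMD is taken off
        if not hmd.app_started():                # S14: wait for an application
            continue
        hmd.weak_calibration()                   # S16: second image at app start
        while not hmd.app_terminated():          # S22
            if hmd.forced_calibration_needed():  # S18: e.g. a large shift detected
                hmd.strong_calibration()         # S20: strong calibration again
        hmd.weak_calibration()                   # S24: second image at app end
```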
  • among the components of the line-of-sight adjustment apparatus 1000, at least some may be provided on the cloud.
  • components such as the image processing unit 102, the offset amount detection unit 104, the line-of-sight position detection unit 108, and the image memory 112 can be provided on the cloud.
  • the calibration processing described above can then be performed by sending the information acquired by the line-of-sight camera 300, the front camera 400, the sound acquisition unit 500, the operation input unit 600, and the 9-axis sensor 650 mounted on the HMD 1100 to the cloud.
  • the image generated by the image processing unit 102 can be sent to the HMD 1100 and displayed on the display unit 200.
  • the line-of-sight adjustment apparatus 1000 may be mounted on a device other than the HMD 1100.
  • the line-of-sight adjustment apparatus 1000 may be mounted on a device such as a personal computer (PC) or a television receiver.
  • when a device such as a PC or a television receiver includes an imaging device that captures the user's face, gaze position calibration can be performed by installing the line-of-sight adjustment apparatus 1000 of this embodiment.
  • in that case, the display unit 200 corresponds to the display of the device. As a result, even in a device such as a PC or a television receiver, calibration can be performed when a deviation occurs in the direction of the user's line of sight.
  • (1) An information processing apparatus including: a line-of-sight information acquisition unit that acquires line-of-sight information when display information is viewed; and an adjustment execution unit that adjusts the positional relationship of the line-of-sight information with respect to the display information, the adjustment execution unit being capable of executing a first adjustment that adjusts the line-of-sight information with respect to the display information and a second adjustment that adjusts the line-of-sight information with respect to the display information based on a smaller amount of information than the first adjustment.
  • (2) The information processing apparatus according to (1), wherein the adjustment execution unit includes a mode selection unit that selects either the first adjustment or the second adjustment.
  • the adjustment execution unit may be capable of executing the first adjustment, which adjusts the direction of the line of sight with respect to a plurality of reference positions on the display screen corresponding to the display information, and the second adjustment, which adjusts the direction of the line of sight with respect to fewer reference positions on the display screen than the first adjustment.
  • the information processing apparatus may further include: an image processing unit that generates an image in which a marker indicating the reference position is displayed on the display screen; and an offset amount acquisition unit that acquires the offset amount between the reference position and the line-of-sight information acquired while the direction of the line of sight faces the reference position.
  • the image processing unit may display the markers at least at the center and the four corners of the display screen.
  • the information processing apparatus according to (3) may further include a sensor adjustment execution unit that adjusts the reference position on the display screen and the reference direction obtained by a position sensor during execution of the second adjustment.
  • the information processing apparatus may be mounted on a head-mounted display that provides the display information.
  • (19) An information processing method including: acquiring line-of-sight information when display information is viewed; and adjusting the positional relationship of the line-of-sight information with respect to the display information, by a first adjustment that adjusts the line-of-sight information with respect to the display information or by a second adjustment that adjusts the line-of-sight information based on a smaller amount of information than the first adjustment.
  • (20) A program for causing a computer to function as: means for acquiring line-of-sight information when display information is viewed; and means for adjusting the positional relationship of the line-of-sight information with respect to the display information, capable of executing a first adjustment that adjusts the line-of-sight information with respect to the display information and a second adjustment that adjusts the line-of-sight information based on a smaller amount of information than the first adjustment.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Hardware Design (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Optics & Photonics (AREA)
  • User Interface Of Digital Computer (AREA)
  • Controls And Circuits For Display Device (AREA)

Abstract

In order to allow calibration to be performed easily in a device that performs various kinds of processing by detecting the line-of-sight position, even when the positional relationship between the eye and the device changes, the information processing device according to the invention comprises a line-of-sight information acquisition unit (108a) for acquiring line-of-sight information when display information is viewed, and an adjustment execution unit (108b) for adjusting the positional relationship of the line-of-sight information with respect to the display information, the adjustment execution unit (108b) being capable of executing a first adjustment whereby the line-of-sight information is adjusted with respect to the display information, and a second adjustment whereby the line-of-sight information is adjusted with respect to the display information based on an amount of information smaller than that used for the first adjustment.
PCT/JP2016/070725 2015-09-25 2016-07-13 Information processing device, method, and program WO2017051595A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2015187962 2015-09-25
JP2015-187962 2015-09-25

Publications (1)

Publication Number Publication Date
WO2017051595A1 (fr) 2017-03-30

Family

ID=58386005

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2016/070725 WO2017051595A1 (fr) 2016-07-13 Information processing device, method, and program

Country Status (1)

Country Link
WO (1) WO2017051595A1 (fr)


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2001134371A * 1999-11-05 2001-05-18 Shimadzu Corp Gaze detection device
JP2001211403A * 2000-01-25 2001-08-03 Mixed Reality Systems Laboratory Inc Head-mounted display device and head-mounted display system
JP2010118047A * 2008-10-16 2010-05-27 Sharp Corp Communication terminal device, communication method, and communication program
JP2015101292A * 2013-11-27 2015-06-04 Denso Corp Gaze detection device and gaze detection method

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2017097274A (ja) * 2015-11-27 2017-06-01 Denso Corp Display correction device
US11953653B2 (en) 2017-12-10 2024-04-09 Magic Leap, Inc. Anti-reflective coatings on optical waveguides
CN112136152A (zh) * 2018-03-15 2020-12-25 Magic Leap, Inc. Image correction due to deformation of components of a viewing device
JP2021518574A (ja) * 2018-03-15 2021-08-02 Magic Leap, Inc. Image correction due to deformation of components of a viewing device
JP7344896B2 (ja) 2018-03-15 2023-09-14 Magic Leap, Inc. Image correction due to deformation of components of a viewing device
US11908434B2 (en) 2018-03-15 2024-02-20 Magic Leap, Inc. Image correction due to deformation of components of a viewing device
US11960661B2 (en) 2018-08-03 2024-04-16 Magic Leap, Inc. Unfused pose-based drift correction of a fused pose of a totem in a user interaction system
US12001013B2 (en) 2023-01-09 2024-06-04 Magic Leap, Inc. Pixel intensity modulation using modifying gain values

Similar Documents

Publication Publication Date Title
CN110647237B (zh) Gesture-based content sharing in artificial reality environments
JP7283506B2 (ja) Information processing device, information processing method, and information processing program
US10073516B2 (en) Methods and systems for user interaction within virtual reality scene using head mounted display
US10401953B2 (en) Systems and methods for eye vergence control in real and augmented reality environments
US9884248B2 (en) Display control method for head-mounted display (HMD) and image generation device
US10783712B2 (en) Visual flairs for emphasizing gestures in artificial-reality environments
US20130050069A1 (en) Method and system for use in providing three dimensional user interface
CN106796452B (zh) Head-mounted display apparatus and control method thereof, and computer-readable medium
US20210124415A1 (en) Eye-based activation and tool selection systems and methods
WO2014085789A1 (fr) Direct hologram manipulation using IMU
KR20220120649A (ko) Artificial reality system having a varifocal display of artificial reality content
WO2017051595A1 (fr) Information processing device, method, and program
WO2019155840A1 (fr) Information processing device, information processing method, and program
JP6220937B1 (ja) Information processing method, program for causing computer to execute the information processing method, and computer
WO2019150880A1 (fr) Information processing device and method, and program
WO2019142560A1 (fr) Information processing device for guiding gaze
WO2018146922A1 (fr) Information processing device, information processing method, and program
US11743447B2 (en) Gaze tracking apparatus and systems
JP2009018127A (ja) Learning support device and learning support method
JP2018195172A (ja) Information processing method, information processing program, and information processing device
JP6941130B2 (ja) Information processing method, information processing program, and information processing device
JP2018029969A (ja) Information processing method and program for causing computer to execute the information processing method
WO2023157332A1 (fr) Information processing apparatus and adjustment screen display method
EP4120052A1 (fr) Systèmes et procédés d'affichage montés sur la tête
Hitchin A Study of Eye Tracking as an Input Device for Immersive Environments

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16848385

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

NENP Non-entry into the national phase

Ref country code: JP

122 Ep: pct application non-entry in european phase

Ref document number: 16848385

Country of ref document: EP

Kind code of ref document: A1