CN110082909B - Head mounted display and method of operating the same - Google Patents


Info

Publication number
CN110082909B
CN110082909B (application CN201810243487.4A)
Authority
CN
China
Prior art keywords
distance
head
distance sensing
mounted display
sensing result
Prior art date
Legal status
Active
Application number
CN201810243487.4A
Other languages
Chinese (zh)
Other versions
CN110082909A (en)
Inventor
林家宇
黄昭世
陈志强
Current Assignee
Acer Inc
Original Assignee
Acer Inc
Priority date
Filing date
Publication date
Application filed by Acer Inc
Publication of CN110082909A
Application granted
Publication of CN110082909B
Legal status: Active


Classifications

    • G — PHYSICS
    • G02 — OPTICS
    • G02B — OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00 — Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/0093 — with means for monitoring data relating to the user, e.g. head-tracking, eye-tracking
    • G02B27/01 — Head-up displays
    • G02B27/017 — Head mounted
    • G02B27/0101 — Head-up displays characterised by optical features
    • G02B2027/014 — comprising information/image processing systems

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Optics & Photonics (AREA)
  • Controls And Circuits For Display Device (AREA)

Abstract

The invention provides a head-mounted display and an operating method thereof. The head-mounted display includes a first distance sensing element, a second distance sensing element, a third distance sensing element, and a processing circuit. The distance sensing elements are arranged on the lower surface of the head-mounted display. Each senses the distance from the lower surface to the ground along the same (or a different) direction, yielding a first, a second, and a third distance sensing result. These results serve as the basis for changing at least one display parameter of the video frame, enabling the head-mounted display to sense the motion state of the user's head.

Description

Head mounted display and method of operating the same
Technical Field
The present invention relates to a display device, and more particularly, to a Head-mounted display (HMD) and an operation method thereof.
Background
In recent years, virtual reality (VR) display devices have developed rapidly. A VR display device uses computer simulation to generate a virtual three-dimensional world and provides a visual simulation to the user, so that the user feels immersed in that world. In general, VR display devices may be implemented as head-mounted displays (HMDs). A VR display device allows the user to observe objects in three-dimensional space in real time and without restriction. When the user moves, the computer immediately performs complex computation and transmits an accurate image of the three-dimensional world back to the VR display device, giving the user a sense of presence. It follows that sensing the user's motion state in a timely manner is one of the key technologies of a VR display device.
Disclosure of Invention
The invention provides a head-mounted display and an operation method thereof, which can sense the motion state of the head of a user.
Embodiments of the present invention provide a head mounted display for displaying video frames. The head mounted display includes a first distance sensing element, a second distance sensing element, a third distance sensing element, and processing circuitry. The first distance sensing element is arranged in the left area of the lower surface of the head-mounted display. The first distance sensing element senses a distance along a first direction from the lower surface to the ground to obtain a first distance sensing result. The second distance sensing element is arranged in the middle area of the lower surface of the head-mounted display. The second distance sensing element senses the distance along a second direction from the lower surface to the ground to obtain a second distance sensing result. The third distance sensing element is arranged at the right area of the lower surface of the head-mounted display. The third distance sensing element senses the distance along a third direction from the lower surface to the ground to obtain a third distance sensing result. The processing circuit is coupled to the first distance sensing element, the second distance sensing element and the third distance sensing element to receive the first distance sensing result, the second distance sensing result and the third distance sensing result. The first distance sensing result, the second distance sensing result and the third distance sensing result are used as a basis for changing at least one display parameter of the video frame.
In an embodiment of the invention, the positions of the first distance sensing element, the second distance sensing element and the third distance sensing element on the lower surface of the head mounted display are not collinear.
In an embodiment of the invention, the first direction, the second direction and the third direction are parallel to each other.
In an embodiment of the invention, the first direction and the third direction are parallel to each other, and the second direction is not parallel to the first direction and the third direction.
In an embodiment of the invention, the first direction, the second direction and the third direction are not parallel to one another.
In an embodiment of the invention, the display parameter includes a viewing angle of the video frame or a persistence of vision of the video frame.
In an embodiment of the invention, the processing circuit determines the height of the user according to at least one of the first distance sensing result, the second distance sensing result, and the third distance sensing result. Wherein the height is used as a basis for changing the viewing angle of the video frame.
In an embodiment of the invention, the first distance sensing result represents a first distance, the second distance sensing result represents a second distance, and the third distance sensing result represents a third distance. And when the first distance is smaller than the third distance, judging that the head of the user wearing the head-mounted display rotates rightwards. And when the first distance is greater than the third distance, judging that the head of the user rotates leftwards.
In an embodiment of the invention, when the second distance is smaller than the first threshold, it is determined that the head of the user wearing the head-mounted display rotates downward. And when the first distance and the third distance are both larger than the second threshold value, judging that the head of the user rotates upwards.
In an embodiment of the invention, when the first distance, the second distance and the third distance are simultaneously decreased, it is determined that the user wearing the head-mounted display squats down.
In an embodiment of the invention, at least one of the first distance, the second distance and the third distance is used to calculate a rotational acceleration of a head of a user wearing the head-mounted display. Wherein the rotational acceleration is used as a basis for changing the persistence of vision of the video frame.
Embodiments of the present invention provide a method of operating a head mounted display. The head mounted display is used for displaying video frames. The operation method comprises the following steps: arranging a first distance sensing element on the left area of the lower surface of the head-mounted display; arranging a second distance sensing element in a middle area of the lower surface of the head-mounted display; arranging a third distance sensing element on the right area of the lower surface of the head-mounted display; sensing a distance along a first direction from the lower surface to the ground by a first distance sensing element to obtain a first distance sensing result; sensing the distance along a second direction from the lower surface to the ground by a second distance sensing element to obtain a second distance sensing result; sensing the distance along a third direction from the lower surface to the ground by a third distance sensing element to obtain a third distance sensing result; and using the first distance sensing result, the second distance sensing result and the third distance sensing result as a basis for changing at least one display parameter of the video frame.
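As a non-normative illustration, the operating method above can be sketched as a single sensing/update cycle. All function and variable names here are hypothetical and not part of the patent:

```python
# Hypothetical sketch of the claimed operating method: three downward-facing
# distance sensors feed a processing step that adjusts display parameters.

def operate_hmd(sense_left, sense_center, sense_right, change_display_params):
    """Run one sensing/update cycle of the head-mounted display.

    sense_* are callables returning the distance (in meters) from the
    lower surface of the HMD to the ground along each sensor's direction.
    """
    d_left = sense_left()      # first distance sensing result
    d_center = sense_center()  # second distance sensing result
    d_right = sense_right()    # third distance sensing result
    # The three results form the basis for changing at least one
    # display parameter of the video frame.
    return change_display_params(d_left, d_center, d_right)

# Example with fixed readings and a trivial placeholder update rule.
result = operate_hmd(lambda: 1.60, lambda: 1.62, lambda: 1.61,
                     lambda l, c, r: {"avg_height": (l + c + r) / 3})
```

The actual mapping from the three results to a display parameter is left to the processing circuit; the lambda above is only a stand-in.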
In an embodiment of the present invention, the step of changing at least one display parameter of the video frame includes: and judging the height of the user according to at least one of the first distance sensing result, the second distance sensing result and the third distance sensing result. Wherein the height is used as a basis for changing the viewing angle of the video frame.
In an embodiment of the invention, the first distance sensing result represents a first distance, the second distance sensing result represents a second distance, and the third distance sensing result represents a third distance. The step of changing at least one display parameter of the video frame comprises: when the first distance is smaller than the third distance, judging that the head of a user wearing the head-mounted display rotates rightwards; and when the first distance is greater than the third distance, judging that the head of the user rotates leftwards.
In an embodiment of the invention, the step of changing at least one display parameter of the video frame includes: when the second distance is smaller than the first threshold value, judging that the head of the user wearing the head-mounted display rotates downwards; and when the first distance and the third distance are both larger than the second threshold value, judging that the head of the user rotates upwards.
In an embodiment of the invention, the step of changing at least one display parameter of the video frame includes: and when the first distance, the second distance and the third distance are simultaneously reduced, judging that the user wearing the head-mounted display squats.
In an embodiment of the invention, the step of changing at least one display parameter of the video frame includes: calculating a rotational acceleration of the head of the user wearing the head mounted display using at least one of the first distance, the second distance, and the third distance. Wherein the rotational acceleration is used as a basis for changing the persistence of vision of the video frame.
Based on the above, embodiments of the present invention provide a head mounted display and a method of operating the same, which sense distance through three (or more) distance sensing elements disposed on the lower surface. According to the sensing results of the distance sensing elements, the head-mounted display can sense the motion state of the user's head and correspondingly change at least one display parameter of the video frame, such as the view angle (angle of view) or the persistence of vision.
In order to make the aforementioned and other features and advantages of the invention more comprehensible, embodiments accompanied with figures are described in detail below.
Drawings
Fig. 1 is a schematic circuit block diagram of a head-mounted display according to an embodiment of the invention.
Fig. 2 is a schematic perspective view illustrating arrangement positions of a first distance sensing element, a second distance sensing element and a third distance sensing element according to an embodiment of the invention.
Fig. 3 is a schematic front view illustrating a user wearing a head-mounted display.
Fig. 4 is a side view schematically illustrating a user wearing a head-mounted display.
Fig. 5 is a schematic diagram illustrating distance detection of a head-mounted display according to an embodiment of the invention.
Fig. 6 is a flowchart illustrating an operation method of a head-mounted display according to an embodiment of the invention.
Fig. 7 is a detailed flowchart illustrating the operation method shown in fig. 6 according to an embodiment of the invention.
FIG. 8 is a schematic diagram illustrating a situation where users of different heights wear a head mounted display.
Fig. 9 is a schematic diagram illustrating the wearing of the head mounted display to detect the height of a user.
Fig. 10 is a schematic diagram illustrating a front view of the head of a user.
Fig. 11 is a schematic diagram illustrating the head of a user looking down.
Fig. 12 is a schematic diagram illustrating the head of a user looking up.
Fig. 13 is a schematic diagram illustrating the head of a user looking forward (front view).
FIG. 14 is a schematic diagram illustrating the situation when the head of the user is rotating to the right.
FIG. 15 is a schematic diagram illustrating the head of the user rotating to the right to a stop point.
Fig. 16 is a schematic circuit block diagram of a head-mounted display according to another embodiment of the invention.
Fig. 17 is a detailed flowchart illustrating the operation method shown in fig. 6 according to another embodiment of the invention.
Description of reference numerals:
16: main unit
30: head of user
50: ground surface
81: adult
82: children's toy
100: head-mounted display
111: first distance sensing element
112: second distance sensing element
113: third distance sensing element
120: processing circuit
130: display element
1600: head-mounted display
1620: processing circuit
α, β: included angle
D1: First direction
D2: second direction
D3: third direction
DC1, DC2, DC3, DC4, DC5, DC 6: second distance
DL1, DL2, DL3, DL4, DL5, DL 6: first distance
DR1, DR2, DR3, DR4, DR5, DR 6: third distance
S610, S620, S630, S631, S632, S633, S634, S635, S636, S1710, S1720, S1730, S1740, S1750, S1760, S1770: Steps
View1, View 2: angle of view
x: distance between two adjacent plates
y: height of a person
Detailed Description
The term "coupled" as used throughout this specification, including the claims, may refer to any direct or indirect connection. For example, if a first device is coupled (or connected) to a second device, it should be construed that the first device may be directly connected to the second device, or indirectly connected to it through some other device or connection means. Further, wherever possible, the same reference numbers are used throughout the drawings and the description to refer to the same or like parts. Elements/components/steps that use the same reference numerals or the same terms in different embodiments may refer to one another for related descriptions.
Fig. 1 is a schematic circuit block diagram of a head-mounted display (HMD) 100 according to an embodiment of the present invention. The head mounted display 100 may display video frames for viewing by a user. The head-mounted display 100 includes a first distance sensing element 111, a second distance sensing element 112, a third distance sensing element 113, a processing circuit 120, and a display element 130. The display element 130 may be a liquid crystal display or another type of display, according to design requirements. The processing circuit 120 is coupled to and drives the display element 130, so that the display element 130 displays the video frame for the user to watch.
The first distance sensing element 111, the second distance sensing element 112 and/or the third distance sensing element 113 may be an optical distance sensor, an acoustic distance sensor or other types of distance sensors according to design requirements. The first distance sensing element 111 senses the distance to obtain a first distance sensing result, and provides the first distance sensing result to the processing circuit 120. The second distance sensing element 112 senses the distance to obtain a second distance sensing result, and provides the second distance sensing result to the processing circuit 120. The third distance sensing element 113 senses the distance to obtain a third distance sensing result, and provides the third distance sensing result to the processing circuit 120.
Fig. 2 is a schematic perspective view illustrating the arrangement positions of the first distance sensing element 111, the second distance sensing element 112 and the third distance sensing element 113 according to an embodiment of the invention. The first distance sensing element 111 is disposed at a left region of the lower surface of the head-mounted display 100, the second distance sensing element 112 is disposed at a middle region of the lower surface of the head-mounted display 100, and the third distance sensing element 113 is disposed at a right region of the lower surface of the head-mounted display 100, as shown in fig. 2. In the embodiment shown in fig. 2, the positions of the first distance sensing element 111, the second distance sensing element 112 and the third distance sensing element 113 on the lower surface of the head mounted display 100 are not collinear. In other embodiments, the positions of the first distance sensing element 111, the second distance sensing element 112 and the third distance sensing element 113 may be arranged on the same straight line according to design requirements.
Fig. 3 is a front view illustrating a user wearing the head mounted display 100. Fig. 4 is a side view schematically illustrating a user wearing the head mounted display 100. The head mounted display 100 is suitable for being worn on the head 30 of a user, as shown in fig. 3 and 4, so as to facilitate the user to view the video frames presented by the display element 130. The first distance sensing element 111, the second distance sensing element 112 and the third distance sensing element 113 are disposed on a lower surface of the head mounted display 100 so as to sense distances.
Fig. 5 is a schematic diagram illustrating distance detection of the head mounted display 100 according to an embodiment of the invention. Along a first direction D1 from the lower surface of the head mounted display 100 to the ground 50, the first distance sensing element 111 may sense a distance to obtain a first distance sensing result, wherein the first distance sensing result represents a first distance DL 1. Along a second direction D2 from the lower surface of the head mounted display 100 to the ground 50, the second distance sensing element 112 may sense a distance to obtain a second distance sensing result, wherein the second distance sensing result represents a second distance DC 1. Along a third direction D3 from the lower surface of the head mounted display 100 to the ground 50, the third distance sensing element 113 may sense a distance to obtain a third distance sensing result, wherein the third distance sensing result represents a third distance DR 1.
In the embodiment shown in fig. 5, the first direction D1, the second direction D2 and the third direction D3 may be parallel to each other. According to design requirements, in other embodiments, the first direction D1 and the third direction D3 may be parallel to each other, but the second direction D2 is not parallel to the first direction D1 and the third direction D3. In still other embodiments, the first direction D1, the second direction D2, and the third direction D3 may not be parallel to each other.
Fig. 6 is a flowchart illustrating an operation method of a head-mounted display according to an embodiment of the invention. The related description of fig. 6 also applies to the embodiment shown in fig. 16, which will be described later; that is, the head-mounted display 100 described below may be replaced with the head-mounted display 1600 shown in fig. 16. Referring to fig. 1 and 6, the user may wear the head-mounted display 100 on the head 30 (step S610). As described above, the first distance sensing element 111 is disposed at the left region of the lower surface of the head mounted display 100, the second distance sensing element 112 is disposed at the middle region of the lower surface of the head mounted display 100, and the third distance sensing element 113 is disposed at the right region of the lower surface of the head mounted display 100.
In step S620, the first distance sensing element 111 senses the distance along the first direction D1 from the lower surface of the head mounted display 100 to the ground 50 to obtain a first distance sensing result, the second distance sensing element 112 senses the distance along the second direction D2 from the lower surface of the head mounted display 100 to the ground 50 to obtain a second distance sensing result, and the third distance sensing element 113 senses the distance along the third direction D3 from the lower surface of the head mounted display 100 to the ground 50 to obtain a third distance sensing result.
The processing circuit 120 is coupled to the first distance sensing element 111, the second distance sensing element 112 and the third distance sensing element 113 for receiving the first distance sensing result, the second distance sensing result and the third distance sensing result. These results can be used as a basis for changing at least one display parameter of the video frame. In the embodiment shown in fig. 1, the processing circuit 120 may perform step S630. In step S630, the processing circuit 120 may use the first, second and third distance sensing results to correspondingly change the at least one display parameter of the video frame, for example, the view angle (angle of view) of the video frame and/or the persistence of vision of the video frame. Persistence of vision, also known as positive afterimage, is the phenomenon in which the image produced by light on the retina remains for a period of time after the light stimulus ceases.
Fig. 7 is a detailed flowchart illustrating the operation method shown in fig. 6 according to an embodiment of the invention. In the embodiment shown in fig. 7, the step S630 includes steps S631, S632, S633, S634, S635 and S636. Please refer to fig. 1 and fig. 7. In step S631, the first distance sensing element 111, the second distance sensing element 112, and the third distance sensing element 113 may feed back the distance sensing result to the processing circuit 120. According to design requirements, in some embodiments, the processing circuit 120 may perform one of the steps S632 and S634. In other embodiments, the processing circuit 120 may perform step S632 and step S634 at the same time.
In step S632, the processing circuit 120 can determine the height of the user according to at least one of the first distance sensing result, the second distance sensing result, and the third distance sensing result. Wherein the calculated height is used as a basis for changing the viewing angle of the video frame. The present embodiment does not limit the manner of changing the view angle of the video frame. For example, the changing manner of the view angle of the video frame may be an existing view angle algorithm (and stereo image algorithm) or other algorithms according to design requirements.
Fig. 8 is a schematic diagram illustrating a situation in which users of different heights wear the head-mounted display 100. The view angle of the adult 81 is View1, while that of the child 82 is View2. Because the height of the adult 81 differs from that of the child 82, the view angle View1 naturally differs from the view angle View2. When the adult 81 wears the head mounted display 100, the first distance sensing result of the first distance sensing element 111 represents the first distance DL1, the second distance sensing result of the second distance sensing element 112 represents the second distance DC1, and the third distance sensing result of the third distance sensing element 113 represents the third distance DR1. The processing circuit 120 may determine the height of the adult 81 according to at least one of the first distance DL1, the second distance DC1, and the third distance DR1. When the child 82 wears the head mounted display 100, the corresponding sensing results represent the first distance DL2, the second distance DC2, and the third distance DR2, and the processing circuit 120 may determine the height of the child 82 according to at least one of them.
Fig. 9 is a diagram illustrating how the head mounted display 100 detects the height of the user. The second distance sensing element 112 is used here as an illustrative example; the other distance sensing elements may be understood by analogy. Generally, the angle α between the second direction D2 of the second distance sensing element 112 and the vertical direction is known from the design. By the laws of geometry, the angle β between the second direction D2 and the horizontal direction is β = 90° − α. When the second distance sensing result of the second distance sensing element 112 indicates a distance x, the height y of the user is y = x·sin β.
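As a worked sketch of this geometry (the function name is illustrative, not from the patent): with a known mounting angle α between the sensing direction and the vertical, β = 90° − α and the height estimate is y = x·sin β.

```python
import math

def user_height_from_distance(x, alpha_deg):
    """Estimate the wearer's height y from a sensed distance x (meters)
    and the known angle alpha (degrees) between the sensing direction
    and the vertical, using beta = 90 - alpha and y = x * sin(beta)."""
    beta_deg = 90.0 - alpha_deg  # angle between sensing direction and horizontal
    return x * math.sin(math.radians(beta_deg))

# With alpha = 30 degrees and a sensed distance of 2.0 m, beta = 60 degrees.
height = user_height_from_distance(2.0, 30.0)
```

Note this treats the sensed distance as a straight line from the HMD's lower surface to the ground; any offset between the HMD and the top of the head is ignored in this sketch.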
Please refer to fig. 1 and fig. 7. Based on the height calculated in step S632, the processing circuit 120 may change/decide the viewing angle of the video frame accordingly (step S633). The present embodiment does not limit the implementation of step S633. For example, the viewing angle determining manner in step S633 may be an existing viewing angle algorithm or other algorithms according to design requirements.
Step S632 can also determine where the head of the user moves/rotates in addition to calculating the height of the user. Based on the motion/rotation state determined in step S632, the processing circuit 120 may change/decide the viewing angle of the video frame accordingly (step S633).
For example, fig. 10 is a schematic view illustrating the head of the user facing forward, fig. 11 is a schematic view illustrating the head of the user looking down, and fig. 12 is a schematic view illustrating the head of the user looking up. In the scenario shown in fig. 10, the head of the user is looking forward, so the first distance sensing result of the first distance sensing element 111 represents the first distance DL1, the second distance sensing result of the second distance sensing element 112 represents the second distance DC1, and the third distance sensing result of the third distance sensing element 113 represents the third distance DR1. In the scenario shown in fig. 11, the head of the user is looking down, so the corresponding sensing results represent the first distance DL3, the second distance DC3, and the third distance DR3. Compared to the scenario shown in fig. 10, because the head of the user is looking down, the first distance sensing result shortens from DL1 to DL3, the second distance sensing result shortens from DC1 to DC3, and the third distance sensing result shortens from DR1 to DR3. The second distance DC3 in particular is especially short, because the second distance sensing element 112 senses the user's body.
Accordingly, the processing circuit 120 may check the second distance sensing result of the second distance sensing element 112. When the second distance represented by the second distance sensing result is smaller than the first threshold, the processing circuit 120 may determine that the head of the user wearing the head mounted display 100 rotates downward. The first threshold may be determined according to design requirements.
In the scenario shown in fig. 12, the head of the user is looking upward, so the first distance sensing result of the first distance sensing element 111 represents the first distance DL4, the second distance sensing result of the second distance sensing element 112 represents the second distance DC4, and the third distance sensing result of the third distance sensing element 113 represents the third distance DR 4. Compared to the scenario shown in fig. 10, since the head of the user looks upward, the first distance sensing result increases from the first distance DL1 to DL4, the second distance sensing result increases from the second distance DC1 to DC4, and the third distance sensing result increases from the third distance DR1 to DR 4.
Therefore, the processing circuit 120 may check the first distance sensing result of the first distance sensing element 111 and/or the third distance sensing result of the third distance sensing element 113. When both the first distance represented by the first distance sensing result and the third distance represented by the third distance sensing result are greater than the second threshold, the processing circuit 120 may determine that the head of the user wearing the head mounted display 100 rotates upward. The second threshold may be determined according to design requirements.
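The downward/upward decision rules just described might be sketched as follows. This is a hypothetical sketch: the function name and threshold values are placeholders, since the patent leaves the thresholds to design requirements:

```python
def detect_pitch(d_left, d_center, d_right,
                 first_threshold=0.8, second_threshold=1.8):
    """Classify head pitch from the three distance sensing results (meters).

    - Head down: the center sensor sees the user's body, so the second
      distance drops below the first threshold.
    - Head up: both the first and third distances exceed the second threshold.
    Threshold values here are arbitrary placeholders.
    """
    if d_center < first_threshold:
        return "down"
    if d_left > second_threshold and d_right > second_threshold:
        return "up"
    return "forward"

# Example readings: a very short center distance indicates the head is down.
state = detect_pitch(1.4, 0.3, 1.4)
```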
For another example, fig. 13 is a schematic diagram illustrating the head of the user looking forward (front view), fig. 14 is a schematic diagram illustrating the head of the user partway through rotating to the right, and fig. 15 is a schematic diagram illustrating the head of the user rotated to the right to the stop point. In the scenario shown in fig. 13, the head of the user is looking forward, so the first distance sensing result of the first distance sensing element 111 represents the first distance DL1, the second distance sensing result of the second distance sensing element 112 represents the second distance DC1, and the third distance sensing result of the third distance sensing element 113 represents the third distance DR1. In the scenario shown in fig. 14, the head of the user has rotated to the right but the rotation angle is still small, so the corresponding sensing results represent the first distance DL5, the second distance DC5, and the third distance DR5. In general, when the head rotates to the right, it involuntarily tilts slightly toward its left side; the larger the rotation angle, the larger this sideways tilt. Therefore, compared to the scenario shown in fig. 13, because the head of the user rotates to the right, the first distance sensing result shortens from DL1 to DL5, while the second distance sensing result increases from DC1 to DC5 and the third distance sensing result increases from DR1 to DR5.
Accordingly, the processing circuit 120 may check the first distance sensing result of the first distance sensing element 111 and the third distance sensing result of the third distance sensing element 113. When the first distance represented by the first distance sensing result is less than the third distance represented by the third distance sensing result, the processing circuit 120 may determine that the head of the user wearing the head mounted display 100 rotates to the right. Similarly, when the first distance represented by the first distance sensing result is greater than the third distance represented by the third distance sensing result, the processing circuit 120 may determine that the head of the user wearing the head mounted display 100 rotates to the left.
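The left/right comparison just described can be sketched as a short function. This is a hypothetical sketch; the function name and the tolerance parameter are illustrative assumptions added to make the comparison robust to sensor noise:

```python
def detect_horizontal_rotation(d_left, d_right, tolerance=0.05):
    """Compare the left and right sensor distances (meters) to infer yaw direction.

    tolerance is an assumed dead band so that small noise does not register
    as a rotation; the patent simply compares the two distances.
    """
    if d_left < d_right - tolerance:
        return "right"   # left sensor reading shrinks when the head turns right
    if d_left > d_right + tolerance:
        return "left"    # mirror case: head turns left
    return "forward"
```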
For another example, when the first distance of the first distance sensing element 111, the second distance of the second distance sensing element 112 and the third distance of the third distance sensing element 113 decrease simultaneously, the processing circuit 120 may determine that the user wearing the head-mounted display squats. Since the user's viewing height has changed, the processing circuit 120 may change/determine the viewing angle of the video frame in step S633.
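The squat determination compares two consecutive samples of the three distances. A minimal hypothetical sketch (the function name, the tuple representation, and the epsilon margin are assumptions, not part of the patent):

```python
def detect_squat(prev, curr, epsilon=0.02):
    """Return True when all three distances decreased between consecutive samples.

    prev and curr are (d_left, d_center, d_right) tuples in meters;
    epsilon is an assumed margin to ignore sensor jitter.
    """
    return all(c < p - epsilon for p, c in zip(prev, curr))
```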
Please refer to fig. 1 and fig. 7. In step S634, the processing circuit 120 may calculate the rotational acceleration of the head of the user by using at least one of the first distance sensing result of the first distance sensing element 111, the second distance sensing result of the second distance sensing element 112, and the third distance sensing result of the third distance sensing element 113. Based on the rotational acceleration calculated in step S634, the processing circuit 120 may change/determine the persistence of vision of the video frame accordingly (step S635). The present embodiment does not limit the implementation of step S635. For example, step S635 may determine the persistence of vision using an existing persistence-of-vision algorithm or other algorithms according to design requirements.
An example of the calculation of the rotational acceleration in step S634 will be described below; however, the implementation of step S634 should not be limited thereto. For convenience of illustration, it is assumed that when the head of the user is looking forward, i.e. the situation shown in fig. 13, the first distance DL1 of the first distance sensing element 111 is 2 m (meters), the second distance DC1 of the second distance sensing element 112 is 1.5 m, and the third distance DR1 of the third distance sensing element 113 is 2 m. When the head of the user rotates to the right but the rotation angle is still small, i.e. the situation shown in fig. 14, the first distance DL5 of the first distance sensing element 111 is 1.8 m, the second distance DC5 of the second distance sensing element 112 is 1.65 m, and the third distance DR5 of the third distance sensing element 113 is 2.2 m. When the head of the user rotates to the right to the stop point, i.e. the situation shown in fig. 15, the first distance DL6 of the first distance sensing element 111 is 1.7 m, the second distance DC6 of the second distance sensing element 112 is 1.7 m, and the third distance DR6 of the third distance sensing element 113 is 2.3 m.
Assume that it takes 0.5 seconds from the scenario shown in fig. 13 to the scenario shown in fig. 14. During that interval, the third distance sensing result of the third distance sensing element 113 increases from the third distance DR1 (i.e., 2 m) to DR5 (i.e., 2.2 m). Therefore, the rotational speed of the head of the user is V = (distance)/(time) = (2.2 - 2)/0.5 = 0.4 m/s, and the rotational acceleration of the head is a = V/(time) = 0.4/0.5 = 0.8 m/s². Based on the rotational acceleration a calculated in step S634, the processing circuit 120 may correspondingly change/decide the persistence of vision of the video frame in step S635.
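The worked example above can be reproduced with a few lines of arithmetic. The function name is a hypothetical label; the formulas follow the patent's example, where the speed is assumed to ramp up from zero over the sampling interval:

```python
def rotation_metrics(d_start, d_end, dt):
    """Average rotational speed and acceleration from one sensor's distance change.

    d_start, d_end: the sensor's distance readings (meters) at the start/end of the interval.
    dt: elapsed time in seconds.
    """
    v = (d_end - d_start) / dt  # m/s, as in the patent's V = (distance)/(time)
    a = v / dt                  # m/s^2, assuming speed ramps from 0 over dt
    return v, a

# Values from the fig. 13 -> fig. 14 example: DR1 = 2 m, DR5 = 2.2 m, 0.5 s elapsed.
v, a = rotation_metrics(2.0, 2.2, 0.5)  # v ≈ 0.4 m/s, a ≈ 0.8 m/s²
```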
Please refer to fig. 1 and fig. 7. In step S636, the processing circuit 120 may adjust the content of the video frame by using the viewing angle determined in step S633 and the persistence of vision determined in step S635. The present embodiment does not limit the implementation of step S636. For example, step S636 may adjust/generate the video frame using an existing stereo image algorithm or other algorithms according to design requirements.
Fig. 16 is a block diagram of a head-mounted display 1600 according to another embodiment of the invention. The head mounted display 1600 may display video frames for viewing by a user. The head-mounted display 1600 includes a first distance sensing element 111, a second distance sensing element 112, a third distance sensing element 113, a processing circuit 1620, and a display element 130. The head-mounted display 1600, the first distance sensing element 111, the second distance sensing element 112, the third distance sensing element 113, the processing circuit 1620 and the display element 130 shown in fig. 16 can be analogized by referring to the related descriptions of the head-mounted display 100, the first distance sensing element 111, the second distance sensing element 112, the third distance sensing element 113, the processing circuit 120 and the display element 130 shown in fig. 1, and therefore, the descriptions thereof are omitted. That is, the descriptions related to the head mounted display 100 in fig. 2 to 6 and fig. 8 to 15 can also be applied to the head mounted display 1600 shown in fig. 16.
In the embodiment shown in fig. 16, the processing circuit 1620 of the head mounted display 1600 may establish a connection with the host 16 via a wireless (or wired) communication interface. Processing of greater computational complexity may be handed off to the host 16. For example, the host 16 may perform a stereo image algorithm to create a stereo image and provide the stereo image to the processing circuit 1620 via the wireless (or wired) communication interface. The processing circuit 1620 may drive the display element 130 to cause the display element 130 to display the stereoscopic image (video frame) for the user to view. Therefore, in the embodiment shown in fig. 16, the processing circuit 1620 may perform steps S610 and S620 shown in fig. 6, and the host 16 may perform step S630 shown in fig. 6. In step S630 shown in fig. 6, the host 16 may use the sensing results of the first distance sensing element 111, the second distance sensing element 112, and the third distance sensing element 113 to correspondingly change the display parameters of the video frame, for example, to correspondingly change the viewing angle and/or the persistence of vision of the video frame.
Fig. 17 is a detailed flowchart illustrating the operation method shown in fig. 6 according to another embodiment of the invention. In the embodiment shown in fig. 17, step S630 includes steps S1710, S1720, S1730, S1740, S1750, and S1760. Please refer to fig. 16 and 17. In step S1710, the first distance sensing element 111, the second distance sensing element 112 and the third distance sensing element 113 may feed back the distance sensing results to the processing circuit 1620, and the processing circuit 1620 may feed back the distance sensing results to the host 16 via a wireless (or wired) communication interface. According to design requirements, in some embodiments, the host 16 may perform one of the steps S1720 and S1740. In other embodiments, the host 16 may perform step S1720 and step S1740 simultaneously.
In step S1720, the host 16 may determine the height of the user according to at least one of the first distance sensing result, the second distance sensing result, and the third distance sensing result. Based on the height calculated in step S1720, the host 16 may change/determine the viewing angle of the video frame (step S1730). In addition to calculating the height of the user, the host 16 may also determine in step S1720 where the head of the user moves/rotates. Based on the motion/rotation status determined in step S1720, the host 16 may change/determine the viewing angle of the video frame (step S1730). Steps S1720 and S1730 shown in fig. 17 can be analogized with reference to the descriptions of steps S632 and S633 shown in fig. 7, and thus are not described again.
In step S1740, the host 16 may calculate the rotational acceleration of the head of the user by using at least one of the first distance sensing result of the first distance sensing element 111, the second distance sensing result of the second distance sensing element 112, and the third distance sensing result of the third distance sensing element 113. Based on the rotational acceleration calculated in step S1740, the host 16 may change/determine the persistence of vision of the video frame accordingly (step S1750). Steps S1740 and S1750 shown in fig. 17 can be analogized with reference to the descriptions of steps S634 and S635 shown in fig. 7, and thus are not described again.
In step S1760, the host 16 may adjust the content of the video frame by using the viewing angle determined in step S1730 and the persistence of vision determined in step S1750. Step S1760 shown in fig. 17 can be analogized with reference to the description of step S636 shown in fig. 7, and thus is not described again. Accordingly, based on the first distance sensing result of the first distance sensing element 111, the second distance sensing result of the second distance sensing element 112, and the third distance sensing result of the third distance sensing element 113 of the head-mounted display 1600, the host 16 may perform a stereoscopic image algorithm to establish a stereoscopic image (video frame).
In step S1770, the host 16 may transmit the adjusted stereoscopic images (video frames) to the processing circuit 1620 of the head mounted display 1600 via a wireless (or wired) communication interface. Processing circuit 1620 may drive display element 130 to cause display element 130 to display a stereoscopic image (video frame) for a user to view.
The processing circuit 120 and/or the processing circuit 1620 may be implemented by logic circuits (hardware) formed on an integrated circuit, or may be implemented in software executed by a central processing unit (CPU). In the latter case, the related functions of the processing circuit 120 and/or the processing circuit 1620 may be implemented as the programming code of software (i.e., a program). The software (i.e., program) can be read by a computer (or CPU) and can be recorded/stored in a read-only memory (ROM), a storage device (referred to as a "recording medium"), and/or a random access memory (RAM). The program is then read from the recording medium and executed by the computer (or CPU), thereby achieving the related functions. As the recording medium, a "non-transitory computer readable medium" may be used; for example, a tape, a disk, a card, a semiconductor memory, a programmable logic circuit, or the like may be used. The program may also be provided to the computer (or CPU) via any transmission medium (a communication network, broadcast wave, etc.), such as the Internet, wired communication, wireless communication, or other communication media.
In different application scenarios, the related functions of the processing circuit 120 and/or the processing circuit 1620 may be implemented as software, firmware, or hardware by using a general programming language (e.g., C or C++), a hardware description language (e.g., Verilog HDL or VHDL), or other suitable programming languages. For a hardware implementation, various logic blocks, modules, and circuits within one or more controllers, microcontrollers, microprocessors, application-specific integrated circuits (ASICs), digital signal processors (DSPs), field-programmable gate arrays (FPGAs), and/or other processing units may be used to implement or perform the functions described in the embodiments herein. Additionally, the apparatus and methods of the present invention may be implemented by a combination of hardware, firmware, and/or software.
In summary, embodiments of the invention provide a head-mounted display and an operating method thereof, which sense a distance through three (or more) distance sensing elements disposed on a lower surface. According to the sensing results of the distance sensing elements, the head-mounted display can sense the motion/rotation state of the head of the user, and further correspondingly change the display parameters of the video frame, such as correspondingly changing the visual angle or the persistence of vision of the video frame.
Although the present invention has been described with reference to the above embodiments, it should be understood that various changes and modifications can be made therein by those skilled in the art without departing from the spirit and scope of the invention.

Claims (20)

1. A head mounted display for displaying video frames, the head mounted display comprising:
a first distance sensing element, disposed in a left region of a lower surface of the head-mounted display, for sensing a distance along a first direction from the lower surface to a ground surface to obtain a first distance sensing result;
a second distance sensing element disposed in a middle region of the lower surface of the head-mounted display, for sensing a distance along a second direction from the lower surface to the ground to obtain a second distance sensing result;
a third distance sensing element disposed in a right region of the lower surface of the head-mounted display, for sensing a distance along a third direction from the lower surface to the ground to obtain a third distance sensing result; and
a processing circuit coupled to the first distance sensing element, the second distance sensing element and the third distance sensing element to receive the first distance sensing result, the second distance sensing result and the third distance sensing result, wherein the first distance sensing result, the second distance sensing result and the third distance sensing result are used as a basis for changing at least one display parameter of the video frame, the first distance sensing result represents a first distance, the second distance sensing result represents a second distance, the third distance sensing result represents a third distance, at least one of the first distance, the second distance, and the third distance being used to calculate a rotational acceleration of a head of a user wearing the head mounted display, and the rotational acceleration is used as a basis for changing the persistence of vision of the video frame.
2. The head-mounted display of claim 1, wherein the first distance sensing element, the second distance sensing element, and the third distance sensing element are not collinear at the location of the lower surface of the head-mounted display.
3. The head-mounted display of claim 1, wherein the first direction, the second direction, and the third direction are parallel to each other.
4. The head-mounted display of claim 1, wherein the first direction and the third direction are parallel to each other and the second direction is not parallel to the first direction and the third direction.
5. The head-mounted display of claim 1, wherein the first direction, the second direction, and the third direction are non-parallel to each other.
6. The head-mounted display of claim 1, wherein the at least one display parameter comprises a viewing angle of the video frame or a persistence of vision of the video frame.
7. The head-mounted display of claim 1, wherein the processing circuit determines a height of the user according to at least one of the first distance sensing result, the second distance sensing result, and the third distance sensing result, wherein the height is used as a basis for changing a viewing angle of the video frame.
8. The head-mounted display of claim 1, wherein
when the first distance is less than the third distance, it is determined that the head of the user wearing the head-mounted display rotates to the right, and
when the first distance is greater than the third distance, it is determined that the head of the user rotates to the left.
9. The head-mounted display of claim 1, wherein
when the second distance is less than a first threshold, it is determined that the head of the user wearing the head-mounted display rotates downward, and
when the first distance and the third distance are both greater than a second threshold, it is determined that the head of the user rotates upward.
10. The head-mounted display of claim 1, wherein when the first distance, the second distance, and the third distance decrease simultaneously, it is determined that the user wearing the head-mounted display squats.
11. A method of operating a head mounted display for displaying video frames, the method comprising:
disposing a first distance sensing element at a left region of a lower surface of the head mounted display;
disposing a second distance sensing element in a middle region of the lower surface of the head mounted display;
disposing a third distance sensing element on a right region of the lower surface of the head mounted display;
sensing a distance along a first direction from the lower surface to the ground by the first distance sensing element to obtain a first distance sensing result;
sensing, by the second distance sensing element, a distance along a second direction from the lower surface to the ground to obtain a second distance sensing result;
sensing, by the third distance sensing element, a distance along a third direction from the lower surface to the ground to obtain a third distance sensing result; and
using the first distance sensing result, the second distance sensing result, and the third distance sensing result as a basis for changing at least one display parameter of the video frame, wherein the first distance sensing result represents a first distance, the second distance sensing result represents a second distance, and the third distance sensing result represents a third distance, and the step of changing at least one display parameter of the video frame comprises:
calculating a rotational acceleration of a head of a user wearing the head-mounted display using at least one of the first distance, the second distance, and the third distance, wherein the rotational acceleration is used as a basis for changing persistence of vision of the video frame.
12. The method of operation of claim 11, wherein the first distance sensing element, the second distance sensing element, and the third distance sensing element are not collinear at the location of the lower surface of the head mounted display.
13. The method of operation of claim 11, wherein the first direction, the second direction, and the third direction are parallel to each other.
14. The method of claim 11, wherein the first direction and the third direction are parallel to each other and the second direction is not parallel to the first direction and the third direction.
15. The method of operation of claim 11, wherein the first direction, the second direction, and the third direction are non-parallel to one another.
16. The method according to claim 11, wherein the at least one display parameter includes a viewing angle of the video frame or a persistence of vision of the video frame.
17. The method of claim 11, wherein the step of changing at least one display parameter of the video frame comprises:
and determining the height of the user according to at least one of the first distance sensing result, the second distance sensing result and the third distance sensing result, wherein the height is used as a basis for changing the viewing angle of the video frame.
18. The method of claim 11, wherein the step of changing at least one display parameter of the video frame comprises:
when the first distance is less than the third distance, determining that the head of a user wearing the head-mounted display rotates to the right; and
when the first distance is greater than the third distance, determining that the head of the user rotates to the left.
19. The method of claim 11, wherein the step of changing at least one display parameter of the video frame comprises:
when the second distance is less than a first threshold, determining that the head of a user wearing the head-mounted display rotates downward; and
when the first distance and the third distance are both greater than a second threshold, determining that the head of the user rotates upward.
20. The method of claim 11, wherein the step of changing at least one display parameter of the video frame comprises:
when the first distance, the second distance, and the third distance decrease simultaneously, determining that the user wearing the head-mounted display squats.
CN201810243487.4A 2018-01-25 2018-03-23 Head mounted display and method of operating the same Active CN110082909B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
TW107102623 2018-01-25
TW107102623A TWI658291B (en) 2018-01-25 2018-01-25 Head-mounted display and operation method thereof

Publications (2)

Publication Number Publication Date
CN110082909A CN110082909A (en) 2019-08-02
CN110082909B true CN110082909B (en) 2021-08-17

Family

ID=67347910

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810243487.4A Active CN110082909B (en) 2018-01-25 2018-03-23 Head mounted display and method of operating the same

Country Status (2)

Country Link
CN (1) CN110082909B (en)
TW (1) TWI658291B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2609009B (en) * 2021-07-16 2024-01-03 Sony Interactive Entertainment Inc Head-mountable display systems and methods

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104246864A (en) * 2013-02-22 2014-12-24 索尼公司 Head-mounted display and image display device
CN205302188U (en) * 2016-01-15 2016-06-08 广东小天才科技有限公司 Wear -type virtual reality equipment
CN106200899A (en) * 2016-06-24 2016-12-07 北京奇思信息技术有限公司 The method and system that virtual reality is mutual are controlled according to user's headwork
CN206057684U (en) * 2016-08-29 2017-03-29 魏学华 A kind of virtual reality glasses for carrying out elevation carrection
CN106950534A (en) * 2017-02-27 2017-07-14 广东小天才科技有限公司 A kind of locus detection method, system and VR wearable devices
CN107050848A (en) * 2016-12-09 2017-08-18 深圳市元征科技股份有限公司 Somatic sensation television game implementation method and device based on body area network
CN107281750A (en) * 2017-05-03 2017-10-24 深圳市恒科电子科技有限公司 VR aobvious action identification methods and VR show
KR20170119272A (en) * 2016-06-20 2017-10-26 김종만 3D Motion Input Apparatus for Virtual Reality Device

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8593331B2 (en) * 2010-06-16 2013-11-26 Qualcomm Incorporated RF ranging-assisted local motion sensing
US20120327116A1 (en) * 2011-06-23 2012-12-27 Microsoft Corporation Total field of view classification for head-mounted display
JP6028357B2 (en) * 2012-03-22 2016-11-16 ソニー株式会社 Head mounted display and surgical system
US20150379772A1 (en) * 2014-06-30 2015-12-31 Samsung Display Co., Ltd. Tracking accelerator for virtual and augmented reality displays
US10451875B2 (en) * 2014-07-25 2019-10-22 Microsoft Technology Licensing, Llc Smart transparency for virtual objects
CN204945491U (en) * 2015-08-03 2016-01-06 众景视界(北京)科技有限公司 Wear-type holographic intelligent glasses

Also Published As

Publication number Publication date
TW201932912A (en) 2019-08-16
CN110082909A (en) 2019-08-02
TWI658291B (en) 2019-05-01

Similar Documents

Publication Publication Date Title
JP6548821B2 (en) How to optimize the placement of content on the screen of a head mounted display
CN107949819B (en) Apparatus and method for dynamic graphics rendering based on saccade detection
CN105915990B (en) Virtual reality helmet and using method thereof
EP3241088B1 (en) Methods and systems for user interaction within virtual or augmented reality scene using head mounted display
US20150245017A1 (en) Virtual see-through instrument cluster with live video
CN103380625A (en) Head-mounted display and misalignment correction method thereof
JP2017531221A (en) Countering stumbling when immersed in a virtual reality environment
JP6662914B2 (en) Mediation reality
EP3619685B1 (en) Head mounted display and method
US10591987B2 (en) Method, virtual reality apparatus and recording medium for fast moving in virtual reality
US9584781B2 (en) Image displaying method and electronic device
CN110082909B (en) Head mounted display and method of operating the same
EP3236306A1 (en) A method for rendering a 3d virtual reality and a virtual reality equipment for implementing the method
JP5096643B1 (en) Congestion capability determination device and method
JP6069601B2 (en) 3D underwater interactive device for rehabilitation
TWI620100B (en) Method, virtual reality apparatus and recording medium for displaying fast moving frames of virtual reality
TW201925986A (en) Virtual reality device and method for operating a virtual reality device
JP6872391B2 (en) Walking support system, walking support method, and program
JP2017111081A (en) Head-mounted display for checking transport and program for head-mounted display for checking transport
KR102286517B1 (en) Control method of rotating drive dependiong on controller input and head-mounted display using the same
CN217133475U (en) Be used for immersive VR to experience equipment
TWI669940B (en) Method and electronic device for displaying stereoscopic images
KR102337907B1 (en) Augmented reality smart glass device
KR102339044B1 (en) Method and device for controlling emergency situation
US20240045498A1 (en) Electronic apparatus

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant