WO2018112838A1 - Head-mounted display device and visual assistance method therefor - Google Patents


Info

Publication number
WO2018112838A1
WO2018112838A1 (PCT/CN2016/111510)
Authority
WO
WIPO (PCT)
Prior art keywords
display device
angle
image frame
area
motion
Prior art date
Application number
PCT/CN2016/111510
Other languages
English (en)
French (fr)
Inventor
赵聪
Original Assignee
深圳市柔宇科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳市柔宇科技有限公司 filed Critical 深圳市柔宇科技有限公司
Priority to US16/471,352 priority Critical patent/US20190333468A1/en
Priority to CN201680042725.4A priority patent/CN107980220A/zh
Priority to EP16924452.2A priority patent/EP3561570A1/en
Priority to PCT/CN2016/111510 priority patent/WO2018112838A1/zh
Publication of WO2018112838A1 publication Critical patent/WO2018112838A1/zh

Classifications

    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G5/00Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
    • G09G5/003Details of a display terminal, the details relating to the control arrangement of the display terminal and to the interfaces thereto
    • G09G5/005Adapting incoming signals to the display format of the display terminal
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • G02B27/017Head mounted
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • H04N5/262Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/50Constructional details
    • H04N23/54Mounting of pick-up tubes, electronic image sensors, deviation or focusing coils
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/63Control of cameras or camera modules by using electronic viewfinders
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/68Control of cameras or camera modules for stable pick-up of the scene, e.g. compensating for camera body vibrations
    • H04N23/681Motion detection
    • H04N23/6812Motion detection based on additional sensors, e.g. acceleration sensors
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • H04N5/262Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
    • H04N5/2624Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects for obtaining an image which is composed of whole input images, e.g. splitscreen
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • H04N5/262Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
    • H04N5/2628Alteration of picture size, shape, position or orientation, e.g. zooming, rotation, rolling, perspective, translation
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • H04N5/262Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
    • H04N5/265Mixing
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/18Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/18Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N7/183Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a single remote source
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • G02B27/0101Head-up displays characterised by optical features
    • G02B2027/0138Head-up displays characterised by optical features comprising image capture systems, e.g. camera
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • G02B27/0179Display position adjusting means not related to the information to be displayed
    • G02B2027/0187Display position adjusting means not related to the information to be displayed slaved to motion of at least a part of the body of the user, e.g. head, eye
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G2354/00Aspects of interface with display user

Definitions

  • The present invention relates to a display device, and more particularly to a head-mounted display device and a visual assistance method therefor.
  • Head-mounted display devices have gradually become popular because of their convenience and their ability to provide stereoscopic display and stereo sound.
  • In recent years, with the emergence of virtual reality (VR) technology, head-mounted display devices have become widely used as hardware support devices for VR technology.
  • Since the user cannot see outside after putting on a head-mounted display device, the device often has to be removed to see the external environment. For example, when the user needs to use an external input device, the headset must be removed to confirm the position of the input device; if the input device has buttons, the user may have to repeatedly take off the head-mounted display device to see which button to operate and then put it back on to view the displayed content, which is very inconvenient for input.
  • The embodiments of the invention disclose a head-mounted display device and a visual assistance method therefor, which can help the user view the external environment and facilitate operation while the head-mounted display device is worn.
  • The head-mounted display device disclosed in the embodiments of the present invention includes a display device and a processor, and further includes: an imaging device for capturing image frames of the surroundings of the head-mounted display device; and a motion sensor for detecting a motion angle of the head-mounted display device relative to a reference position. The processor is coupled to the display device, the imaging device, and the motion sensor, and controls the imaging device to turn on and capture image frames when the motion sensor detects that the head-mounted display device has moved downward relative to the reference position by a first preset angle. The processor further controls, according to the change in the motion angle detected by the motion sensor, the display device to display the area of the captured image frame corresponding to the motion angle.
  • The visual assistance method disclosed in the embodiments of the present invention is applied to a head-mounted display device that includes a display device, an imaging device, and a motion sensor. The method includes: when the motion sensor detects that the head-mounted display device has moved downward relative to the reference position by a first preset angle, controlling the imaging device to turn on and capture image frames; and, according to the change in the motion angle detected by the motion sensor, controlling the display device to display the area of the captured image frame corresponding to the motion angle.
  • The head-mounted display device of the present invention and its visual assistance method can turn on the imaging device to capture image frames when the head-mounted display device has moved downward by a first preset angle relative to the reference position, and display the corresponding area of the image frame according to the change in the motion angle. This assists the user wearing the head-mounted display device in observing the external environment and, as the displayed area changes, presents a gradual widening or narrowing of the field of view.
  • FIG. 1 is a schematic diagram of a head mounted display device in accordance with an embodiment of the present invention.
  • FIG. 2 is a functional block diagram of a head mounted display device in accordance with an embodiment of the present invention.
  • FIG. 3 to FIG. 6 are diagrams showing an example of the specific process by which the display device of the head-mounted display device updates its display as the motion angle changes in the first embodiment.
  • FIG. 7 is a schematic diagram of a display device of a head mounted display device displaying a complete image frame according to an embodiment of the invention.
  • FIG. 8 is a schematic diagram showing display of a display device of a head mounted display device in a second embodiment of the present invention.
  • FIG. 9 is a flow chart of a visual aiding method in accordance with an embodiment of the present invention.
  • FIG. 10 is a sub-flow diagram of step S902 of FIG. 9.
  • FIG. 1 is a schematic diagram of a head mounted display device 100 .
  • the head mounted display device 100 includes a display device 1 and an imaging device 2.
  • the display device 1 is for outputting a display screen.
  • the imaging device 2 is disposed on the display device 1 for capturing an image of the environment surrounding the head mounted display device 100.
  • the imaging device 2 is disposed in front of the display device 1 for capturing an image frame in front of the head mounted display device 100 .
  • FIG. 2 is a structural block diagram of the head mounted display device 100.
  • the head mounted display device 100 includes a motion sensor 3 and a processor 4 in addition to the display device 1 and the camera device 2.
  • The motion sensor 3 is disposed on the earphone device 6 or the display device 1, and detects the motion angle of the head-mounted display device 100 relative to a reference position.
  • The processor 4 is connected to the display device 1, the imaging device 2, and the motion sensor 3, and controls the imaging device 2 to turn on and capture image frames when the motion sensor 3 detects that the head-mounted display device 100 has moved downward relative to the reference position by a first preset angle.
  • the processor 4 controls the display device 1 to display an area corresponding to the motion angle in the captured image frame according to the change of the motion angle detected by the motion sensor 3.
  • In some embodiments, the motion sensor 3 detects a motion signal of the head-mounted display device 100 relative to the reference position, and the processor 4 determines from this signal the motion angle of the head-mounted display device 100 relative to the reference position.
  • Thus, the present application can automatically turn on the imaging device 2 as a visual aid when the head-mounted display device 100 has moved downward relative to the reference position by the first preset angle, without the user removing the head-mounted display device 100.
  • The processor 4 controls the display device 1 to display a larger area of the image frame when the motion angle detected by the motion sensor 3 increases, and a smaller area of the image frame when the motion angle decreases.
  • The reference position is the position of the head-mounted display device 100 when the user faces straight ahead after putting the device on.
  • the processor 4 controls the display device 1 to display an area corresponding to the motion angle in the captured image frame according to the change of the motion angle detected by the motion sensor 3.
  • As the motion angle detected by the motion sensor 3 increases, the processor 4 acquires the image frames captured by the imaging device 2 at the different motion angles and crops the area corresponding to the current motion angle.
  • FIG. 3 is a schematic diagram of the first image frame IM1 captured by the imaging device 2 at a first preset angle. That is, the first video image IM1 shown in FIG. 3 is an image screen captured when the head mounted display device 100 moves downward relative to the reference position to a first preset angle.
  • the first video image IM1 currently captured by the imaging device 2 includes a pen P1, a book B1, and a part of the input device T1.
  • FIG. 4 is a schematic diagram of a partial area of the first image frame IM1 displayed by the display device 1.
  • the processor 4 acquires a first image frame IM1 captured by the camera device 2 at a first preset angle such as an angle a.
  • the processor 4 controls the display device 1 to display a partial area A1 corresponding to the first preset angle in the first image frame IM1 currently captured by the imaging device 2.
  • The partial area A1 is an area of height h1, extending downward from the upper edge of the first image frame IM1, cropped from the first image frame IM1.
  • That is, the display device 1 displays a fraction h1/h of the first image frame IM1, where h is the full height of the frame.
  • FIG. 5 is a schematic diagram of the second image frame IM2 captured by the imaging device 2 at a specific angle greater than the first preset angle, such as the angle b. That is, the second image frame IM2 shown in FIG. 5 is the image frame captured when the head-mounted display device 100 has moved downward relative to the reference position to the specific angle b.
  • As the user's head moves downward, the head-mounted display device 100 and the imaging device 2 move downward with it; when they reach the specific angle, the image frame captured by the imaging device 2 changes to the second image frame IM2 shown in FIG. 5.
  • the second video frame IM2 currently captured by the imaging device 2 includes a lower half of the pen P1, a lower half of the book B1, all of the input device T1, and a portion of the hand Z1.
  • the processor 4 acquires a second image frame IM2 captured by the imaging device 2 at the specific angle.
  • FIG. 6 a schematic diagram of a partial area of the second image frame IM2 displayed by the display device 1.
  • the processor 4 controls the display device 1 to display a partial area A2 corresponding to the specific angle in the second video image IM2 currently captured by the imaging apparatus 2.
  • The partial area A2 is an area of height h2, extending downward from the upper edge of the second image frame IM2, cropped by the processor 4 from the second image frame IM2.
  • the display device 1 displays an area of the h2/h ratio in the second video image IM2.
  • the ratio of the height h2 of the area A2 to the height h1 of the area A1 is equal to the ratio of the specific angle to the first preset angle.
  • Let the first preset angle be a and the specific angle be b. Then h1/h2 = a/b, so h2 = h1 * b/a.
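The proportional relation above can be sketched in code. This is a minimal illustration with assumed names; the patent prescribes no implementation, and the frame model (a list of pixel rows) is purely hypothetical.

```python
# Minimal sketch (hypothetical names): the crop height grows linearly
# with the downward tilt angle, following h2 = h1 * b / a above.
def crop_height(angle, first_preset_angle, h1):
    """Height of the displayed region at the current tilt angle.

    h1 is the crop height at the first preset angle a; at angle b the
    height becomes h1 * b / a. Below angle a nothing is displayed.
    """
    if angle < first_preset_angle:
        return 0
    return h1 * angle / first_preset_angle

def crop_top_region(frame, height):
    """Crop a strip of the given height extending down from the top edge."""
    return frame[:int(height)]  # frame modeled as a list of pixel rows
```

For example, with a first preset angle of 15 degrees and h1 = 200 pixels, a tilt of 30 degrees yields a 400-pixel strip, matching h2 = h1 * b/a.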
  • the head mounted display device 100 further includes a memory 5, and in some embodiments, a corresponding relationship between the first preset angle a and the height h1 is prestored in the memory 5.
  • The processor 4 may determine, according to the corresponding relationship, the height h1 corresponding to the downward movement of the head-mounted display device 100 to the first preset angle, and crop from the first image frame IM1 the area of height h1 extending from its upper edge as the area A1.
  • the processor 4 intercepts, from the second image frame IM2, an area of height h2 extending from the upper side of the second image frame IM2 as the area A2, and controls display of the area intercepted in the second image picture IM2. A2.
  • In other embodiments, a plurality of angle-to-height correspondences may be pre-stored in the memory 5, including the correspondence between the first preset angle a and the height h1 as well as correspondences between other angles and heights.
  • the processor 4 may determine, according to the correspondence, a height corresponding to an angle that the head mounted display device 100 is currently moving relative to the reference position.
  • the processor 4 intercepts an area extending from the upper side of the image screen to a corresponding height from the current image frame, and controls to display the area intercepted from the current image frame.
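The pre-stored correspondence table described above can be sketched as follows. The table values are illustrative assumptions, not values from the patent.

```python
# Hypothetical pre-stored angle-to-height table, as kept in memory 5.
ANGLE_TO_HEIGHT = {15: 200, 20: 267, 25: 333, 30: 400}  # degrees -> pixels

def lookup_height(angle):
    """Height for the largest pre-stored angle not exceeding the current one."""
    candidates = [a for a in ANGLE_TO_HEIGHT if a <= angle]
    if not candidates:
        return 0  # below the first preset angle: nothing is displayed
    return ANGLE_TO_HEIGHT[max(candidates)]
```

A table lookup like this trades the strict linearity of h2 = h1 * b/a for freely chosen per-angle heights; the patent allows either form.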
  • When the motion angle detected by the motion sensor 3 indicates that the device has moved downward relative to the reference position to a second preset angle c, the processor 4 controls the display device 1 to display the complete image frame IM3 currently captured by the imaging device 2.
  • the second preset angle is greater than the specific angle and the first preset angle.
  • the specific angle is an arbitrary angle between the first preset angle a and the second preset angle c.
  • the screen displayed by the display device 1 is gradually enlarged, giving the user a feeling of gradually opening the field of view.
  • the image screen IM3 displays the entire input device T1 and the human hand Z1.
  • The user therefore does not need to take off the head-mounted display device 100: the relative position of the hand and the input device T1 can be seen in the image frame captured by the imaging device 2, so the hand can be moved to the input device T1 to operate it and enter the corresponding content, which greatly facilitates the user's operation.
  • the input device T1 may include a physical or virtual button K1.
  • From the image captured by the imaging device 2, the user can clearly see the position of the button K1 and move the hand Z1 to the corresponding button K1 to operate it.
  • As the motion angle detected by the motion sensor 3 decreases, the processor 4 acquires the image frames captured by the imaging device 2 at the different motion angles and crops from each frame the area corresponding to the current motion angle.
  • The upward movement of the head-mounted display device 100 is the reverse of its downward movement.
  • For example, when the imaging device 2 is currently capturing the second image frame IM2 shown in FIG. 5, the processor 4 controls the display device 1 to display the area shown in FIG. 6; when the motion angle returns to the first preset angle, the processor 4 controls the display device 1 to display the partial area A1 of height h1, corresponding to the first preset angle, in the first image frame IM1 shown in FIG. 3.
  • the screen displayed by the display device 1 also becomes smaller, giving the user a feeling of gradually closing the field of view.
  • When the motion angle detected by the motion sensor 3 relative to the reference position decreases to less than the first preset angle a, the processor 4 turns off the imaging device 2 and/or controls the display device 1 to stop displaying the image frame.
  • In another embodiment, the processor 4 controls the display device 1 to display the area of the image frame corresponding to the motion angle, according to the change in the motion angle detected by the motion sensor 3, in the following manner.
  • While the motion angle detected by the motion sensor 3 increases, the processor 4 acquires the image frames captured by the imaging device 2 at the different motion angles, crops different regions from the different image frames, and controls the display device 1 to display the currently cropped region stitched together with all previously cropped regions. While the motion angle decreases, the processor 4 acquires the image frames captured at the different motion angles, crops from the frame captured at the current motion angle the region corresponding to that angle, and controls the display device 1 to display the cropped region.
  • the first video image IM1 currently captured by the imaging device 2 includes a pen P1, a book B1, and a part of the input device T1.
  • The processor 4 controls the display device 1 to display the partial area A1 corresponding to the first preset angle in the first image frame IM1 currently captured by the imaging device 2.
  • The partial area A1 is the area of height h1 at the upper side of the first image frame IM1, cropped from the first image frame IM1.
  • The second embodiment differs in the area A3 that the processor 4 controls the display device 1 to display when the head-mounted display device 100 has moved downward relative to the reference position to a specific angle b.
  • the image screen captured by the image pickup apparatus 2 is the second image screen IM2 shown in FIG. 5.
  • The processor 4 determines the portion of the area A2 that overlaps the area A1 displayed when the device moved to the first preset angle in FIG. 4, removes that overlapping portion from the area A2, and obtains a region of height h2 - h1 as shown in (2) of FIG. 8.
  • The processor 4 then stitches the area A1 displayed when moving to the first preset angle, shown in (1) of FIG. 8, with the remainder of the area A2 after its overlap with A1 has been removed, shown in (2) of FIG. 8, and obtains the stitched area A3 shown in (3) of FIG. 8.
  • The upper part of the stitched area A3 is the area of height h1 taken from the first image frame IM1, and the lower part of the stitched area A3 is taken from the second image frame IM2.
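The composition of A3 can be sketched as follows. The function name and the model of a frame as a list of rows are assumptions for illustration only.

```python
# Sketch of the second embodiment's stitching: the top of A3 is the
# height-h1 strip A1 from the first frame; the bottom is rows h1..h2 of
# the second frame (i.e. area A2 with its overlap with A1 removed).
def stitch_a3(frame1, frame2, h1, h2):
    """Return A3: A1 from frame1 stacked over the non-overlapping rows of A2."""
    a1 = frame1[:h1]         # strip of height h1 from the top of IM1
    a2_rest = frame2[h1:h2]  # rows h1..h2 of IM2 (height h2 - h1)
    return a1 + a2_rest
```

The stitched result has height h1 + (h2 - h1) = h2, the same total height as the area A2 of the first embodiment, but its upper portion is frozen from the earlier frame.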
  • In other respects, the display control by the processor 4 is the same as in the first embodiment and is not repeated here.
  • The processor 4 likewise controls the display device 1 to display the entire image frame currently captured.
  • the head mounted display device 100 includes the input device T1, and the input device T1 is connected to the display device 1 and/or the earphone device 6 by wire or wirelessly.
  • the input device T1 may be an input device having a physical button or may be an input device such as a touch screen or a touch pad on which a virtual button is displayed.
  • the input device T1 can be an external device that is adapted to the head mounted display device 100 and is not part of the head mounted display device 100.
  • The imaging device 2 may be a camera, and the captured image frames may be still images taken at predetermined times or at preset motion angles, or may be video images.
  • In some embodiments, once the motion angle is greater than or equal to the first preset angle, the processor 4 may control the display device 1 to display the partial area of the image frame in real time, updating the displayed area as the motion angle changes.
  • FIGS. 3 to 8 are merely exemplary; with the assistance of the imaging device 2, the user can view the external scene, find the input device T1 and other needed items, or converse with other people, and so on.
  • In some embodiments, the head-mounted display device further includes an earphone device 6 for outputting sound. The earphone device 6 may include an earphone loop 61 and two earphones 62, the two earphones 62 being connected by the earphone loop 61.
  • the processor 4 can be a central processing unit, a microprocessor, a microcontroller, a single chip microcomputer, a digital signal processor, or the like.
  • the memory 5 can be a flash memory card, a solid state memory, a random access memory or the like.
  • the motion sensor 3 may be a gyroscope or an acceleration sensor or the like.
  • the head mounted display device 100 can be a head mounted display device such as smart glasses or a smart helmet.
  • FIG. 9 is a flowchart of a visual assistance method according to an embodiment of the present invention. The method is applied to the aforementioned head-mounted display device 100 and comprises the following steps:
  • The processor 4 controls the imaging device 2 to turn on and capture image frames when the motion sensor 3 detects that the head-mounted display device 100 has moved downward relative to the reference position by a first preset angle (S901).
  • The processor 4 controls the display device 1 to display the area of the captured image frame corresponding to the motion angle, according to the change in the motion angle detected by the motion sensor 3 (S902). Specifically, the processor 4 controls the display device 1 to display a larger area of the image frame when the motion angle detected by the motion sensor 3 increases, and a smaller area when the motion angle decreases.
  • In some embodiments, the visual assistance method further comprises the following steps:
  • The processor 4 determines whether the motion angle detected by the motion sensor 3 is greater than or equal to the second preset angle (S903). If yes, step S904 is performed; if no, the method returns to step S902.
  • The processor 4 controls the display device 1 to display the complete image frame captured by the imaging device 2 (S904).
  • Steps S903 and S904 may also be merged into step S902, in which case displaying the complete image frame means that the area displayed in step S902 is the entire image frame.
  • In some embodiments, the visual assistance method further comprises the following steps:
  • The processor 4 determines whether the motion angle detected by the motion sensor 3 is less than the first preset angle (S905). If yes, step S906 is performed; if no, the method returns to step S902.
  • the step S905 can also be performed before the step S903.
  • The processor 4 turns off the imaging device 2 and/or controls the display device 1 to stop displaying the image frames captured by the imaging device 2 (S906).
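Steps S901 to S906 amount to a simple threshold-driven control loop, which might be sketched as follows. The class name and the string return values are illustrative assumptions, not from the patent.

```python
# Sketch of the FIG. 9 flow: angle thresholds decide the camera state and
# what the display shows. Angles are downward tilt from the reference position.
class VisualAidController:
    def __init__(self, first_preset_angle, second_preset_angle):
        self.a = first_preset_angle   # camera on/off threshold (S901, S905)
        self.c = second_preset_angle  # full-frame threshold (S903)
        self.camera_on = False

    def update(self, angle):
        """Return what the display device should show at this motion angle."""
        if angle < self.a:            # S905 yes -> S906: camera off, no display
            self.camera_on = False
            return "off"
        self.camera_on = True         # S901: turn the camera on
        if angle >= self.c:           # S903 yes -> S904: show the full frame
            return "full_frame"
        return "partial_area"         # S902: show the angle-dependent area
```

Calling `update` on every sensor reading reproduces the flowchart's loop: the display opens gradually between angles a and c, shows the full frame beyond c, and shuts off below a.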
  • step S902 includes:
  • the processor 4 acquires the first image frame IM1 taken by the imaging device 2 at the first preset angle a (S9021).
  • the processor 4 controls the display device 1 to display the area A1 having the first height h1 in the first image screen IM1 (S9022).
  • the memory 5 of the head mounted display device 100 prestores a correspondence between the first preset angle a and the height h1.
  • The processor 4 determines the height h1 corresponding to the first preset angle a according to the corresponding relationship, crops from the first image frame IM1 the area of height h1 extending from its upper edge as the corresponding area A1, and controls display of the area A1 cropped from the first image frame IM1.
  • the processor 4 acquires a second image frame IM2 captured by the imaging device 2 at a specific angle b greater than the first predetermined angle (S9023).
  • the processor 4 controls the display device 1 to display an area A2 corresponding to the specific angle b in the second image screen IM2 currently captured by the image capturing apparatus 2 (S9024).
  • In some embodiments, a plurality of angle-to-height correspondences are pre-stored in the memory 5 of the head-mounted display device 100. The processor 4 determines the height h2 corresponding to the specific angle b according to the correspondence, crops from the second image frame IM2 the area of height h2 extending from its upper edge as the corresponding area A2, and controls display of the area A2 cropped from the second image frame IM2.
  • In another embodiment, step S902 includes: while the motion angle detected by the motion sensor 3 increases, the processor 4 acquires the image frames captured by the imaging device 2 at the different motion angles, crops different areas from the different image frames, and controls the display device 1 to display the currently cropped area stitched together with all previously cropped areas; while the motion angle detected by the motion sensor 3 decreases, the processor 4 acquires the image frames captured at the different motion angles, crops from the frame captured at the current motion angle the area corresponding to that angle, and controls the display device 1 to display the cropped area.
  • By using the imaging device 2 as a visual assistance device that can be turned on when the user looks down for something, the method assists the user in finding objects such as an input device.

Abstract

A head-mounted display device (100) includes a display device (1), a camera device (2), and a motion sensor (3). Also disclosed is a visual aiding method, which includes: when the motion sensor detects that the head-mounted display device has moved downward relative to a reference position by a first preset angle, turning on the camera device to capture image frames; and, according to changes in the motion angle detected by the motion sensor, controlling the display device to display the area of the captured image frame that corresponds to the motion angle. The head-mounted display device and visual aiding method can provide visual assistance by means of the camera device.

Description

Head-Mounted Display Device and Visual Aiding Method Thereof

Technical Field

The present invention relates to a display device, and in particular to a head-mounted display device and a visual aiding method thereof.
Background

Head-mounted display devices, being convenient and capable of stereoscopic display and stereo sound, have become increasingly popular. In recent years, with the emergence of virtual reality (VR) technology, head-mounted display devices have found even wider use as the hardware supporting VR. Because a user wearing a head-mounted display device cannot see the outside surroundings, the device often has to be taken off to view the environment. For example, when the user needs to use an external input device, the helmet must be removed to locate and confirm the position of the input device; if the input device has keys, the user may even have to repeatedly take off the head-mounted display device to see which key to operate and then put it back on to view the displayed content, which is highly inconvenient for input.
Summary of the Invention

Embodiments of the present invention disclose a head-mounted display device and a visual aiding method thereof, which help a user observe the external environment while wearing the head-mounted display device, facilitating the user's operations.

The head-mounted display device disclosed in an embodiment of the present invention includes a display device and a processor, and further includes: a camera device for capturing image frames around the head-mounted display device; and a motion sensor for detecting a motion angle of the head-mounted display device relative to a reference position. The processor is connected to the display device, the camera device, and the motion sensor, and is configured to turn on the camera device to capture image frames when the motion sensor detects that the head-mounted display device has moved downward relative to the reference position by a first preset angle; the processor is further configured to control, according to changes in the motion angle detected by the motion sensor, the display device to display the area of the captured image frame that corresponds to the motion angle.

The visual aiding method disclosed in an embodiment of the present invention is applied to a head-mounted display device including a display device, a camera device, and a motion sensor. The method includes: when the motion sensor detects that the head-mounted display device has moved downward relative to the reference position by a first preset angle, turning on the camera device to capture image frames; and, according to changes in the motion angle detected by the motion sensor, controlling the display device to display the area of the captured image frame that corresponds to the motion angle.

With the head-mounted display device and visual aiding method of the present invention, the camera device is turned on to capture image frames when the head-mounted display device has moved downward relative to the reference position by the first preset angle, and the corresponding area of the image frame is displayed as the motion angle changes. This helps the user wearing the head-mounted display device observe the external environment and, through the changing displayed area, presents a gradually changing field of view.
Brief Description of the Drawings

To explain the technical solutions in the embodiments of the present invention more clearly, the drawings needed for the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present invention; those of ordinary skill in the art may derive other drawings from them without creative effort.

FIG. 1 is a schematic diagram of a head-mounted display device according to an embodiment of the present invention.

FIG. 2 is a functional block diagram of a head-mounted display device according to an embodiment of the present invention.

FIGS. 3-6 illustrate the specific process in which the display device of the head-mounted display device of the first embodiment updates its display as the motion angle changes.

FIG. 7 is a schematic diagram of the display device of a head-mounted display device displaying a complete image frame according to an embodiment of the present invention.

FIG. 8 is a schematic diagram of the display device of the head-mounted display device of the second embodiment displaying at a certain motion angle.

FIG. 9 is a flowchart of a visual aiding method according to an embodiment of the present invention.

FIG. 10 is a sub-flowchart of step S902 in FIG. 9.
Detailed Description

The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art from the embodiments herein without creative effort fall within the scope of the present invention.
Referring to FIG. 1, a schematic diagram of a head-mounted display device 100 is shown. As shown in FIG. 1, the head-mounted display device 100 includes a display device 1 and a camera device 2. The display device 1 outputs the display picture. The camera device 2 is disposed on the display device 1 and captures image frames of the surroundings of the head-mounted display device 100. In some embodiments, as shown in FIG. 1, the camera device 2 is disposed at the front of the display device 1 and captures image frames in front of the head-mounted display device 100.

Referring also to FIG. 2, a structural block diagram of the head-mounted display device 100, the head-mounted display device 100 further includes a motion sensor 3 and a processor 4 in addition to the display device 1 and the camera device 2.

The motion sensor 3 is disposed on the earphone device 6 or the display device 1 and detects the motion angle of the head-mounted display device 100 relative to a reference position.

The processor 4 is connected to the display device 1, the camera device 2, and the motion sensor 3. When the motion sensor 3 detects that the head-mounted display device 100 has moved downward relative to the reference position by a first preset angle, the processor 4 turns on the camera device 2 to capture image frames. According to changes in the motion angle detected by the motion sensor 3, the processor 4 controls the display device 1 to display the area of the captured image frame that corresponds to the motion angle. In some embodiments, the motion sensor 3 generates a sensing signal corresponding to the motion angle of the head-mounted display device 100 relative to the reference position; after receiving the sensing signal, the processor 4 determines the motion angle of the head-mounted display device 100 relative to the reference position detected by the motion sensor 3.

Thus, the present application can automatically turn on the camera device 2 as a visual aid when the head-mounted display device 100 has moved downward relative to the reference position by the first preset angle, without the head-mounted display device 100 having to be taken off.

When the motion angle detected by the motion sensor 3 increases, the processor 4 controls the display device 1 to display a larger area of the image frame; when the motion angle detected by the motion sensor 3 decreases, the processor 4 controls the display device 1 to display a smaller area of the image frame.

The reference position is the position of the head-mounted display device 100 when the user, wearing the head-mounted display device 100, faces straight ahead.
In the first embodiment, the processor 4 controlling, according to changes in the motion angle detected by the motion sensor 3, the display device 1 to display the area of the captured image frame corresponding to the motion angle includes: while the motion angle detected by the motion sensor 3 increases or decreases, the processor 4 acquires the image frames captured by the camera device 2 at the different motion angles, crops from the frame captured at the current motion angle the area corresponding to that angle, and controls the display device 1 to display the cropped area.

Referring also to FIGS. 3-6, which illustrate the specific display process controlled by the processor 4 in the first embodiment. FIG. 3 is a schematic diagram of the first image frame IM1 captured by the camera device 2 at the first preset angle, i.e., the frame captured when the head-mounted display device 100 has moved downward relative to the reference position by the first preset angle.

As shown in FIG. 3, the first image frame IM1 currently captured by the camera device 2 includes a pen P1, a book B1, and part of an input device T1.

Referring also to FIG. 4, a schematic diagram of the partial area of the first image frame IM1 displayed by the display device 1. The processor 4 acquires the first image frame IM1 captured by the camera device 2 at the first preset angle, e.g., angle a. As shown in FIG. 4, the processor 4 controls the display device 1 to display the partial area A1 of the currently captured first image frame IM1 that corresponds to the first preset angle.

The partial area A1 is an area of height h1, cropped from the first image frame IM1, extending downward from the top edge of IM1.

Letting the height of the first image frame IM1 be h, the display device 1 then displays a proportion h1/h of the first image frame IM1.

Referring also to FIG. 5, a schematic diagram of the second image frame IM2 captured by the camera device 2 at a specific angle greater than the first preset angle, e.g., angle b, i.e., the frame captured when the head-mounted display device 100 has moved downward relative to the reference position by the specific angle b.

As shown in FIG. 5, as the user's head moves downward, it carries the head-mounted display device 100 and the camera device 2 downward; upon reaching the specific angle, the second image frame IM2 captured by the camera device 2 changes to the picture shown in FIG. 5.

As shown in FIG. 5, the second image frame IM2 currently captured by the camera device 2 includes the lower half of the pen P1, the lower half of the book B1, all of the input device T1, and part of a hand Z1.

The processor 4 acquires the second image frame IM2 captured by the camera device 2 at the specific angle.

Referring also to FIG. 6, a schematic diagram of the partial area of the second image frame IM2 displayed by the display device 1. As shown in FIG. 6, the processor 4 controls the display device 1 to display the partial area A2 of the currently captured second image frame IM2 that corresponds to the specific angle.

The partial area A2 is an area of height h2, cropped by the processor 4 from the second image frame IM2, extending downward from the top edge of IM2.

Letting the height of the second image frame IM2 be h, the display device 1 then displays a proportion h2/h of the second image frame IM2.

The ratio of the height h2 of area A2 to the height h1 of area A1 equals the ratio of the specific angle to the first preset angle. Letting the first preset angle be a and the specific angle be b, h1/h2 = a/b, so h2 = h1*b/a.
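The proportional-crop rule above can be sketched in a few lines of Python. This is a hypothetical illustration, not part of the patent: the function names and the clamping of the result to the frame height are our assumptions.

```python
def crop_height(h1: float, a: float, b: float, frame_height: float) -> float:
    """Height of the area to display when the device has pitched down to angle b.

    h1 is the height displayed at the first preset angle a; per the rule
    h1/h2 = a/b, the cropped height grows linearly with the angle.  The
    result is clamped to the frame height (our assumption) so that at
    large angles the complete frame is shown.
    """
    if b < a:
        return 0.0  # below the first preset angle the camera is off
    return min(h1 * b / a, frame_height)


def crop_top_region(frame, h2: int):
    """Return the top h2 rows of a frame stored as a list of rows."""
    return frame[:h2]
```

For example, with h1 = 120 px displayed at a = 15 degrees and a 480 px frame, pitching down to b = 30 degrees yields a 240 px strip cropped from the top of the frame.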
As shown in FIG. 2, the head-mounted display device 100 further includes a memory 5. In some embodiments, the memory 5 prestores the correspondence between the first preset angle a and the height h1. From this correspondence, the processor 4 can determine the height h1 corresponding to the head-mounted display device 100 having moved downward relative to the reference position by the first preset angle, crop from the first image frame IM1 the area A1 of height h1 extending downward from the top edge of IM1, and control the display device 1 to display the area A1 cropped from IM1. When the head-mounted display device 100 has moved downward relative to the reference position by any specific angle b, the processor 4 can compute, from the formula h2 = h1*b/a, the height h2 of the area A2 to be cropped at the specific angle b, crop from the second image frame IM2 an area of height h2 extending downward from the top edge of IM2 as the area A2, and control display of the area A2 cropped from IM2.

In some embodiments, the memory 5 may instead prestore correspondences between a plurality of angles and heights, including the correspondence between the first preset angle a and the height h1 as well as correspondences between multiple other angles and heights. From these correspondences, the processor 4 can determine the height corresponding to the angle the head-mounted display device 100 has currently reached while moving downward relative to the reference position, crop from the current image frame the area extending downward from its top edge to the corresponding height, and control display of the area cropped from the current image frame.
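The prestored angle-to-height table can be sketched as a lookup with linear interpolation between the stored angles. The table values and the interpolation step are our assumptions; the patent only requires that a height be determined from the stored correspondences.

```python
import bisect

# Hypothetical prestored correspondences: (angle in degrees, crop height in px).
ANGLE_TO_HEIGHT = [(15, 120), (30, 240), (45, 360), (60, 480)]


def height_for_angle(angle: float) -> float:
    """Look up the crop height for the current downward motion angle.

    Angles at or beyond the table ends are clamped to the end values;
    angles between two stored entries are linearly interpolated.
    """
    angles = [a for a, _ in ANGLE_TO_HEIGHT]
    if angle <= angles[0]:
        return ANGLE_TO_HEIGHT[0][1]
    if angle >= angles[-1]:
        return ANGLE_TO_HEIGHT[-1][1]
    i = bisect.bisect_right(angles, angle)
    (a0, h0), (a1, h1) = ANGLE_TO_HEIGHT[i - 1], ANGLE_TO_HEIGHT[i]
    t = (angle - a0) / (a1 - a0)
    return h0 + t * (h1 - h0)
```

A table of this kind trades the single proportional formula for per-angle tuning, e.g. to compensate for lens distortion at steep pitch angles.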
Referring also to FIG. 7, in some embodiments, when the motion sensor 3 detects that the downward motion angle relative to the reference position has reached a second preset angle c, the processor 4 controls the display device 1 to display the complete image frame IM3 currently captured by the camera device 2. The second preset angle is greater than the specific angle and the first preset angle; the specific angle is any angle between the first preset angle a and the second preset angle c.

The description above uses only one specific angle as an illustration. Obviously, as the head-mounted display device 100 moves downward relative to the reference position, the camera device 2 captures image frames at many specific-angle positions between the first and second preset angles. For each, the processor 4 computes, from the formula h2 = h1*b/a, the height h2 of the area A2 to crop at the specific angle b, crops from the current image frame the area of the corresponding height extending downward from the frame's top edge, and controls the display device 1 to successively display the area of corresponding height in the frame captured at each specific-angle position.

Thus, as the user lowers the head, the picture displayed by the display device 1 gradually grows, giving the user the sense of a gradually opening field of view.

As shown in FIG. 7, when the downward motion angle reaches the second preset angle c, the image frame IM3 shows the entire input device T1 and the hand Z1.

Thus, without taking off the head-mounted display device 100, the user can learn from the image frames captured by the camera device 2 the current position of the hand relative to the input device T1, move the hand to the input device T1, operate it, and enter the desired content, which greatly facilitates the user's operation.

As shown in FIGS. 6-7, in some embodiments, the input device T1 may include physical or virtual keys K1. When entering input through the input device T1, the user can clearly see the position of a key K1 in the image frames captured by the camera device 2 and move the hand Z1 to the corresponding key K1.

When the user raises the head and thereby moves the head-mounted display device 100 upward, as described above, while the motion angle detected by the motion sensor 3 decreases, the processor 4 acquires the image frames captured by the camera device 2 at the different motion angles and crops the corresponding area from the frame captured at the current motion angle.

Upward motion of the head-mounted display device 100 is the reverse process. For example, when the motion angle decreases to the specific angle b, the camera device 2 currently captures the second image frame IM2 shown in FIG. 5, and the processor 4 controls the display device 1 to display the partial area A2 of IM2 of height h2 corresponding to that angle, as shown in FIG. 6. When the motion angle further decreases to the first preset angle a, the camera device 2 currently captures the first image frame IM1 shown in FIG. 3, and the processor 4 controls the display device 1 to display the partial area A1 of IM1 of height h1 corresponding to the first preset angle, as shown in FIG. 4.

Thus, as the user raises the head, the picture displayed by the display device 1 gradually shrinks, giving the user the sense of a gradually closing field of view.

In some embodiments, when the motion sensor 3 detects that the downward motion angle relative to the reference position has decreased below the first preset angle a, the processor 4 turns off the camera device 2 and/or controls the display device 1 to stop displaying the image frames.
In other embodiments, the processor 4 controlling, according to changes in the motion angle detected by the motion sensor 3, the display device 1 to display the area of the image frame corresponding to the motion angle includes: while the motion angle detected by the motion sensor 3 increases, the processor 4 acquires the image frames captured by the camera device 2 at the different motion angles, crops a different area from each frame, and controls the display device 1 to display the area formed by stitching the currently cropped area with all previously cropped areas; while the motion angle detected by the motion sensor 3 decreases, the processor 4 acquires the image frames captured at the different motion angles, crops from the frame captured at the current motion angle the area corresponding to that angle, and controls the display device 1 to display the area cropped from the current frame.

As described for FIGS. 3 and 4, when the head-mounted display device 100 has moved downward relative to the reference position by the first preset angle, the camera device 2 captures the first image frame IM1, which includes the pen P1, the book B1, and part of the input device T1.

As shown in FIG. 4, the processor 4 controls the display device 1 to display the partial area A1 of the currently captured first image frame IM1 that corresponds to the first preset angle. The partial area A1 is an area of height h1 cropped from IM1 and including IM1's top edge.

Referring also to FIG. 8, a schematic diagram of the area A3 that, in the second embodiment, the processor 4 controls the display device 1 to display when the head-mounted display device 100 has moved downward relative to the reference position to the specific angle b.

As described above, at the specific angle b the camera device 2 captures the second image frame IM2 shown in FIG. 5. From h2 = h1*b/a, the processor 4 determines the area A2 of IM2 of height h2 that includes IM2's top edge.

The processor 4 then determines the part of area A2 that overlaps the area A1 displayed at the first preset angle as described for FIG. 4, and removes the image within that overlap from A2, obtaining the area of height h2-h1 shown in FIG. 8(2).

The processor 4 then joins the area A1 displayed at the first preset angle, shown in FIG. 8(1), with the remainder of area A2 after the overlap with A1 is removed, shown in FIG. 8(2), obtaining the stitched area A3 shown in FIG. 8(3). The upper part of the stitched area A3 is the area of height h1 cropped from the first image frame IM1, and the lower part is the area of height h2-h1 cropped from the second image frame IM2.

When the user raises the head and moves the head-mounted display device 100 upward, the processor 4 controls the display device 1 in the same way as in the first embodiment, which is not repeated here.

Obviously, when the head-mounted display device 100 has moved downward relative to the reference position to the second preset angle, the processor 4 likewise controls the display device 1 to display the entire currently acquired image frame.
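The second embodiment's stitching step can be sketched as follows. This is a hypothetical illustration with frames represented as lists of rows; the incremental strip height follows the h2-h1 rule above, but the function name and data layout are our assumptions.

```python
def stitch_displayed_area(prev_area, current_frame, prev_height, new_height):
    """Append to the previously displayed area only the newly revealed strip.

    prev_area holds the rows shown so far (height prev_height).  From the
    frame captured at the new angle we keep only rows prev_height..new_height,
    i.e. the strip of height new_height - prev_height that was not visible
    before, and stitch it below the earlier rows.
    """
    new_strip = current_frame[prev_height:new_height]
    return prev_area + new_strip
```

Unlike the first embodiment, rows already shown are kept from the older frame, so only the bottom strip of the display updates as the head pitches down.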
As shown in FIG. 1, the head-mounted display device 100 includes the input device T1, which is connected to the display device 1 and/or the earphone device 6 by wire or wirelessly. The input device T1 may be an input device with physical keys, or an input device such as a touch screen or touch pad displaying virtual keys.

In some embodiments, the input device T1 may be an external device compatible with the head-mounted display device 100 rather than part of it.

The camera device 2 may be a camera. The captured image frames may be still images taken at predetermined time intervals or preset motion angles, or may be video. In the video case, once the motion angle is greater than or equal to the first preset angle, the processor 4 may control the display device 1 to display a partial area of the video picture in real time, change the size of the displayed partial area upon reaching certain specific angles, and control the display device 1 to display the complete video picture upon reaching the second preset angle.

FIGS. 3-8 are merely exemplary. With the aid of the camera device 2, the user can view the outside scene and thereby find the input device T1 and other needed objects, or converse with other people, among other functions.

As shown in FIG. 1, the head-mounted display device further includes an earphone device 6 for outputting sound. The earphone device 6 may include a headband 61 and two earpieces 62 connected by the headband 61.

The processor 4 may be a central processing unit, microprocessor, microcontroller, single-chip microcomputer, digital signal processor, or the like. The memory 5 may be a flash memory card, solid-state memory, random access memory, or the like.

The motion sensor 3 may be a gyroscope, an acceleration sensor, or the like.

The head-mounted display device 100 may be a head-mounted display device such as smart glasses or a smart helmet.
Referring to FIG. 9, a flowchart of a visual aiding method according to an embodiment of the present invention. The method is used in the head-mounted display device 100 described above and includes the following steps.

When the motion sensor 3 detects that the head-mounted display device 100 has moved downward relative to the reference position by the first preset angle, the processor 4 turns on the camera device 2 to capture image frames (S901).

According to changes in the motion angle detected by the motion sensor 3, the processor 4 controls the display device 1 to display the area of the captured image frame that corresponds to the motion angle (S902). Specifically, when the motion angle detected by the motion sensor 3 increases, the processor 4 controls the display device 1 to display a larger area of the image frame; when it decreases, the processor 4 controls the display device 1 to display a smaller area of the image frame.

In some embodiments, the visual aiding method further includes the following steps.

The processor 4 determines whether the motion angle detected by the motion sensor 3 is greater than or equal to the second preset angle (S903). If so, step S904 is executed; if not, the flow returns to step S902.

The display device 1 is controlled to display the complete image frame captured by the camera device 2 (S904).

Obviously, in some embodiments, steps S903 and S904 may also be included in step S902; displaying the complete image frame means that the displayed area of the image frame is the entire frame.

In some embodiments, the visual aiding method further includes the following steps.

The processor 4 determines whether the motion angle detected by the motion sensor 3 is less than the first preset angle (S905). If so, step S906 is executed; if not, the flow returns to step S902. Step S905 may also be executed before step S903.

The processor 4 turns off the camera device 2 and/or controls the display device 1 to stop displaying the image frames captured by the camera device 2 (S906).
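Steps S901-S906 amount to a small decision procedure driven by the detected angle. A minimal sketch follows; the threshold parameter names and the returned command strings are our assumptions, not terms from the patent.

```python
def visual_aid_step(angle: float, first_preset: float, second_preset: float) -> str:
    """Map the current downward motion angle to a display command.

    Below the first preset angle the camera is off (S905/S906); at or
    beyond the second preset angle the complete frame is shown
    (S903/S904); in between, the area corresponding to the angle is
    shown (S902).
    """
    if angle < first_preset:
        return "camera_off"
    if angle >= second_preset:
        return "show_full_frame"
    return "show_partial_area"
```

Running this mapping on every motion-sensor sample reproduces the gradual opening and closing of the field of view as the user lowers and raises the head.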
Referring to FIG. 10, a sub-flowchart of step S902 in FIG. 9 according to an embodiment of the present invention. As shown in FIG. 10, in one embodiment step S902 includes the following.

The processor 4 acquires the first image frame IM1 captured by the camera device 2 at the first preset angle a (S9021).

The processor 4 controls the display device 1 to display the area A1 of the first image frame IM1 having the first height h1 (S9022). In some embodiments, the memory 5 of the head-mounted display device 100 prestores the correspondence between the first preset angle a and the height h1; the processor 4 determines from this correspondence the height h1 corresponding to the first preset angle a, crops from the first image frame IM1 an area of height h1 extending downward from IM1's top edge as the corresponding area A1, and controls display of the area A1 cropped from IM1.

The processor 4 acquires the second image frame IM2 captured by the camera device 2 at a specific angle b greater than the first preset angle (S9023).

The processor 4 controls the display device 1 to display the area A2 of the currently captured second image frame IM2 that corresponds to the specific angle b (S9024). In some embodiments, the processor 4 computes, from the formula h2 = h1*b/a, the height h2 corresponding to the downward motion relative to the reference position reaching the specific angle b, crops from the second image frame IM2 an area of height h2 extending downward from IM2's top edge as the corresponding area A2, and controls display of the area A2 cropped from IM2. In other embodiments, the memory 5 of the head-mounted display device 100 prestores correspondences between a plurality of angles and heights; the processor 4 determines from these correspondences the height h2 corresponding to the specific angle b, then crops from IM2 an area of height h2 extending downward from its top edge as the corresponding area A2, and controls display of the area A2 cropped from IM2.

In other embodiments, step S902 includes: while the motion angle detected by the motion sensor 3 increases, the processor 4 acquires the image frames captured by the camera device 2 at the different motion angles, crops a different area from each frame, and controls the display device 1 to display the area formed by stitching the currently cropped area with all previously cropped areas; while the motion angle detected by the motion sensor 3 decreases, the processor 4 acquires the image frames captured at the different motion angles, crops from the frame captured at the current motion angle the area corresponding to that angle, and controls the display device 1 to display the cropped area.

Thus, in the present application, by using the camera device 2 as a visual aid, the camera can be turned on when the user needs to look down to search for something, helping the user find objects such as the input device.
The above are preferred embodiments of the present invention. It should be noted that those of ordinary skill in the art may make several improvements and refinements without departing from the principles of the present invention, and such improvements and refinements are also regarded as within the scope of the present invention.

Claims (20)

  1. A head-mounted display device comprising a display device and a processor, wherein the head-mounted display device further comprises:
    a camera device for capturing image frames around the head-mounted display device; and
    a motion sensor for detecting a motion angle of the head-mounted display device relative to a reference position;
    wherein the processor is connected to the display device, the camera device, and the motion sensor, and is configured to turn on the camera device to capture image frames when the motion sensor detects that the head-mounted display device has moved downward relative to the reference position by a first preset angle; and the processor is further configured to control, according to changes in the motion angle detected by the motion sensor, the display device to display the area of the captured image frame that corresponds to the motion angle.
  2. The head-mounted display device of claim 1, wherein the processor controls the display device to display a larger area of the image frame when the motion angle detected by the motion sensor increases, and controls the display device to display a smaller area of the image frame when the motion angle detected by the motion sensor decreases.
  3. The head-mounted display device of claim 2, wherein the processor controls the display device to display the complete image frame captured by the camera device when the motion angle detected by the motion sensor is greater than or equal to a second preset angle.
  4. The head-mounted display device of claim 1, wherein the processor turns off the camera device and/or controls the display device to stop displaying the image frames captured by the camera device when the motion angle detected by the motion sensor is less than the first preset angle.
  5. The head-mounted display device of any one of claims 1-4, wherein, while the motion angle detected by the motion sensor increases or decreases, the processor acquires the image frames captured by the camera device at the different motion angles, crops from the frame captured at the current motion angle the area corresponding to that angle, and controls the display device to display the area cropped from the image frame.
  6. The head-mounted display device of claim 5, wherein the processor acquiring the image frames captured by the camera device at the different motion angles, cropping from the frame captured at the current motion angle the area corresponding to that angle, and controlling the display device to display the cropped area comprises:
    the processor acquiring a first image frame captured by the camera device at the first preset angle, cropping from the first image frame the area corresponding to the first preset angle, and controlling the display device to display the area cropped from the first image frame; and the processor acquiring a second image frame captured by the camera device at a specific angle greater than the first preset angle, cropping from the second image frame the area corresponding to the specific angle, and controlling the display device to display the area cropped from the second image frame.
  7. The head-mounted display device of claim 6, further comprising a memory prestoring a correspondence between the first preset angle and a first height, wherein the processor cropping from the first image frame the area corresponding to the first preset angle and controlling the display device to display the cropped area comprises: the processor determining, from the correspondence between the first preset angle and the first height, the first height corresponding to the first preset angle, cropping from the first image frame an area of the first height extending downward from the frame's top edge, and controlling display of the area cropped from the first image frame.
  8. The head-mounted display device of claim 7, wherein the processor cropping from the second image frame the area corresponding to the specific angle and controlling the display device to display the cropped area comprises: the processor computing, from the formula h2 = h1*b/a, the second height corresponding to the downward motion relative to the reference position reaching the specific angle, cropping from the second image frame an area of the second height extending downward from the frame's top edge, and controlling display of the area cropped from the second image frame; wherein h2 is the second height, h1 is the first height, b is the specific angle, and a is the first preset angle.
  9. The head-mounted display device of claim 7, wherein the memory further prestores correspondences between a plurality of angles and heights, and the processor determines from these correspondences the second height corresponding to the specific angle, then crops from the second image frame an area of the second height extending downward from the frame's top edge, and controls display of the area cropped from the second image frame.
  10. The head-mounted display device of any one of claims 1-4, wherein, while the motion angle detected by the motion sensor increases, the processor acquires the image frames captured by the camera device at the different motion angles, crops a different area from each frame, and controls the display device to display the area formed by stitching the currently cropped area with all previously cropped areas.
  11. A visual aiding method applied to a head-mounted display device comprising a display device, a camera device, and a motion sensor, wherein the visual aiding method comprises:
    turning on the camera device to capture image frames when the motion sensor detects that the head-mounted display device has moved downward relative to the reference position by a first preset angle; and
    controlling, according to changes in the motion angle detected by the motion sensor, the display device to display the area of the captured image frame that corresponds to the motion angle.
  12. The method of claim 11, wherein the step of controlling, according to changes in the motion angle detected by the motion sensor, the display device to display the area of the captured image frame corresponding to the motion angle comprises:
    controlling the display device to display a larger area of the image frame when the motion angle detected by the motion sensor increases; and
    controlling the display device to display a smaller area of the image frame when the motion angle detected by the motion sensor decreases.
  13. The method of claim 12, further comprising:
    controlling the display device to display the complete image frame captured by the camera device when the motion angle detected by the motion sensor is greater than or equal to a second preset angle.
  14. The method of claim 11, further comprising:
    turning off the camera device and/or controlling the display device to stop displaying the image frames captured by the camera device when the motion angle detected by the motion sensor is less than the first preset angle.
  15. The method of any one of claims 11-14, wherein the step of controlling, according to changes in the motion angle detected by the motion sensor, the display device to display the area of the captured image frame corresponding to the motion angle comprises:
    while the motion angle detected by the motion sensor increases or decreases, acquiring the image frames captured by the camera device at the different motion angles, cropping from the frame captured at the current motion angle the area corresponding to that angle, and controlling the display device to display the area cropped from the image frame.
  16. The method of claim 15, wherein the step of acquiring the image frames captured by the camera device at the different motion angles, cropping from the frame captured at the current motion angle the area corresponding to that angle, and controlling the display device to display the cropped area comprises:
    acquiring a first image frame captured by the camera device at the first preset angle, cropping from the first image frame the area corresponding to the first preset angle, and controlling the display device to display the area cropped from the first image frame; and
    acquiring a second image frame captured by the camera device at a specific angle greater than the first preset angle, cropping from the second image frame the area corresponding to the specific angle, and controlling the display device to display the area cropped from the second image frame.
  17. The method of claim 16, wherein the step of cropping from the first image frame the area corresponding to the first preset angle and controlling the display device to display the cropped area comprises:
    determining, from the correspondence between the first preset angle and a first height, the first height corresponding to the first preset angle;
    cropping from the first image frame an area of the first height extending downward from the frame's top edge; and
    controlling display of the area cropped from the first image frame.
  18. The method of claim 17, wherein the step of cropping from the second image frame the area corresponding to the specific angle and controlling the display device to display the cropped area comprises:
    computing, from the formula h2 = h1*b/a, the second height corresponding to the downward motion relative to the reference position reaching the specific angle, wherein h2 is the second height, h1 is the first height, b is the specific angle, and a is the first preset angle;
    cropping from the second image frame an area of the second height extending downward from the frame's top edge; and
    controlling display of the area cropped from the second image frame.
  19. The method of claim 17, wherein the processor determines, from correspondences between a plurality of angles and heights, the second height corresponding to the specific angle, then crops from the second image frame an area of the second height extending downward from the frame's top edge, and controls display of the area cropped from the second image frame.
  20. The method of any one of claims 11-14, wherein the step of controlling, according to changes in the motion angle detected by the motion sensor, the display device to display the area of the captured image frame corresponding to the motion angle comprises:
    while the motion angle detected by the motion sensor increases, acquiring the image frames captured by the camera device at the different motion angles, cropping a different area from each frame, and controlling the display device to display the area formed by stitching the currently cropped area with all previously cropped areas.
PCT/CN2016/111510 2016-12-22 2016-12-22 Head-mounted display device and visual aiding method thereof WO2018112838A1 (zh)

Priority Applications (4)

Application Number Priority Date Filing Date Title
US16/471,352 US20190333468A1 (en) 2016-12-22 2016-12-22 Head mounted display device and visual aiding method
CN201680042725.4A CN107980220A (zh) 2016-12-22 2016-12-22 Head-mounted display device and visual aiding method thereof
EP16924452.2A EP3561570A1 (en) 2016-12-22 2016-12-22 Head-mounted display apparatus, and visual-aid providing method thereof
PCT/CN2016/111510 WO2018112838A1 (zh) 2016-12-22 2016-12-22 Head-mounted display device and visual aiding method thereof

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2016/111510 WO2018112838A1 (zh) 2016-12-22 2016-12-22 Head-mounted display device and visual aiding method thereof

Publications (1)

Publication Number Publication Date
WO2018112838A1 true WO2018112838A1 (zh) 2018-06-28

Family

ID=62004258

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2016/111510 WO2018112838A1 (zh) 2016-12-22 2016-12-22 Head-mounted display device and visual aiding method thereof

Country Status (4)

Country Link
US (1) US20190333468A1 (zh)
EP (1) EP3561570A1 (zh)
CN (1) CN107980220A (zh)
WO (1) WO2018112838A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI704376B (zh) * 2019-07-19 2020-09-11 Acer Incorporated Viewing-angle correction method, virtual reality display system, and computing device

Families Citing this family (3)

Publication number Priority date Publication date Assignee Title
US10423241B1 (en) * 2017-07-31 2019-09-24 Amazon Technologies, Inc. Defining operating areas for virtual reality systems using sensor-equipped operating surfaces
CN116224593A (zh) * 2018-06-25 2023-06-06 Maxell, Ltd. Head-mounted display, head-mounted display collaboration system, and method thereof
CN113596369B (zh) * 2021-07-23 2023-09-01 深圳市警威警用装备有限公司 Multi-terminal collaborative law-enforcement recording method, electronic device, and computer storage medium

Citations (5)

Publication number Priority date Publication date Assignee Title
US20150084841A1 (en) * 2010-12-03 2015-03-26 Esight Corp. Apparatus and method for a bioptic real time video system
CN105209959A (zh) * 2013-03-14 2015-12-30 Qualcomm Incorporated User interface for a head-mounted display
US20160107572A1 (en) * 2014-10-20 2016-04-21 Skully Helmets Methods and Apparatus for Integrated Forward Display of Rear-View Image and Navigation Information to Provide Enhanced Situational Awareness
CN105992986A (zh) * 2014-01-23 2016-10-05 Sony Corporation Image display device and image display method
CN106199963A (zh) * 2014-09-01 2016-12-07 Seiko Epson Corporation Display device, control method thereof, and computer program

Family Cites Families (4)

Publication number Priority date Publication date Assignee Title
KR101658562B1 (ko) * 2010-05-06 2016-09-30 LG Electronics Inc. Mobile terminal and control method thereof
CN103226282B (zh) * 2013-05-13 2016-09-07 合肥华恒电子科技有限责任公司 Portable virtual reality projection device
CN103439794B (zh) * 2013-09-11 2017-01-25 Baidu Online Network Technology (Beijing) Co., Ltd. Calibration method for a head-mounted device and head-mounted device
CN109069927A (zh) * 2016-06-10 2018-12-21 Colopl, Inc. Method for providing a virtual space, program for causing a computer to implement the method, and system for providing a virtual space


Also Published As

Publication number Publication date
CN107980220A (zh) 2018-05-01
EP3561570A1 (en) 2019-10-30
US20190333468A1 (en) 2019-10-31


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
    Ref document number: 16924452
    Country of ref document: EP
    Kind code of ref document: A1
NENP Non-entry into the national phase
    Ref country code: DE
ENP Entry into the national phase
    Ref document number: 2016924452
    Country of ref document: EP
    Effective date: 20190722