CN113190110A - Interface element control method and device of head-mounted display equipment - Google Patents

Interface element control method and device of head-mounted display equipment

Info

Publication number
CN113190110A
CN113190110A (application CN202110342716.XA)
Authority
CN
China
Prior art keywords
wrist
target
target image
interface element
interface
Prior art date
Legal status (assumed; not a legal conclusion)
Pending
Application number
CN202110342716.XA
Other languages
Chinese (zh)
Inventor
吴涛
Current Assignee (the listed assignee may be inaccurate)
Qingdao Xiaoniao Kankan Technology Co Ltd
Original Assignee
Qingdao Xiaoniao Kankan Technology Co Ltd
Priority date (assumed; not a legal conclusion)
Filing date
Publication date
Application filed by Qingdao Xiaoniao Kankan Technology Co Ltd filed Critical Qingdao Xiaoniao Kankan Technology Co Ltd
Priority to CN202110342716.XA priority Critical patent/CN113190110A/en
Publication of CN113190110A publication Critical patent/CN113190110A/en
Pending legal-status Critical Current

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/017: Gesture based interaction, e.g. based on a set of recognized hand gestures
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048: Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0481: Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048: Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0487: Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 17/00: Three dimensional [3D] modelling, e.g. data description of 3D objects

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The disclosure relates to an interface element control method and apparatus for a head-mounted display device, to the head-mounted display device itself, and to a computer-readable storage medium. The head-mounted display device comprises at least one camera for capturing a target image of a target area, and a display interface in which the target image is displayed. The method comprises the following steps: acquiring a target image captured by the at least one camera; in the case that the image content of the target image includes at least one wrist, determining from the target image whether the at least one wrist meets a preset condition, the preset condition including that the wrist is in a stationary state; if the result of the determination is positive, determining in the display interface a target display area centered on a target wrist, the target wrist being one of the at least one wrist; and displaying at least one interface element of the current scene in the target display area, so that after the interface element selected by the user is determined according to the user's gesture, the information corresponding to that interface element is used as input information.

Description

Interface element control method and device of head-mounted display equipment
Technical Field
Embodiments of the present disclosure relate to the technical field of virtual reality and, more particularly, to an interface element control method and apparatus of a head-mounted display device, a head-mounted display device, and a computer-readable storage medium.
Background
A head-mounted display (HMD) is a display device that can be worn on the head of a user and can achieve effects such as virtual reality (VR), augmented reality (AR), and mixed reality (MR).
When in use, an HMD may enclose the user's head or eyes in a closed environment so that the user is immersed in a virtual environment. To display real content in AR and MR, at least one color camera is additionally installed on the HMD; the real image of the real world captured by the at least one color camera is projected to the user's eyes, simulating the user looking directly at the external three-dimensional physical space environment. The HMD displays the superimposed real and virtual images through its display interface.
Selectable and/or operable interface elements, such as user interface (UI) elements and graphical user interface (GUI) elements, may also be presented to the user in the display interface of the HMD, so that the user can interact with them through gestures. Currently, as the application fields of HMDs increase (e.g., entertainment, medicine, education, industry), a variety of virtual reality scenes (or augmented reality or mixed reality scenes) can be displayed on the display interface of the HMD, with interface elements displayed at predetermined positions.
Disclosure of Invention
It is an object of the embodiments of the present disclosure to provide a new technical solution for interface element control of a head-mounted display device.
According to a first aspect of the present disclosure, there is provided an interface element control method of a head-mounted display device, the head-mounted display device including at least one camera for capturing a target image of a target area, and a display interface in which the target image is displayed. The method comprises: acquiring a target image captured by the at least one camera; in the case that the image content of the target image includes at least one wrist, determining from the target image whether the at least one wrist meets a preset condition, the preset condition including that the wrist is in a stationary state; if the result of the determination is positive, determining in the display interface a target display area centered on a target wrist, the target wrist being one of the at least one wrist; and displaying at least one interface element of the current scene in the target display area, so that after the interface element selected by the user is determined according to the user's gesture, the information corresponding to that interface element is used as input information.
Optionally, the preset conditions further include: the posture of the wrist is a preset posture.
Optionally, the preset gesture includes: the inner side of the wrist faces the head mounted display device; wherein, the inner side of the wrist and the palm center are positioned at the same side.
Optionally, the at least one wrist comprises a plurality of wrists; before determining a target display area centered on the target wrist in the display interface, the method further comprises: determining a preset wrist among the plurality of wrists according to the target image, wherein the preset wrist is the left wrist or the right wrist; and taking the preset wrist as the target wrist.
Optionally, the gesture is a gesture made by a target hand in the target image, and a wrist of the target hand is a target wrist or a non-target wrist.
Optionally, the at least one interface element comprises one or more of: windows, dialog boxes, menus, scroll bars, and graphical symbols.
Optionally, the at least one camera comprises two cameras, the two cameras being arranged at two eye positions, respectively.
According to a second aspect of the present disclosure, there is also provided an interface element control apparatus of a head-mounted display device, the head-mounted display device including at least one camera for taking a target image of a target area, and a display interface in which the target image is displayed; the device includes: the acquisition module is used for acquiring a target image shot by at least one camera; the determining module is used for determining whether the at least one wrist meets a preset condition according to the target image under the condition that the image content of the target image acquired by the acquiring module comprises the at least one wrist; wherein the preset conditions include: the wrist is in a static state; the processing module is used for determining a target display area which takes the target wrist as the center in the display interface if the determination result of the determining module is positive; wherein the target wrist is one of the at least one wrist; and the display module is used for displaying at least one interface element in the current scene in the target display area obtained by the processing module so as to take the information corresponding to the interface element as input information after the interface element selected by the user is determined according to the gesture of the user.
Optionally, the preset conditions further include: the posture of the wrist is a preset posture.
Optionally, the preset gesture includes: the inner side of the wrist faces the head mounted display device; wherein, the inner side of the wrist and the palm center are positioned at the same side.
Optionally, the at least one wrist comprises a plurality of wrists; before the target display area centered on the target wrist is determined in the display interface, the apparatus further comprises: a wrist determining module, configured to determine a preset wrist among the plurality of wrists according to the target image, wherein the preset wrist is the left wrist or the right wrist, and to take the preset wrist as the target wrist.
Optionally, the gesture is a gesture made by a target hand in the target image, and a wrist of the target hand is a target wrist or a non-target wrist.
Optionally, the at least one interface element comprises one or more of: windows, dialog boxes, menus, scroll bars, and graphical symbols.
Optionally, the at least one camera comprises two cameras, the two cameras being arranged at two eye positions, respectively.
According to a third aspect of the present disclosure, there is also provided a head-mounted display device comprising at least one camera for taking a target image of a target area, and a display interface in which the target image is displayed; the head-mounted display device further comprises a memory for storing a computer program and a processor; the processor is adapted to execute a computer program to implement the method according to the first aspect of the present disclosure.
According to a fourth aspect of the present disclosure, there is also provided a computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements a method according to the first aspect of the present disclosure.
The beneficial effects of the embodiments of the present disclosure are as follows: the position of the user's wrist can be tracked in the display interface of the head-mounted display device; the wrist position locates the target display area used for displaying at least one interface element of the current scene; and the at least one interface element is displayed in that area, so that after the interface element selected by the user is determined according to the user's gesture, the information corresponding to that interface element is used as input information. The user can therefore specify the display position of interface elements in the display interface in real time according to the virtual reality scene, which improves the convenience of the head-mounted display device in use.
Other features of embodiments of the present disclosure and advantages thereof will become apparent from the following detailed description of exemplary embodiments thereof, which is to be read in connection with the accompanying drawings.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of the specification, illustrate embodiments of the disclosure and together with the description, serve to explain the principles of the embodiments of the disclosure.
Fig. 1 illustrates a flowchart of a method of controlling an interface element of a head-mounted display device according to an embodiment of the present disclosure;
FIG. 2 illustrates a preset gesture of a wrist in another method for controlling interface elements of a head-mounted display device according to an embodiment of the disclosure;
fig. 3 is a flowchart illustrating a method of controlling an interface element of a head-mounted display device according to another embodiment of the disclosure;
fig. 4 shows a functional structure block diagram of an interface element control apparatus of a head-mounted display device according to an embodiment of the present disclosure;
fig. 5 shows a functional structure block diagram of a head-mounted display device provided by an embodiment of the present disclosure.
Detailed Description
Various exemplary embodiments of the present disclosure will now be described in detail with reference to the accompanying drawings. It should be noted that: the relative arrangement of the components and steps, the numerical expressions, and numerical values set forth in these embodiments do not limit the scope of the present disclosure unless specifically stated otherwise.
The following description of at least one exemplary embodiment is merely illustrative in nature and is in no way intended to limit the disclosure, its application, or uses.
Techniques, methods, and apparatus known to those of ordinary skill in the relevant art may not be discussed in detail but are intended to be part of the specification where appropriate.
In all examples shown and discussed herein, any particular value should be construed as merely illustrative, and not limiting. Thus, other examples of the exemplary embodiments may have different values.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, further discussion thereof is not required in subsequent figures.
The HMD can present to the user a visualized three-dimensional virtual image in the virtual world, or the HMD can overlay the virtual world and the real world, e.g., overlay a virtual image in the virtual world with a real image in the real world, thereby presenting to the user a visualized three-dimensional image that is a combination of the virtual image and the real image.
Currently, as the application fields of the HMD expand, a variety of virtual reality scenes can be displayed in its display interface. In the entertainment field, the HMD may display game scenes of human-computer interaction games, present experience scenes in theme parks (for example, a virtual reality scene of ancient Earth ecology while the user experiences it), or play movies. In the medical field, the HMD may support remote surgery and simulate actual operating scenes. In the education field, it may present various experience-type scenes to realize experiential teaching. In industrial fields such as steel mills, electric power, and petrochemical production, it may simulate a variety of manufacturing scenes.
In the above scenario, one or more interface elements may be displayed in the display interface of the HMD, such as a menu or sub-menu containing one or more operable options, or other virtual interface elements that may be operated, such as virtual static buttons, or dynamic animals and objects, etc.
In practice, a user needs to specify the display positions of interface elements in the display interface in real time according to different scenes. For example, while playing a game with the HMD, multiple game scenes may switch back and forth within a short time, in which case the user may want to specify the display positions of interface elements in real time according to the actual situation. As another example, during remote surgery using the HMD, a doctor needs to specify in real time, according to professional knowledge and experience, the display positions of the interface elements shown in the scene. However, the positions of interface elements in the display interface are currently preset, and the user cannot specify their display positions in real time, which causes considerable inconvenience during use.
Based on the existing problems, the embodiment of the present disclosure provides a new technical solution for controlling interface elements of a head-mounted display device. Various embodiments and examples according to the present disclosure are described below with reference to the drawings.
< method examples >
Fig. 1 shows a flowchart of a method for controlling an interface element of a head-mounted display device according to an embodiment of the present disclosure. The head-mounted display device comprises at least one camera for shooting a target image of a target area and a display interface, wherein the target image is displayed in the display interface.
As shown in fig. 1, the method includes the following steps S110 to S140.
Step S110: an image of a target captured by at least one camera is acquired.
In some embodiments, the at least one camera includes two cameras disposed at two human eye locations, respectively, to accurately simulate a human eye looking directly at an external three-dimensional physical space environment.
Of course, it is understood that the at least one camera may include only one camera, or the at least one camera may include more than three cameras, as long as the three-dimensional physical space environment directly seen by the human eye can be simulated.
In some embodiments, the at least one camera is, for example, a color camera and the target image is a two-dimensional image captured by the color camera.
Typical parameters of the color camera are as follows:
resolution: 1280 px × 720 px or higher;
frame rate: 60 Hz or higher;
color mode: three-channel (red, green, blue) RGB color;
FOV: approximately 130° (horizontal field angle) × 110° (vertical field angle).
In some examples, the focal length of the lens in a color camera may be 16mm or even shorter, with a FOV close to or equal to 180 ° (e.g., a fisheye camera).
Step S120: determining whether at least one wrist meets a preset condition according to the target image under the condition that the image content of the target image comprises at least one wrist; wherein the preset conditions include: the wrist is in a resting state.
In the case that the image content of the target image includes at least one wrist, for each wrist, the position of the wrist in the target image (for example, its image pixel position) may be determined first. It is then determined whether the wrist remains stationary (or substantially stationary) over a preset time, for example by checking whether its image pixel position changes by more than a preset threshold during that time. If the position does not change (or the change stays below the preset threshold), the wrist is determined to be in a stationary state; otherwise, it is determined not to be in a stationary state.
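The stationarity test described above can be sketched as follows. This is a minimal illustrative implementation, not part of the disclosure: the class name, the observation window, and the pixel threshold are all assumptions that the embodiments leave to the implementer.

```python
from collections import deque

class WristStillnessDetector:
    """Hypothetical sketch of the stationarity check: a wrist counts as
    stationary once its pixel position has stayed within `max_shift_px`
    for (roughly) `window_s` seconds."""

    def __init__(self, window_s=1.0, max_shift_px=10.0):
        self.window_s = window_s          # preset time, in seconds
        self.max_shift_px = max_shift_px  # preset threshold, in pixels
        self.history = deque()            # (timestamp, (x, y)) samples

    def update(self, timestamp, position):
        """Record one (x, y) wrist sample; return True once the wrist
        has been still for the whole observation window."""
        self.history.append((timestamp, position))
        # Drop samples that fall outside the observation window.
        while self.history and timestamp - self.history[0][0] > self.window_s:
            self.history.popleft()
        # The retained samples must actually span (most of) the window.
        if timestamp - self.history[0][0] < self.window_s * 0.9:
            return False
        xs = [p[0] for _, p in self.history]
        ys = [p[1] for _, p in self.history]
        shift = max(max(xs) - min(xs), max(ys) - min(ys))
        return shift <= self.max_shift_px
```

A per-frame tracker would call `update` with each new wrist position; a slowly drifting wrist keeps resetting the test because the position range over the window exceeds the threshold.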
The preset time and the preset threshold may be set by those skilled in the art according to actual situations, and the embodiment of the disclosure does not limit this.
In some embodiments, the preset conditions further include: the posture of the wrist is a preset posture.
Exemplarily, as shown in fig. 2, assuming that the user currently wearing the HMD is looking toward the plane of the drawing, the preset posture is that the inner side of the wrist faces the head-mounted display device, where the inner side of the wrist is on the same side as the palm center.
Of course, it is understood that the preset gesture may also be set by the user according to his own habits, for example, the preset gesture may be that the outer side of the wrist (the side facing away from the inner side) faces the head mounted display device, and so on.
In some embodiments, before step S120 is performed, it may first be detected whether at least one wrist exists in the target image. If the detection result is yes, it is determined that the image content of the target image includes at least one wrist; if no, it is determined that no wrist exists in the image content. The manner of detecting a wrist in the target image may be chosen by those skilled in the art according to actual conditions; for example, computer vision and artificial intelligence techniques can be combined to recognize a wrist image with high accuracy and low latency. The embodiments of the present disclosure do not limit this.
Step S130: if the result of the determination is positive, determining a target display area taking the target wrist as the center in the display interface; wherein the target wrist is one of the at least one wrist.
If the result of the determination is yes, that is, if at least one wrist meets the preset condition, a target display area centered on the target wrist is determined in the display interface.
The size and shape of the target display area can be set by those skilled in the art according to practical situations, and the embodiment of the present disclosure does not limit this.
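As an illustration of step S130, the target display area can be computed as a rectangle centered on the wrist position and clamped so it stays inside the display interface. The rectangle shape and the default size below are assumptions; as noted above, the embodiments leave size and shape to the implementer.

```python
def target_display_area(wrist_xy, interface_size, area_size=(300, 200)):
    """Hypothetical sketch: return (x, y, w, h) of a rectangle centered
    on the target wrist, clipped to the display interface bounds.
    All sizes are illustrative assumptions."""
    w, h = area_size
    iw, ih = interface_size
    cx, cy = wrist_xy
    # Center the rectangle on the wrist, then clamp to the interface.
    x = min(max(cx - w // 2, 0), iw - w)
    y = min(max(cy - h // 2, 0), ih - h)
    return (x, y, w, h)
```

Clamping matters near the interface edges: if the wrist sits in a corner of the display interface, the area slides inward rather than being cropped.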
In some embodiments, at least one wrist comprises one wrist, in which case the wrist is directly targeted in step S130.
In some embodiments, the at least one wrist comprises a plurality of wrists, in which case, before performing step S130, as shown in fig. 3, the following steps S310 to S320 may also be performed:
step S310: determining a preset wrist in a plurality of wrists according to the target image; wherein, predetermine the wrist and include: left wrist or right wrist.
In some examples, the preset wrist is the left wrist, for example by default (in general, the user's right hand is better than the left at operations such as clicking and sliding, so the position of the left wrist is more suitable for displaying operable interface elements), or because the user has preset it as the left wrist. In this case, the left wrist is recognized among the plurality of wrists so that it can be taken as the target wrist in the subsequent step (corresponding to step S320).
In some examples, the preset wrist is a right wrist, for example, the preset wrist is preset as the right wrist, in which case the right wrist is recognized in a plurality of wrists so that the right wrist is taken as the target wrist in the subsequent step (corresponding to step S320).
The manner of identifying the left wrist and the right wrist can be set by those skilled in the art according to actual situations, and the embodiment of the disclosure does not limit this.
Step S320: and taking the preset wrist as the target wrist.
The target wrist is used to locate the target display area in step S130.
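Steps S310 to S320 can be sketched as a simple selection over the recognized wrists. The list-of-dicts data model and the 'side' label are hypothetical; the embodiments do not fix how handedness is represented by the recognition stage.

```python
def pick_target_wrist(wrists, preset="left"):
    """Hypothetical sketch of steps S310-S320: among the detected
    wrists, pick the one whose handedness matches the preset wrist
    ('left' by default, on the assumption that most users operate
    elements with the right hand)."""
    for wrist in wrists:
        if wrist["side"] == preset:
            return wrist
    return None  # the preset wrist is not visible in the target image
```

The `None` case would fall back to not displaying the interface elements until the preset wrist re-enters the target image.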
Step S140: and displaying at least one interface element in the current scene in the target display area, so that after the interface element selected by the user is determined according to the gesture of the user, the information corresponding to the interface element is used as input information.
Interface elements may include User Interface (UI) elements and/or Graphical User Interface (GUI) elements.
Illustratively, the at least one interface element includes one or more of: windows, dialog boxes, menus, scroll bars, and graphical symbols.
The gesture is a gesture made by a target hand in the target image, and the wrist of the target hand is a target wrist or a non-target wrist.
In some examples, the target display area is positioned according to a wrist of one hand of the user, in which case the user selects or manipulates the interface element displayed in the target display area by a gesture of the other hand. The HMD responds to the gesture of the user, determines the interface element selected by the user according to the gesture, and then takes the information corresponding to the interface element as input information to realize the interaction process of the gesture and the interface element.
In other examples, the target display area is positioned according to a wrist of a hand of the user, in which case the user selects or manipulates the interface elements displayed in the target display area through gestures of the hand. The HMD responds to the gesture of the user, determines the interface element selected by the user according to the gesture, and then takes the information corresponding to the interface element as input information to realize the interaction process of the gesture and the interface element.
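The interaction of step S140, in either of the two examples above, can be illustrated as a hit test that maps a pointing-gesture fingertip position to the interface element it falls inside. The element records (bounding box plus 'info' payload) are illustrative assumptions; the disclosure does not fix a data model for interface elements.

```python
def select_element(fingertip_xy, elements):
    """Hypothetical sketch of the gesture interaction: return the
    'info' payload of the interface element whose bounding box
    contains the fingertip, to be used as the input information.
    Returns None when the gesture misses every element."""
    x, y = fingertip_xy
    for elem in elements:
        ex, ey, ew, eh = elem["bbox"]
        if ex <= x < ex + ew and ey <= y < ey + eh:
            return elem["info"]
    return None
```

In practice the fingertip position would come from the same hand-recognition stage that locates the wrists, and the element bounding boxes would be laid out inside the target display area of step S130.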
In the embodiments above, the position of the user's wrist is tracked in the display interface of the head-mounted display device: the wrist position locates the target display area in which at least one interface element of the current scene is displayed, so that after the interface element selected by the user is determined according to the user's gesture, the information corresponding to that interface element is used as input information. The user can therefore specify the display position of interface elements in the display interface in real time according to the virtual reality scene, which improves the convenience of the head-mounted display device in use.
< apparatus embodiment >
Fig. 4 shows a functional structure block diagram of an interface element control apparatus of a head-mounted display device according to an embodiment of the present disclosure. The head-mounted display device includes at least one camera for capturing a target image of a target area, and a display interface in which the target image is displayed.
As shown in fig. 4, the interface element control apparatus 40 of the head-mounted display device may include an acquisition module 41, a determination module 42, a processing module 43, and a display module 44.
An acquiring module 41, configured to acquire a target image captured by at least one camera.
A determining module 42, configured to determine whether at least one wrist meets a preset condition according to the target image in a case that the image content of the target image acquired by the acquiring module 41 includes at least one wrist; wherein the preset conditions include: the wrist is in a resting state.
A processing module 43, configured to determine, if the determination result of the determining module 42 is yes, a target display area centered on the target wrist in the display interface; wherein the target wrist is one of the at least one wrist.
A display module 44, configured to display at least one interface element in the current scene in the target display area obtained by the processing module 43, so that after the interface element selected by the user is determined according to the gesture of the user, information corresponding to the interface element is used as input information.
Optionally, the preset conditions further include: the posture of the wrist is a preset posture.
Optionally, the preset gesture includes: the inner side of the wrist faces the head mounted display device; wherein, the inner side of the wrist and the palm center are positioned at the same side.
Optionally, the at least one wrist comprises a plurality of wrists; before the target display area centered on the target wrist is determined in the display interface, the interface element control apparatus of the head-mounted display device further comprises: a wrist determining module, configured to determine a preset wrist among the plurality of wrists according to the target image, wherein the preset wrist is the left wrist or the right wrist, and to take the preset wrist as the target wrist.
Optionally, the gesture is a gesture made by a target hand in the target image, and a wrist of the target hand is a target wrist or a non-target wrist.
Optionally, the at least one interface element comprises one or more of: windows, dialog boxes, menus, scroll bars, and graphical symbols.
Optionally, the at least one camera comprises two cameras, the two cameras being arranged at two eye positions, respectively.
The interface element control apparatus 40 of the head-mounted display device may be, for example, a chip in the head-mounted display device.
Fig. 5 shows a functional structure block diagram of a head-mounted display device provided by an embodiment of the present disclosure. As shown in fig. 5, the head-mounted display device 500 comprises a processor 510 and a memory 520, the memory 520 being configured to store an executable computer program, the processor 510 being configured to perform a method according to any of the above method embodiments according to the control of the computer program.
The modules of the head-mounted display device 500 may be implemented by the processor 510 in the present embodiment executing a computer program stored in the memory 520, or may be implemented by other circuit structures, which is not limited herein.
The present disclosure may be systems, methods, and/or computer program products. The computer program product may include a computer-readable storage medium having computer-readable program instructions embodied thereon for causing a processor to implement various aspects of the present disclosure.
The computer-readable storage medium may be a tangible device that can retain and store instructions for use by an instruction execution device. The computer-readable storage medium may be, for example, but is not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer-readable storage medium include the following: a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a static random access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital versatile disc (DVD), a memory stick, a floppy disk, a mechanically encoded device such as punch cards or raised structures in a groove having instructions recorded thereon, and any suitable combination of the foregoing. A computer-readable storage medium, as used herein, is not to be construed as being a transitory signal per se, such as a radio wave or other freely propagating electromagnetic wave, an electromagnetic wave propagating through a waveguide or other transmission medium (e.g., a light pulse passing through a fiber-optic cable), or an electrical signal transmitted through a wire.
The computer-readable program instructions described herein may be downloaded from a computer-readable storage medium to a respective computing/processing device, or to an external computer or external storage device via a network, such as the internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, fiber optic transmission, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. The network adapter card or network interface in each computing/processing device receives computer-readable program instructions from the network and forwards the computer-readable program instructions for storage in a computer-readable storage medium in the respective computing/processing device.
The computer program instructions for carrying out operations of the present disclosure may be assembler instructions, instruction set architecture (ISA) instructions, machine-dependent instructions, microcode, firmware instructions, state-setting data, or source code or object code written in any combination of one or more programming languages, including an object-oriented programming language such as Smalltalk or C++, and conventional procedural programming languages such as the "C" programming language or similar programming languages. The computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, electronic circuitry that can execute the computer-readable program instructions, such as a programmable logic circuit, a field-programmable gate array (FPGA), or a programmable logic array (PLA), implements aspects of the present disclosure by utilizing state information of the computer-readable program instructions to personalize the electronic circuitry.
Various aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions.
These computer-readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer-readable program instructions may also be stored in a computer-readable storage medium that can direct a computer, programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer-readable medium storing the instructions comprises an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer, other programmable apparatus or other devices implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the blocks may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special-purpose hardware-based systems that perform the specified functions or acts, or by combinations of special-purpose hardware and computer instructions. It is well known to those skilled in the art that implementation by hardware, implementation by software, and implementation by a combination of software and hardware are all equivalent.
Having described embodiments of the present disclosure, the foregoing description is intended to be exemplary, not exhaustive, and is not limited to the disclosed embodiments. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein was chosen to best explain the principles of the embodiments, the practical application, or the technical improvement over technologies found in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein. The scope of the present disclosure is defined by the appended claims.

Claims (10)

1. An interface element control method of a head-mounted display device, wherein the head-mounted display device comprises at least one camera for capturing a target image of a target area, and a display interface in which the target image is displayed;
the method comprises the following steps:
acquiring the target image captured by the at least one camera;
in a case where the image content of the target image comprises at least one wrist, determining, according to the target image, whether the at least one wrist meets a preset condition; wherein the preset condition includes: the wrist is in a static state;
if a result of the determination is positive, determining, in the display interface, a target display area centered on a target wrist; wherein the target wrist is one of the at least one wrist;
and displaying at least one interface element in the current scene in the target display area, so that after the interface element selected by the user is determined according to the gesture of the user, the information corresponding to the interface element is used as input information.
2. The method of claim 1, wherein the preset condition further comprises: the posture of the wrist is a preset posture.
3. The method of claim 2, wherein the preset posture comprises:
the inner side of the wrist facing the head-mounted display device; wherein the inner side of the wrist is on the same side as the center of the palm.
4. The method of claim 1, wherein the at least one wrist comprises a plurality of wrists;
before determining a target display area centered on a target wrist in the display interface, the method further comprises:
determining a preset wrist from among the plurality of wrists according to the target image; wherein the preset wrist is a left wrist or a right wrist;
and taking the preset wrist as the target wrist.
5. The method of claim 1, wherein the gesture is a gesture made by a target hand in the target image, the wrist of the target hand being either the target wrist or a non-target wrist.
6. The method of claim 1, wherein the at least one interface element comprises one or more of: windows, dialog boxes, menus, scroll bars, and graphical symbols.
7. The method of claim 1, wherein the at least one camera comprises two cameras, the two cameras being respectively disposed at two human eye locations.
8. An interface element control apparatus of a head-mounted display device, wherein the head-mounted display device comprises at least one camera for capturing a target image of a target area, and a display interface in which the target image is displayed;
the device comprises:
an acquisition module for acquiring the target image captured by the at least one camera;
the determining module is used for determining whether at least one wrist meets a preset condition according to the target image under the condition that the image content of the target image acquired by the acquiring module comprises at least one wrist; wherein the preset conditions include: the wrist is in a static state;
the processing module is used for determining a target display area which takes a target wrist as a center in the display interface if the determination result of the determination module is positive; wherein the target wrist is one of the at least one wrist;
and the display module is used for displaying at least one interface element in the current scene in the target display area obtained by the processing module so as to take the information corresponding to the interface element as input information after the interface element selected by the user is determined according to the gesture of the user.
9. A head-mounted display device comprising at least one camera for capturing a target image of a target area, and a display interface in which the target image is displayed;
the head-mounted display device further comprises a memory for storing a computer program and a processor; the processor is adapted to execute the computer program to implement the method according to any of claims 1-7.
10. A computer-readable storage medium, characterized in that the computer-readable storage medium has stored thereon a computer program which, when being executed by a processor, carries out the method according to any one of claims 1-7.
CN202110342716.XA 2021-03-30 2021-03-30 Interface element control method and device of head-mounted display equipment Pending CN113190110A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110342716.XA CN113190110A (en) 2021-03-30 2021-03-30 Interface element control method and device of head-mounted display equipment


Publications (1)

Publication Number Publication Date
CN113190110A (en)

Family

ID=76974418

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110342716.XA Pending CN113190110A (en) 2021-03-30 2021-03-30 Interface element control method and device of head-mounted display equipment

Country Status (1)

Country Link
CN (1) CN113190110A (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103858073A (en) * 2011-09-19 2014-06-11 视力移动技术有限公司 Touch free interface for augmented reality systems
CN106951153A (en) * 2017-02-21 2017-07-14 联想(北京)有限公司 A kind of display methods and electronic equipment
CN108073432A (en) * 2016-11-07 2018-05-25 亮风台(上海)信息科技有限公司 A kind of method for displaying user interface of head-mounted display apparatus
TW202101170A (en) * 2019-06-07 2021-01-01 美商菲絲博克科技有限公司 Corner-identifying gesture-driven user interface element gating for artificial reality systems


Similar Documents

Publication Publication Date Title
US11880956B2 (en) Image processing method and apparatus, and computer storage medium
CN109246463B (en) Method and device for displaying bullet screen
US20170160795A1 (en) Method and device for image rendering processing
US20170195664A1 (en) Three-dimensional viewing angle selecting method and apparatus
US20170163958A1 (en) Method and device for image rendering processing
EP2584403A2 (en) Multi-user interaction with handheld projectors
CN109840946B (en) Virtual object display method and device
KR20180013892A (en) Reactive animation for virtual reality
CN112105983B (en) Enhanced visual ability
TW202221380A (en) Obfuscated control interfaces for extended reality
EP2998845A1 (en) User interface based interaction method and related apparatus
US11521346B2 (en) Image processing apparatus, image processing method, and storage medium
CN113190109A (en) Input control method and device of head-mounted display equipment and head-mounted display equipment
EP3236336A1 (en) Virtual reality causal summary content
US20210158142A1 (en) Multi-task fusion neural network architecture
Li et al. Enhancing 3d applications using stereoscopic 3d and motion parallax
CN113269781A (en) Data generation method and device and electronic equipment
CN113190110A (en) Interface element control method and device of head-mounted display equipment
JP2020135290A (en) Image generation device, image generation method, image generation system, and program
US11224801B2 (en) Enhanced split-screen display via augmented reality
CN113141502B (en) Camera shooting control method and device of head-mounted display equipment and head-mounted display equipment
CN111290721A (en) Online interaction control method, system, electronic device and storage medium
JPWO2020031493A1 (en) Terminal device and control method of terminal device
US11762476B2 (en) Device and method for hand-based user interaction in VR and AR environments
Ryu et al. A Study on the Game Contents of Virtual Reality Based on the Smart Devices

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination