CN117555419A - Control method, control device, head-mounted display device and medium - Google Patents

Control method, control device, head-mounted display device and medium

Info

Publication number
CN117555419A
Authority
CN
China
Prior art keywords
virtual
virtual identifier
controlling
identifier
stable state
Prior art date
Legal status
Pending
Application number
CN202311361610.XA
Other languages
Chinese (zh)
Inventor
Li Yufeng (李昱锋)
Yang Mingming (杨明明)
Chi Xiaoyu (迟小羽)
Current Assignee
Goertek Techology Co Ltd
Original Assignee
Goertek Techology Co Ltd
Priority date
Filing date
Publication date
Application filed by Goertek Techology Co Ltd filed Critical Goertek Techology Co Ltd
Priority to CN202311361610.XA
Publication of CN117555419A

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011 - Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F 3/013 - Eye tracking input arrangements
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 13/00 - Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N 13/30 - Image reproducers
    • H04N 13/332 - Displays for viewing with the aid of special glasses or head-mounted displays [HMD]
    • H04N 13/344 - Displays for viewing with the aid of special glasses or head-mounted displays [HMD] with head-mounted left-right displays
    • H04N 13/366 - Image reproducers using viewer tracking
    • H04N 13/383 - Image reproducers using viewer tracking for tracking with gaze detection, i.e. detecting the lines of sight of the viewer's eyes
    • H04N 13/398 - Synchronisation thereof; Control thereof

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

An embodiment of the disclosure discloses a control method, a control apparatus, a head-mounted display device and a medium, wherein the control method comprises the following steps: creating a plurality of virtual screens in a display area of a first device, wherein each of the plurality of virtual screens runs a corresponding application; in the process of controlling a virtual identifier through a second device, if it is detected that the rotation angle of the second device within a preset time period is lower than a rotation angle threshold, controlling the second device to stop controlling the virtual identifier so that the virtual identifier is in a stable state, wherein the virtual identifier points to the display area; and, when the virtual identifier is in the stable state, if the virtual identifier collides with one of the plurality of virtual screens, controlling the virtual identifier according to current gaze information of the user.

Description

Control method, control device, head-mounted display device and medium
Technical Field
Embodiments of the disclosure relate to the technical field of wearable devices, and more particularly to a control method, a control apparatus, a head-mounted display device and a computer-readable storage medium.
Background
With the development of augmented reality technology, more and more application scenarios appear in AR products such as AR glasses. For example, multi-application support is implemented in AR glasses: multiple applications are opened in multiple virtual screens of the AR glasses, and the texture information of the multiple virtual screens is rendered and displayed in the corresponding canvases of the AR launcher.
However, AR glasses currently lack interaction methods, and controlling the ray of the AR glasses based on the pose information of a mobile phone is one solution. In practical use, however, it is difficult for a human hand to remain steady, which can cause the ray to jitter and reduce the accuracy of operations on applications.
Disclosure of Invention
Embodiments of the disclosure aim to provide a control method, a control apparatus, a head-mounted display device and a medium.
According to a first aspect of embodiments of the present disclosure, there is provided a control method, including:
creating a plurality of virtual screens in a display area of a first device; wherein each of the plurality of virtual screens runs a corresponding application;
in the process of controlling a virtual identifier through a second device, if it is detected that the rotation angle of the second device within a preset time period is lower than a rotation angle threshold, controlling the second device to stop controlling the virtual identifier so that the virtual identifier is in a stable state; wherein the virtual identifier points to the display area;
and when the virtual identifier is in the stable state, if the virtual identifier collides with one of the plurality of virtual screens, controlling the virtual identifier according to current gaze information of the user.
Optionally, the method further comprises:
and when the virtual identifier is in the stable state, if the virtual identifier collides with the graphical user interface control of the rendering engine, controlling the virtual identifier to point to the center position of the graphical user interface control.
Optionally, the current gaze information comprises a current gaze area,
and when the virtual identifier is in the stable state, if the virtual identifier collides with one of the plurality of virtual screens, controlling the virtual identifier according to the current gaze information of the user comprises:
if the virtual identifier collides with one of the plurality of virtual screens when the virtual identifier is in the stable state, determining, according to the current gaze area of the user, a first edge of a display screen of the first device closest to the current gaze area;
acquiring a first distance between the current gaze area and the first edge;
and controlling the virtual identifier according to the first distance.
Optionally, the controlling the virtual identifier according to the first distance comprises:
and according to the first distance, controlling the virtual identifier to rotate by the rotation angle threshold in a first direction.
Optionally, the method further comprises controlling the virtual identifier by the second device,
wherein the controlling the virtual identifier by the second device comprises:
receiving a rotation matrix transmitted by the second device;
and converting the rotation matrix to obtain the direction of the virtual identifier.
According to a second aspect of embodiments of the present disclosure, there is provided a control apparatus comprising:
a creation module, configured to create a plurality of virtual screens in a display area of a first device; wherein each of the plurality of virtual screens runs a corresponding application;
a first control module, configured to, in the process of controlling a virtual identifier through a second device, if it is detected that the rotation angle of the second device within a preset time period is lower than a rotation angle threshold, control the second device to stop controlling the virtual identifier so that the virtual identifier is in a stable state; wherein the virtual identifier points to the display area;
and a second control module, configured to, when the virtual identifier is in the stable state, control the virtual identifier according to current gaze information of the user if the virtual identifier collides with one of the virtual screens.
Optionally, the apparatus further comprises a third control module for:
and when the virtual identifier is in the stable state, if the virtual identifier collides with the graphical user interface control of the rendering engine, controlling the virtual identifier to point to the center position of the graphical user interface control.
Optionally, the current gaze information comprises a current gaze area,
and the second control module is specifically configured to: when the virtual identifier is in the stable state, if the virtual identifier collides with one of the plurality of virtual screens, determine, according to the current gaze area of the user, a first edge of a display screen of the first device closest to the current gaze area; acquire a first distance between the current gaze area and the first edge; and control the virtual identifier according to the first distance.
According to a third aspect of embodiments of the present disclosure, there is provided a head-mounted display device comprising:
a memory for storing executable computer instructions;
a processor for executing the control method according to the above first aspect, according to control of the executable computer instructions.
According to a fourth aspect of embodiments of the present disclosure, there is provided a computer-readable storage medium having stored thereon computer instructions which, when executed by a processor, perform the control method of the first aspect above.
According to the embodiments of the present disclosure, on the basis of supporting multiple applications, the first device controls the virtual identifier through the second device; in the process of controlling the virtual identifier through the second device, if it is detected that the rotation angle of the second device within a preset time period is lower than a rotation angle threshold, the second device is controlled to stop controlling the virtual identifier so that the virtual identifier is in a stable state; and when the virtual identifier is in the stable state, if the virtual identifier collides with one of the plurality of virtual screens, the virtual identifier is controlled according to the current gaze information of the user. That is, control is performed by combining the first device with the user's gaze information, so that jitter of the virtual identifier is avoided and the accuracy of the user's control of applications based on the virtual identifier is improved.
Other features of the present specification and its advantages will become apparent from the following detailed description of exemplary embodiments thereof, which proceeds with reference to the accompanying drawings.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments of the specification and together with the description, serve to explain the principles of the specification.
FIG. 1 is a schematic diagram of a hardware configuration of a control system according to an embodiment of the present disclosure;
FIG. 2 is a flow diagram of a control method according to an embodiment of the present disclosure;
FIG. 3 is a schematic diagram of a display screen of a first device according to an embodiment of the present disclosure;
FIG. 4 is a functional block diagram of a control device according to an embodiment of the present disclosure;
fig. 5 is a functional block diagram of a head mounted display device according to an embodiment of the present disclosure.
Detailed Description
Various exemplary embodiments of the present disclosure will now be described in detail with reference to the accompanying drawings. It should be noted that: the relative arrangement of parts and steps, numerical expressions and numerical values set forth in these embodiments do not limit the scope of the embodiments of the present disclosure unless specifically stated otherwise.
The following description of at least one exemplary embodiment is merely illustrative in nature and is in no way intended to limit the disclosure, its application, or uses.
Techniques, methods, and apparatus known to one of ordinary skill in the relevant art may not be discussed in detail, but are intended to be part of the specification where appropriate.
In all examples shown and discussed herein, any specific values should be construed as merely illustrative, and not a limitation. Thus, other examples of exemplary embodiments may have different values.
It should be noted that: like reference numerals and letters denote like items in the following figures, and thus once an item is defined in one figure, no further discussion thereof is necessary in subsequent figures.
< hardware configuration >
Fig. 1 is a schematic diagram of the hardware configuration of a control system that may be used to implement the control method of one embodiment. Fig. 1 shows a first device 100, a second device 200, and a network 300. The first device 100 may be connected to the network 300 and may also be connected to the second device 200 by means of communication such as Bluetooth. In one embodiment, the first device 100 is connected to the second device 200 only by means of communication such as Bluetooth. A plurality of servers 301, 302 may be provided in the network 300. The network 300 may be a wireless or wired communication network, a local area network or a wide area network, and may use near-field or far-field communication.
In one embodiment, as shown in fig. 1, a first device 100 may include a processor 101 and a memory 102. The first device 100 further comprises communication means 103, display means 104, user interface 105, camera means 106, audio/video interface 107, and sensor 108 etc. In addition, the first device 100 may further include a power management chip 109, a battery 110, and the like.
The processor 101 may be any of various processors. The memory 102 may store the underlying software, system software, application software, data, etc. required for the operation of the first device 100, and may include various forms of memory such as ROM, RAM, flash, etc. The communication device 103 may include, for example, a WiFi communication device, a Bluetooth communication device, 3G, 4G and 5G communication devices, and the like; the first device 100 may join a network by means of the communication device 103. The display device 104 may be a liquid crystal display, an OLED display, or the like. In one example, the display device 104 may be a touch screen, through which the user may perform input operations and, in addition, fingerprint identification and the like. The user interface 105 may include a USB interface, a Lightning interface, a keyboard, etc. The camera 106 may be a single camera or multiple cameras. The audio/video interface 107 may include, for example, a speaker interface, a microphone interface, and a video transmission interface such as HDMI. The sensor 108 may include, for example, a gyroscope, an accelerometer, a temperature sensor, a humidity sensor, a pressure sensor, and the like; for example, the pose information of the first device may be determined by the sensor. The power management chip 109 may be used to manage the power input to the first device 100 and may also manage the battery 110 to improve utilization efficiency. The battery 110 is, for example, a lithium-ion battery.
The first device 100 may be a head-mounted display device, for example AR (Augmented Reality) glasses, MR (Mixed Reality) glasses, and the like, to which the embodiments of the present disclosure are not limited. The various components shown in fig. 1 are merely illustrative. The first device 100 may include one or more of the components shown in fig. 1 and need not include all of them. The first device 100 shown in fig. 1 is merely illustrative and is in no way intended to limit the embodiments herein, their applications or uses.
In this embodiment, the memory 102 of the first device 100 is used to store program instructions for controlling the processor 101 to operate to execute a control method, and a skilled person can design the instructions according to the disclosed solution. How the instructions control the processor to operate is well known in the art and will not be described in detail here.
In one embodiment, as shown in fig. 1, the second device 200 may include a processor 201 and a memory 202. The second device 200 further comprises communication means 203, display means 204, user interface 205, camera means 206, audio/video interface 207, and sensor 208 etc. In addition, the second device 200 may further include a power management chip 209, a battery 210, and the like.
The second device 200 may be a mobile phone, a portable computer, a tablet computer, a palm computer, a wearable device, etc., which is not limited in the embodiments of the present disclosure. The various components shown in fig. 1 are merely illustrative. The second device 200 may include one or more of the components shown in fig. 1, and need not include all of the components in fig. 1. The second device 200 shown in fig. 1 is merely illustrative and is in no way intended to limit the embodiments herein, their applications or uses.
It should be understood that although fig. 1 shows only one first device 100 and one second device 200, this is not meant to limit their respective numbers; the control system may include a plurality of first devices 100 and second devices 200.
In the above description, a skilled person may design instructions according to the solutions provided by the present disclosure. How the instructions control the processor to operate is well known in the art and will not be described in detail here.
< method example >
Fig. 2 illustrates a control method of an embodiment of the present disclosure, which may be implemented by the first device 100 illustrated in fig. 1. As illustrated in fig. 2, the control method of this embodiment may include the following steps S2100 to S2300:
in step S2100, a plurality of virtual screens are created in a display area of a first device.
Each of the plurality of virtual screens runs a corresponding application.
In this embodiment, the desktop launcher of the first device is called the AR Launcher. When the first device boots, the AR Launcher starts automatically and the desktop environment of the first device begins to run; after the user puts on the first device, the first device displays the desktop environment in the display area. It will be appreciated that the desktop environment is typically a 3D desktop environment, and that the display area is within the field of view of the user.
In this embodiment, during the running process of the 3D desktop environment, the 3D desktop environment creates a plurality of virtual screens and a plurality of canvases, and establishes a correspondence between the virtual screens and the canvases. Typically, after creating multiple virtual screens, the first device will specify the applications running on each virtual screen. For example, virtual screen 1 may run a game application, the first device may display a rendering of texture information for virtual screen 1 in canvas 1, virtual screen 2 may run a cloud office application, the first device may display a rendering of texture information for virtual screen 2 in canvas 2, virtual screen 3 may run a video application, and the first device may display a rendering of texture information for virtual screen 3 in canvas 3.
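As a rough illustration of the screen-to-canvas bookkeeping described above, the following Python sketch pairs each application with one virtual screen and one canvas. This is a minimal sketch only: the VirtualScreen and Canvas types and the create_virtual_screens helper are hypothetical stand-ins, since the text does not specify an API for the 3D desktop environment.

```python
# Hypothetical sketch: VirtualScreen, Canvas and create_virtual_screens are
# illustrative stand-ins, not an API defined by the patent.
from dataclasses import dataclass

@dataclass
class VirtualScreen:
    screen_id: int
    app_name: str            # the single application running on this screen

@dataclass
class Canvas:
    canvas_id: int
    texture: object = None   # rendered texture of the paired virtual screen

def create_virtual_screens(apps):
    """Create one virtual screen per application and pair it with a canvas."""
    screen_to_canvas = {}
    for i, app in enumerate(apps, start=1):
        screen_to_canvas[i] = (VirtualScreen(i, app), Canvas(i))
    return screen_to_canvas

# e.g. the three screens described above
mapping = create_virtual_screens(["game", "cloud_office", "video"])
```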
After the above step S2100 is performed to create a plurality of virtual screens in the display area of the first device, the method proceeds to:
Step S2200, in the process of controlling the virtual identifier through the second device, if it is detected that the rotation angle of the second device within the preset time period is lower than the rotation angle threshold, controlling the second device to stop controlling the virtual identifier so that the virtual identifier is in a stable state.
The virtual identifier points to the display area of the first device and may characterize the current pose of the second device, and the user may control the first device based on the virtual identifier. For example, the virtual identifier may be a virtual ray. The virtual ray may be a straight line, which is not limited in this embodiment.
Before describing step S2200, it is first described how the second device controls the first device based on the virtual identifier. In an alternative embodiment, the control method of the embodiment of the disclosure further includes: receiving a rotation matrix transmitted by the second device; and converting the rotation matrix to obtain the direction of the virtual identifier.
The rotation matrix is used to characterize the pose change information of the second device, specifically the pose change of the second device within a preset time period. The preset time period may be set according to the actual application and scene; it may be, for example, 1 s, although other values are also possible, which is not limited in this embodiment.
Specifically, when the user controls the virtual identifier through the second device so as to interact with the first device, the second device may acquire IMU data collected by its internal inertial measurement unit (IMU), obtain a rotation matrix of the second device based on the IMU data, and send the rotation matrix to the first device. Moreover, because the display screen of the second device is generally held horizontally while operating the first device, whereas the display screen of the first device is generally vertical, the first device needs to perform a conversion on the rotation matrix after receiving it, so as to obtain the direction of the virtual identifier.
After the first device receives the rotation matrix transmitted by the second device, the rotation angle of the second device can be obtained from the rotation matrix.
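The following is a minimal numerical sketch of this conversion, assuming a standard 3x3 orthonormal rotation matrix, an assumed forward axis of +Z, and a fixed 90-degree rotation about X as the horizontal-to-vertical frame correction; the text fixes none of these conventions, so all three are assumptions.

```python
import numpy as np

# Assumed fixed correction between the phone's horizontal pose and the
# vertical display of the first device: a 90-degree rotation about X.
FRAME_CORRECTION = np.array([[1.0, 0.0, 0.0],
                             [0.0, 0.0, -1.0],
                             [0.0, 1.0, 0.0]])

def ray_direction(rotation_matrix: np.ndarray) -> np.ndarray:
    """Map the phone's rotation matrix to the virtual ray's direction."""
    forward = np.array([0.0, 0.0, 1.0])   # assumed forward axis
    return FRAME_CORRECTION @ rotation_matrix @ forward

def rotation_angle_deg(r_prev: np.ndarray, r_curr: np.ndarray) -> float:
    """Angle, in degrees, of the relative rotation between two poses."""
    r_rel = r_curr @ r_prev.T
    cos_theta = np.clip((np.trace(r_rel) - 1.0) / 2.0, -1.0, 1.0)
    return float(np.degrees(np.arccos(cos_theta)))
```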
The rotation angle threshold may be a value set according to the operating habits of a human hand, for example 5 degrees. Illustratively, if the rotation angle of the second device within 1 s is below 5 degrees, this indicates that the user's wrist has only jittered slightly within that second; even such slight wrist jitter, however, can cause the virtual identifier to shake slightly and reduce the accuracy of operations on the application.
Specifically, if the first device detects that the rotation angle of the second device within 1 s is lower than 5 degrees, it stops controlling the virtual identifier according to the received rotation matrix of the second device, thereby ensuring that the virtual identifier is in a stable state and preventing the virtual identifier from shaking due to slight jitter of the wrist.
It should be noted that when the virtual identifier is in the stable state, the first device stops controlling the virtual identifier according to the received rotation matrix of the second device, but it still obtains the rotation angle of the second device from the received rotation matrix and continuously checks whether the rotation angle within the preset time period is lower than the rotation angle threshold. If the rotation angle of the second device within the preset time period is greater than the rotation angle threshold, the first device resumes controlling the virtual identifier according to the received rotation matrix, that is, the virtual identifier is again controlled through the second device and exits the stable state. Otherwise, if the rotation angle within the preset time period remains lower than the rotation angle threshold, the first device continues to withhold control of the virtual identifier based on the rotation matrix, so that the virtual identifier remains in the stable state.
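A minimal sketch of this enter/exit logic, using the example values from the text (a 1 s window and a 5 degree threshold), might look as follows. The sliding-window accumulation of per-frame angles is an assumption; the text states only the detection condition.

```python
import time

class StableStateDetector:
    """Tracks whether the ray should ignore the phone's rotation matrix."""

    def __init__(self, window_s: float = 1.0, threshold_deg: float = 5.0):
        self.window_s = window_s
        self.threshold_deg = threshold_deg
        self.samples: list[tuple[float, float]] = []  # (timestamp, angle)
        self.stable = False

    def update(self, frame_angle_deg: float) -> bool:
        """Feed the latest per-frame rotation angle; return the stable flag."""
        now = time.monotonic()
        self.samples.append((now, frame_angle_deg))
        # keep only samples inside the preset time window
        self.samples = [(t, a) for (t, a) in self.samples
                        if now - t <= self.window_s]
        total = sum(a for (_, a) in self.samples)
        # below the threshold over the whole window: stop following the phone
        self.stable = total < self.threshold_deg
        return self.stable
```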
After the above step S2200 is performed, that is, once the second device has been controlled to stop controlling the virtual identifier so that the virtual identifier is in a stable state, the method proceeds to:
step S2300, if the virtual identifier collides with one of the plurality of virtual screens when the virtual identifier is in a stable state, controlling the virtual identifier according to current gaze information of the user.
It should be noted that the virtual identifier colliding with one of the plurality of virtual screens can also be understood as the virtual identifier intersecting the canvas corresponding to that virtual screen.
In this embodiment, if the virtual identifier collides with one of the plurality of virtual screens while the virtual identifier is in the stable state, eye tracking intervenes to control the virtual identifier, so as to realize accurate control.
The current gaze information includes at least a current gaze area; of course, the current gaze information may also include a current gaze direction. Specifically, the movement of the wearer's eyeballs can be captured by the camera module in the first device, so as to obtain the current gaze information of the wearer.
In an optional embodiment, controlling the virtual identifier according to the current gaze information of the user in step S2300, when the virtual identifier in the stable state collides with one of the virtual screens, may further include the following steps S2310 to S2330:
step S2310, if the virtual identifier collides with one of the virtual screens when the virtual identifier is in a stable state, determining a first edge of the display screen of the first device closest to the current gaze area according to the current gaze area of the user.
The first edge of the display screen of the first device closest to the current gaze area may be one of the upper, lower, left and right edges of the display screen. In general, referring to fig. 3, the coordinate system of the first device may have its X-axis and Y-axis established with the center of the display screen as the origin O, with the Z-axis perpendicular to the display screen of the first device. Thus, the upper and lower edges of the display screen are parallel to the X-axis, and the left and right edges are parallel to the Y-axis.
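Under this coordinate system, the nearest-edge determination of step S2310 can be sketched as below; the display width and height in pixels are assumed parameters, and the gaze area is reduced to a single point for simplicity.

```python
def nearest_edge(gaze_x: float, gaze_y: float,
                 width: int, height: int) -> tuple[str, float]:
    """Return the display edge closest to the gaze point and its distance."""
    half_w, half_h = width / 2.0, height / 2.0
    distances = {
        "left":  gaze_x + half_w,   # distance to the left edge
        "right": half_w - gaze_x,   # distance to the right edge
        "lower": gaze_y + half_h,   # distance to the lower edge
        "upper": half_h - gaze_y,   # distance to the upper edge
    }
    edge = min(distances, key=distances.get)
    return edge, distances[edge]

# e.g. a gaze point 200 pixels above the lower edge of a 1920x1080 display
print(nearest_edge(0.0, -340.0, 1920, 1080))   # -> ('lower', 200.0)
```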
In example 1, taking a virtual ray as the virtual identifier, if the virtual ray in the stable state collides with the virtual screen 3, that is, the virtual ray has an intersection point with the canvas 3, the first device records the current gaze area of the wearer at the moment the virtual ray entered the stable state and determines that the current gaze area is closest to the lower edge of the display screen, that is, the user's line of sight is directed toward the lower edge of the display screen.
In example 2, under the same conditions, the first device records the current gaze area of the wearer when the virtual ray entered the stable state and determines that it is closest to the upper edge of the display screen, that is, the user's line of sight is directed toward the upper edge of the display screen.
In example 3, under the same conditions, the first device determines that the current gaze area is closest to the left edge of the display screen, that is, the user's line of sight is directed toward the left edge of the display screen.
In example 4, under the same conditions, the first device determines that the current gaze area is closest to the right edge of the display screen, that is, the user's line of sight is directed toward the right edge of the display screen.
Step S2320, acquiring a first distance between the current gaze area and the first edge.
Continuing with example 1 above, the first device determines that the current gaze area is 200 pixels from the lower edge of the display screen.
Continuing with example 2 above, the first device determines that the current gaze area is 300 pixels from the upper edge of the display screen.
Continuing with example 3 above, the first device determines that the current gaze area is 200 pixels from the left edge of the display screen.
Continuing with example 4 above, the first device determines that the current gaze area is 300 pixels from the right edge of the display screen.
Step S2330, controlling the virtual identifier according to the first distance.
Optionally, controlling the virtual identifier according to the first distance in step S2330 may further include: controlling, according to the first distance, the virtual identifier to rotate by the rotation angle threshold in a first direction.
Movement of the tracked gaze along the Y-axis is typically mapped to rotation of the virtual identifier about the X-axis, and movement of the tracked gaze along the X-axis is typically mapped to rotation of the virtual identifier about the Y-axis.
Continuing with example 1, since the first device determines that the current gaze area is 200 pixels from the lower edge of the display screen, the first device may map this 200-pixel offset to a rotation of the virtual identifier by 5 degrees along the negative X-axis direction.
Continuing with example 2, since the current gaze area is 300 pixels from the upper edge of the display screen, the first device may map this 300-pixel offset to a rotation of the virtual identifier by 5 degrees along the positive X-axis direction.
Continuing with example 3, since the current gaze area is 200 pixels from the left edge of the display screen, the first device may map this 200-pixel offset to a rotation of the virtual identifier by 5 degrees along the negative Y-axis direction.
Continuing with example 4, since the current gaze area is 300 pixels from the right edge of the display screen, the first device may map this 300-pixel offset to a rotation of the virtual identifier by 5 degrees along the positive Y-axis direction.
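Putting examples 1 to 4 together, the edge-to-rotation mapping of step S2330 might be sketched as follows; the sign conventions are taken from the examples above, and the dictionary-based dispatch is an illustrative assumption.

```python
ROTATION_THRESHOLD_DEG = 5.0   # the rotation angle threshold from the text

# nearest edge -> (rotation axis, signed angle), following examples 1-4
EDGE_TO_ROTATION = {
    "lower": ("x", -ROTATION_THRESHOLD_DEG),   # example 1
    "upper": ("x", +ROTATION_THRESHOLD_DEG),   # example 2
    "left":  ("y", -ROTATION_THRESHOLD_DEG),   # example 3
    "right": ("y", +ROTATION_THRESHOLD_DEG),   # example 4
}

def gaze_rotation(edge: str, distance_px: float) -> dict:
    """Map the nearest edge and its pixel distance to a rotation command."""
    axis, angle = EDGE_TO_ROTATION[edge]
    # the ray always rotates by the fixed threshold; the distance is kept
    # so a caller could gate or scale the rotation if desired
    return {"axis": axis, "angle_deg": angle, "gaze_distance_px": distance_px}

print(gaze_rotation("lower", 200.0))   # 5 degrees along the negative X-axis
```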
According to the embodiments of the present disclosure, on the basis of supporting multiple applications, the first device controls the virtual identifier through the second device; in the process of controlling the virtual identifier through the second device, if it is detected that the rotation angle of the second device within a preset time period is lower than a rotation angle threshold, the second device is controlled to stop controlling the virtual identifier so that the virtual identifier is in a stable state; and when the virtual identifier is in the stable state, if the virtual identifier collides with one of the plurality of virtual screens, the virtual identifier is controlled according to the current gaze information of the user. That is, control is performed by combining the first device with the user's gaze information, so that jitter of the virtual identifier is avoided and the accuracy of the user's control of applications based on the virtual identifier is improved.
In one embodiment, the control method of the embodiment of the present disclosure further includes: and under the condition that the virtual identifier is in a stable state, if the virtual identifier collides with a Graphical User Interface (GUI) control of a rendering engine, controlling the virtual identifier to point to the center position of the GUI control.
The rendering engine is typically the Unity engine.
Taking a virtual ray as the virtual identifier as an example, if the virtual ray is in the stable state and collides with a Button in the Unity engine, the virtual ray automatically snaps to the center of the Button, achieving the effect of the virtual ray being adsorbed onto the Button.
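A minimal sketch of this snap-to-center behaviour follows. The Control type and the rectangle hit test are hypothetical stand-ins for the rendering engine's own GUI API (for example a Unity Button), which the text does not spell out.

```python
from dataclasses import dataclass

@dataclass
class Control:
    """A GUI control described by its center and size, in screen units."""
    cx: float
    cy: float
    width: float
    height: float

    def contains(self, px: float, py: float) -> bool:
        return (abs(px - self.cx) <= self.width / 2.0 and
                abs(py - self.cy) <= self.height / 2.0)

def snap_ray(hit_x: float, hit_y: float,
             control: Control) -> tuple[float, float]:
    """If the ray's hit point lands on the control, snap it to the center."""
    if control.contains(hit_x, hit_y):
        return control.cx, control.cy   # ray is 'adsorbed' by the control
    return hit_x, hit_y

button = Control(cx=100.0, cy=50.0, width=80.0, height=30.0)
print(snap_ray(95.0, 55.0, button))    # -> (100.0, 50.0)
```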
According to this embodiment, even when many applications are open, jitter of the virtual identifier can be avoided, and accurate control of both the GUI controls in the scene and the applications in the virtual screens can be realized.
< device example >
Fig. 4 is a schematic diagram of a control apparatus according to an embodiment. Referring to fig. 4, the control apparatus 400 includes a creation module 410, a first control module 420, and a second control module 430.
A creation module 410, configured to create a plurality of virtual screens in a display area of the first device; wherein each of the plurality of virtual screens runs a corresponding application;
the first control module 420 is configured to, in the process of controlling the virtual identifier through the second device, if it is detected that the rotation angle of the second device within a preset time period is lower than the rotation angle threshold, control the second device to stop controlling the virtual identifier so that the virtual identifier is in a stable state; wherein the virtual identifier points to the display area;
and the second control module 430 is configured to control the virtual identifier according to current gaze information of the user if the virtual identifier collides with one of the virtual screens when the virtual identifier is in a stable state.
In one embodiment, the apparatus further comprises a third control module (not shown in the figure) configured to: when the virtual identifier is in the stable state, if the virtual identifier collides with the graphical user interface control of the rendering engine, control the virtual identifier to point to the center position of the graphical user interface control.
In one embodiment, the current gaze information includes a current gaze area.
The second control module 430 is specifically configured to: when the virtual identifier is in the stable state, if the virtual identifier collides with one of the plurality of virtual screens, determine, according to the current gaze area of the user, a first edge of the display screen of the first device closest to the current gaze area; acquire a first distance between the current gaze area and the first edge; and control the virtual identifier according to the first distance.
In one embodiment, the second control module 430 is specifically configured to control the virtual identifier to rotate by the rotation angle threshold in a first direction according to the first distance.
In one embodiment, the first control module 420 is further configured to receive a rotation matrix transmitted by the second device; and converting the rotation matrix to obtain the direction of the virtual identifier.
According to the embodiments of the present disclosure, on the basis of supporting multiple applications, the first device controls the virtual identifier through the second device; in the process of controlling the virtual identifier through the second device, if it is detected that the rotation angle of the second device within a preset time period is lower than a rotation angle threshold, the second device is controlled to stop controlling the virtual identifier so that the virtual identifier is in a stable state; and when the virtual identifier is in the stable state, if the virtual identifier collides with one of the plurality of virtual screens, the virtual identifier is controlled according to the current gaze information of the user. That is, control is performed by combining the first device with the user's gaze information, so that jitter of the virtual identifier is avoided and the accuracy of the user's control of applications based on the virtual identifier is improved.
< device example >
Fig. 5 is a schematic diagram of a hardware structure of a head-mounted display device according to one embodiment. As shown in fig. 5, the head mounted display device 500 includes a processor 510 and a memory 520.
The memory 520 may be used to store executable computer instructions.
The processor 510 may be configured to execute a control method according to an embodiment of the method of the present disclosure, according to control of the executable computer instructions.
The head-mounted display device 500 may be the first device 100 shown in fig. 1, or may be a device having another hardware configuration, and is not limited thereto.
In further embodiments, the head mounted display device 500 may include the above control apparatus 400.
In one embodiment, the modules of the control apparatus 400 above may be implemented by the processor 510 executing computer instructions stored in the memory 520.
< computer-readable storage Medium >
The disclosed embodiments also provide a computer-readable storage medium having stored thereon computer instructions that, when executed by a processor, perform the control methods provided by the disclosed embodiments.
The present disclosure may be a system, method, and/or computer program product. The computer program product may include a computer readable storage medium having computer readable program instructions embodied thereon for causing a processor to implement aspects of the present disclosure.
The computer readable storage medium may be a tangible device that can hold and store instructions for use by an instruction execution device. The computer readable storage medium may be, for example, but is not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer-readable storage medium include: a portable computer disk, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), static random access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital versatile disc (DVD), a memory stick, a floppy disk, a mechanical encoding device such as a punch card or a raised structure in a groove having instructions stored thereon, and any suitable combination of the foregoing. Computer-readable storage media, as used herein, are not to be construed as transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through waveguides or other transmission media (e.g., light pulses through fiber optic cables), or electrical signals transmitted through wires.
The computer readable program instructions described herein may be downloaded from a computer readable storage medium to a respective computing/processing device or to an external computer or external storage device over a network, such as the internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, fiber optic transmissions, wireless transmissions, routers, firewalls, switches, gateway computers and/or edge servers. The network interface card or network interface in each computing/processing device receives computer readable program instructions from the network and forwards the computer readable program instructions for storage in a computer readable storage medium in the respective computing/processing device.
Computer program instructions for performing the operations of the present disclosure can be assembly instructions, Instruction Set Architecture (ISA) instructions, machine-related instructions, microcode, firmware instructions, state setting data, or source or object code written in any combination of one or more programming languages, including object oriented programming languages such as Smalltalk, C++ or the like, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The computer readable program instructions may be executed entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, aspects of the present disclosure are implemented by personalizing electronic circuitry, such as programmable logic circuitry, Field Programmable Gate Arrays (FPGAs), or Programmable Logic Arrays (PLAs), with state information of computer readable program instructions, which can execute the computer readable program instructions.
Various aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions.
These computer readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer readable program instructions may also be stored in a computer readable storage medium that can direct a computer, programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer readable medium having the instructions stored therein includes an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer, other programmable apparatus or other devices implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions. It is well known to those skilled in the art that implementation by hardware, implementation by software, and implementation by a combination of software and hardware are all equivalent.
The foregoing description of the embodiments of the present disclosure has been presented for purposes of illustration and description, and is not intended to be exhaustive or limited to the embodiments disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the various embodiments described. The terminology used herein was chosen in order to best explain the principles of the embodiments, the practical application, or the technical improvements in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein. The scope of the present disclosure is defined by the appended claims.

Claims (10)

1. A control method, characterized in that the method comprises:
creating a plurality of virtual screens in a display area of a first device; wherein each of the plurality of virtual screens runs a corresponding application;
in the process of controlling a virtual identifier through a second device, if it is detected that the rotation angle of the second device within a preset time period is lower than a rotation angle threshold, controlling the second device to stop controlling the virtual identifier so that the virtual identifier is in a stable state; wherein the virtual identifier points to the display area;
and when the virtual identifier is in the stable state, if the virtual identifier collides with one of the plurality of virtual screens, controlling the virtual identifier according to current gaze information of the user.
2. The method according to claim 1, wherein the method further comprises:
and when the virtual identifier is in the stable state, if the virtual identifier collides with the graphical user interface control of the rendering engine, controlling the virtual identifier to point to the center position of the graphical user interface control.
3. The method of claim 1, wherein the current gaze information comprises a current gaze area,
and when the virtual identifier is in the stable state, if the virtual identifier collides with one of the plurality of virtual screens, controlling the virtual identifier according to the current gaze information of the user comprises:
if the virtual identifier collides with one of the plurality of virtual screens when the virtual identifier is in the stable state, determining, according to the current gaze area of the user, a first edge of a display screen of the first device closest to the current gaze area;
acquiring a first distance between the current gaze area and the first edge;
and controlling the virtual identifier according to the first distance.
4. A method according to claim 3, wherein said controlling said virtual identification according to said first distance comprises:
and according to the first distance, controlling the virtual identifier to rotate by the rotation angle threshold in a first direction.
5. The method of claim 1, further comprising controlling the virtual identifier by the second device,
the controlling the virtual identifier by the second device includes:
receiving a rotation matrix transmitted by the second device;
and converting the rotation matrix to obtain the direction of the virtual identifier.
6. A control apparatus, characterized in that the apparatus comprises:
the creation module is configured to create a plurality of virtual screens in a display area of a first device; wherein each of the plurality of virtual screens runs a corresponding application;
the first control module is configured to, in the process of controlling a virtual identifier through a second device, if it is detected that the rotation angle of the second device within a preset time period is lower than a rotation angle threshold, control the second device to stop controlling the virtual identifier so that the virtual identifier is in a stable state; wherein the virtual identifier points to the display area;
and the second control module is configured to, when the virtual identifier is in the stable state, control the virtual identifier according to the current gaze information of the user if the virtual identifier collides with one of the virtual screens.
7. The apparatus of claim 6, further comprising a third control module to:
and under the condition that the virtual identifier is in a stable state, if the virtual identifier collides with the graphical user interface control of the rendering engine, controlling the virtual identifier to point to the central position of the graphical user interface control.
8. The apparatus of claim 6, wherein the current gaze information comprises a current gaze area,
the second control module is specifically configured to: when the virtual identifier is in the stable state, if the virtual identifier collides with one of the plurality of virtual screens, determine, according to the current gaze area of the user, a first edge of a display screen of the first device closest to the current gaze area; acquire a first distance between the current gaze area and the first edge; and control the virtual identifier according to the first distance.
9. A head-mounted display device, the head-mounted display device comprising:
a memory for storing executable computer instructions;
a processor for executing the control method according to any one of claims 1-5, according to control of the executable computer instructions.
10. A computer readable storage medium having stored thereon computer instructions which, when executed by a processor, perform the control method of any of claims 1-5.
CN202311361610.XA 2023-10-19 2023-10-19 Control method, control device, head-mounted display device and medium Pending CN117555419A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311361610.XA CN117555419A (en) 2023-10-19 2023-10-19 Control method, control device, head-mounted display device and medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311361610.XA CN117555419A (en) 2023-10-19 2023-10-19 Control method, control device, head-mounted display device and medium

Publications (1)

Publication Number Publication Date
CN117555419A 2024-02-13

Family

ID=89813652

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311361610.XA Pending CN117555419A (en) 2023-10-19 2023-10-19 Control method, control device, head-mounted display device and medium

Country Status (1)

Country Link
CN (1) CN117555419A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination