WO2023207226A1 - Operation interface generation method, control method and device - Google Patents

Operation interface generation method, control method and device

Info

Publication number
WO2023207226A1
Authority
WO
WIPO (PCT)
Prior art keywords
user
information
operation interface
preset
operator
Prior art date
Application number
PCT/CN2023/071748
Other languages
English (en)
French (fr)
Inventor
胡安妮
Original Assignee
中兴通讯股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 中兴通讯股份有限公司 filed Critical 中兴通讯股份有限公司
Publication of WO2023207226A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/017: Gesture based interaction, e.g. based on a set of recognized hand gestures
    • G06F 3/048: Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0481: Interaction techniques based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F 3/0482: Interaction with lists of selectable items, e.g. menus
    • G06F 3/0484: Interaction techniques for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F 3/0487: Interaction techniques using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F 3/0488: Interaction techniques using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G06F 3/04883: Interaction techniques for inputting data by handwriting, e.g. gesture or text

Definitions

  • The present disclosure relates to the field of virtual reality technology, and specifically to an operation interface generation method, control method, and device.
  • Gesture-based interaction can free users from the limitations of the screen and provide a natural, human-centered interaction method, allowing users to directly control virtual objects in the virtual world.
  • Gesture interaction methods include gesture extension line operation and direct gesture interaction.
  • With the gesture extension line operation method, if the user increases the operating angle of the hand, the selection range for distant objects grows proportionally, reducing the user's operating precision for distant objects; with direct gesture interaction, the efficiency of operating virtual objects is reduced because virtual objects at different positions in the virtual world differ in how difficult they are to operate.
  • The present disclosure provides an operation interface generation method, control method, and device.
  • Embodiments of the present disclosure provide a method for generating an operation interface.
  • The method includes: obtaining movement information of the user's operating hand; determining a three-dimensional movement path of the operating hand based on the movement information; and mapping the three-dimensional movement path onto a preset arc mesh model to generate an operation interface, where the preset arc mesh model is a model determined based on a preset wiring method and a preset angle.
  • Embodiments of the present disclosure provide a method for controlling an operation interface, where the operation interface is generated by any of the operation interface generation methods in the embodiments of the present disclosure. The method includes: obtaining the user's operation information in the operation interface; and determining, based on the operation information, the user's way of controlling a virtual object.
  • Embodiments of the present disclosure provide a device for generating an operation interface, including: a first acquisition module configured to obtain movement information of the user's operating hand; a path determination module configured to determine a three-dimensional movement path of the operating hand based on the movement information; and a generation module configured to map the three-dimensional movement path onto a preset arc mesh model to generate an operation interface, where the preset arc mesh model is a model determined based on a preset wiring method and a preset angle.
  • Embodiments of the present disclosure provide a control device for an operation interface, which operates on an interface generated by any of the operation interface generation methods in the embodiments of the present disclosure.
  • The device includes: a second acquisition module configured to obtain the user's operation information in the operation interface; and a control module configured to determine, based on the operation information, the user's way of controlling a virtual object.
  • Embodiments of the present disclosure provide an electronic device, including: one or more processors; and a memory on which one or more programs are stored.
  • When the one or more programs are executed by the one or more processors, the one or more processors implement any of the operation interface generation methods, or any of the operation interface control methods, in the embodiments of the present disclosure.
  • Embodiments of the present disclosure provide a computer-readable storage medium.
  • The readable storage medium stores a computer program.
  • When the computer program is executed by a processor, it implements any of the operation interface generation methods, or any of the operation interface control methods, in the embodiments of the present disclosure.
  • FIG. 1 shows a schematic flowchart of a method for generating an operation interface provided by an embodiment of the present disclosure.
  • FIG. 2 shows a schematic diagram of an operation interface provided by an embodiment of the present disclosure.
  • FIG. 3 shows a schematic diagram of a preset arc mesh model provided by an embodiment of the present disclosure.
  • FIG. 4 shows a schematic diagram of a preset arc mesh model construction method provided by an embodiment of the present disclosure.
  • FIG. 5 shows a schematic diagram of the user operation method in the process of building a preset arc mesh model according to an embodiment of the present disclosure.
  • FIG. 6 shows a schematic diagram of the user operation method in the process of building a preset arc mesh model provided by yet another embodiment of the present disclosure.
  • FIG. 7 shows a schematic diagram of a three-dimensional operation interface provided by an embodiment of the present disclosure.
  • FIG. 8 shows a schematic flowchart of an operation interface control method provided by an embodiment of the present disclosure.
  • FIG. 9 shows a block diagram of a device for generating an operation interface provided by an embodiment of the present disclosure.
  • FIG. 10 shows a block diagram of a control device for an operation interface provided by an embodiment of the present disclosure.
  • FIG. 11 shows a schematic structural diagram of an electronic device provided by an embodiment of the present disclosure.
  • FIG. 12 shows a schematic flowchart of a working method of an electronic device provided by an embodiment of the present disclosure.
  • FIG. 13 shows a schematic diagram of a three-dimensional operation interface provided by yet another embodiment of the present disclosure.
  • FIG. 14 shows a schematic diagram of a three-dimensional operation interface provided by still another embodiment of the present disclosure.
  • FIG. 15 shows a structural diagram of an exemplary hardware architecture of a computing device capable of implementing a method of generating an operation interface or controlling an operation interface according to an embodiment of the present disclosure.
  • MR technology presents virtual scene information in real scenes.
  • MR technology can establish an interactive feedback loop between the real world, the virtual world, and the user to enhance the realism of the user's experience.
  • MR technology can include: Augmented Reality (AR) technology and Virtual Reality (VR) technology.
  • VR technology is a system simulation that integrates multi-source information with interactive three-dimensional dynamic scenes and entity behaviors, allowing users to immerse themselves in a virtual environment.
  • AR technology calculates the position and angle of camera images in real time and adds corresponding images, videos, and three-dimensional models; AR technology can skillfully integrate virtual information with the real world.
  • Currently, common gesture interaction methods include: 1) selection and operation of distant objects based on a gesture extension line.
  • The gesture extension line method has very low operating precision for distant objects; for example, a small change in hand angle can cause a large change in where a distant object is selected. Moreover, this method can leave the user's operation instructions out of reach, degrading the user experience. 2) Direct gesture interaction with objects in three-dimensional space; however, existing AR devices still display the operation interface in three-dimensional space in a flat manner, so the display area shown to the user and the user's operating area are positioned inconsistently, which reduces the reachability of user gestures in space and prevents users from operating virtual objects conveniently and quickly.
  • FIG. 1 shows a schematic flowchart of a method for generating an operation interface provided by an embodiment of the present disclosure.
  • the method for generating an operation interface can be applied to a device for generating an operation interface.
  • the method for generating an operation interface in an embodiment of the present disclosure may include the following steps.
  • Step S101: Obtain movement information of the user's operating hand.
  • Step S102: Determine the three-dimensional movement path of the operating hand based on the movement information of the operating hand.
  • Step S103: Map the three-dimensional movement path of the operating hand onto the preset arc mesh model to generate an operation interface.
  • The preset arc mesh model is a model determined based on a preset wiring method and a preset angle.
  • By determining the three-dimensional movement path of the operating hand from the acquired movement information, the specific movement path of the user's operating hand can be accurately known, which facilitates establishing a personalized operation method for the user; the three-dimensional movement path is then mapped onto a preset arc mesh model to generate an operation interface.
  • Because the preset arc mesh model is a model determined based on the preset wiring method and preset angle, the user can accurately control virtual objects through a three-dimensional operation interface, improving the precision of operating distant virtual objects; at the same time, the difficulty of operating virtual objects is reduced and the efficiency improved, so that the user obtains the best experience.
  • The operation interface includes at least one control area; the control area is used to display multiple control elements that the user can manipulate.
  • The control elements include at least one of operation buttons, an operation keyboard, letter buttons, and emoticon buttons.
  • FIG. 2 shows a schematic diagram of an operation interface provided by an embodiment of the present disclosure.
  • The operation interface includes but is not limited to the following control areas: a keyboard 201 and/or operation buttons 202.
  • The user operates and controls objects in the virtual world by operating the keyboard 201 and/or the operation buttons 202.
  • The operation buttons 202 include four letter buttons (e.g., buttons "A", "B", "C", and "D") and a black button.
  • The black button can be used to control the direction of a virtual object and can improve the precision of that directional control.
  • The operation interface shown in FIG. 2 only displays the different control elements on a plane (i.e., in two-dimensional space).
  • When the user uses an AR, VR, or MR device, virtual objects must be manipulated in three-dimensional space, and control elements displayed only in two-dimensional space are not conducive to the user's manual operation.
  • The operation interface therefore needs to be mapped into three-dimensional space to facilitate the user's operation.
  • Before step S103, the preset arc mesh model can be determined based on the preset wiring method and the preset angle; the three-dimensional movement path of the operating hand in step S103 is then mapped onto the preset arc mesh model to generate an operation interface, so that the interface of FIG. 2 can be displayed three-dimensionally within the preset arc mesh model.
  • The preset wiring methods include: dividing and wiring the sphere along lines of longitude and latitude, or dividing and wiring the sphere with polygons.
  • A polygon is a planar shape constructed by connecting three or more line segments end to end.
  • For example, a polygon may be a triangle, a quadrilateral, ..., or an N-gon, where N is an integer greater than or equal to 3.
  • FIG. 3 shows a schematic diagram of a preset arc mesh model provided by an embodiment of the present disclosure.
  • As shown in FIG. 3, sphere 3-1 is a latitude-longitude sphere wired along lines of longitude and latitude, sphere 3-2 is an angular sphere wired with triangles, and sphere 3-3 is a polygonal sphere wired with quadrilaterals.
  • The preset arc mesh model can be the entire sphere or a part of it, chosen to make the user's spatial operation convenient.
  • The number of wires in these mesh spheres is proportional to the fitting degree of user operations, where the fitting degree is the degree of fit between the three-dimensional movement path of the user's operating hand and the operation interface.
  • In some implementations, before obtaining the movement information of the user's operating hand in step S101, the method further includes: obtaining attribute information of the user's operating hand; and establishing the preset arc mesh model based on that attribute information.
  • The attribute information of the user's operating hand includes at least one of the user's arm length, palm attribute information, and attribute information of multiple fingers.
  • Because different users' operating hands have different attribute information (for example, users differ in arm length, palm attributes, and, correspondingly, finger lengths), the resulting preset arc mesh model also varies from person to person.
  • Building a preset arc mesh model matched to each user from multi-dimensional attribute information of that user's operating hand makes the user's operation information in three-dimensional space more personalized, makes the model better fit different users' operating habits, and makes operation in three-dimensional space easier, so that the user can control objects in the virtual world more accurately and the user experience is improved.
  • In some implementations, the attribute information of the user's operating hand includes the user's arm attribute information and the position information of the user's shoulder joint.
  • Establishing the preset arc mesh model based on the attribute information of the operating hand includes: constructing the preset arc mesh model based on the position information of the user's shoulder joint and the user's arm attribute information.
  • FIG. 4 shows a schematic diagram of a preset arc mesh model construction method provided by an embodiment of the present disclosure.
  • The left half of FIG. 4 shows the position information of the user's shoulder joint (e.g., the coordinate values (x, y, z) in three-dimensional space, where x, y, and z are real numbers), along with the user's upper-arm length L1 and forearm length L2, where L1 and L2 are real numbers greater than 1.
  • This attribute information concretely characterizes the user's operating hand (e.g., the right or left hand), facilitating the subsequent construction of the preset arc mesh model from it.
  • The right half of FIG. 4 shows a mesh sphere constructed with the position coordinate point (x, y, z) of the user's shoulder joint as the center and the combined length of the upper arm and forearm (i.e., L1 + L2) as the radius.
  • The sphere can be wired with any of the preset wiring methods, and all or part of its arc mesh can then be used as the preset arc mesh model.
  • Constructing the preset arc mesh model from the position information of the user's shoulder joint and the user's arm attribute information reflects the personalized needs of different users, making the model better fit the user's needs and improving the user experience.
  • In some implementations, before obtaining the movement information of the user's operating hand in step S101, the method further includes: obtaining attribute information of the user's operating hand and the user's visual range information; and establishing the preset arc mesh model based on the attribute information of the operating hand and the user's visual range information.
  • the user's visual range information includes: the visual range of the user's eyes.
  • The first mesh sphere, constructed from the attribute information of the user's operating hand, is constrained and matched against the visible range of the user's eyes, yielding a portion of the three-dimensional arc surface of that sphere, which is then used as the preset arc mesh model. This plans more precisely the spatial range the user can operate in, making the preset arc mesh model better match the user's operating needs and more convenient to operate and use.
  • In some implementations, the attribute information of the user's operating hand includes the user's arm attribute information and the position information of the user's shoulder joint; establishing the preset arc mesh model based on the attribute information of the operating hand and the user's visual range information includes: constructing a mesh sphere based on the position information of the user's shoulder joint and the user's arm attribute information, where the mesh sphere is a sphere determined based on the preset wiring method; determining the preset angle based on the user's visual range information; and determining the preset arc mesh model based on the mesh sphere and the preset angle.
  • The preset angle can be used to characterize the user's viewing angle. For example, with the head held still and a ray extending forward from the user's eyes defined as 0 degrees, the user's visible angular range may be from 50 degrees upward to 70 degrees downward; as another example, with the head held still and the line of sight extended to the user's left and right, the visible angle may range from 90 degrees left to 90 degrees right. Representing the user's viewing angle stereoscopically clarifies the visual range the user can see, making operation easier.
  • Constructing the mesh sphere with the position of the user's shoulder joint as the center and the user's arm length as the radius makes it convenient for the user to operate the operating elements on the sphere based on arm length.
  • Because the user's visible range is limited, the preset angle is determined from the user's visual range information (such as the user's head coordinates and eye coordinates), and part of the sphere's arc mesh is intercepted according to that angle to obtain the optimal control-area model, i.e., the preset arc mesh model, so that the model better matches the range the user can control and the user's operating accuracy is improved.
  • In some implementations, determining the three-dimensional movement path of the operating hand based on its movement information in step S102 includes: determining boundary displacement information and boundary angle information based on the movement information of the operating hand; and determining the three-dimensional movement path of the operating hand based on the boundary displacement information and boundary angle information.
  • For example, at least one activity range can be determined from the movement information of the operating hand (the at least one activity range including: a first activity range determined from the palm's displacement information along a closed route, and/or a second activity range determined from the palm's displacement information over a preset number of swings).
  • By determining the boundary displacement information and boundary angle information from the at least one activity range, the farthest spatial positions the user can reach (such as the farthest displacement and/or the maximum angle through which the operating hand can rotate) can be obtained, facilitating determination of the movement path of the user's operating hand; the three-dimensional movement path is then determined from the boundary displacement and boundary angle information, improving the accuracy of the user's operation in three-dimensional space.
  • The movement information of the operating hand includes: displacement information of the palm along a closed route, and/or displacement information of the palm over a preset number of swings (e.g., at least two swings in opposite directions).
  • FIG. 5 shows a schematic diagram of the user operation method in the process of building a preset arc mesh model provided by an embodiment of the present disclosure.
  • As shown in FIG. 5, the user's operating hand waves in a circular motion in space, producing a closed route in space.
  • This closed route reflects the displacement information of the user's palm along the closed route, from which the first activity range is determined.
  • FIG. 6 shows a schematic diagram of the user operation method in the process of building a preset arc mesh model provided by yet another embodiment of the present disclosure.
  • As shown in FIG. 6, the user's operating hand swings at least twice in opposite directions in space, yielding the displacement information of the user's palm during the swings, from which the second activity range determined by the at least two opposite-direction swings is obtained.
  • To ensure the accuracy of the second activity range, the preset number of swings can be set according to actual detection needs (e.g., 3 times, 5 times, etc.).
  • FIG. 6 only shows the direction and number of palm swings as an example; other preset swing counts not shown also fall within the protection scope of the present disclosure and are not repeated here.
  • Obtaining the first activity range and/or the second activity range through the above operations clarifies the user's boundary displacement information and boundary angle information in three-dimensional space, so that, based on that boundary information, the mesh cells crossed by the three-dimensional movement path of the operating hand can be selected and/or filled in as an optimization, yielding a more accurate user operation range and improving the accuracy with which the user's controllable operation range is determined.
  • FIG. 7 shows a schematic diagram of a three-dimensional operation interface provided by an embodiment of the present disclosure.
  • Using a VR, AR, or XR device, the user can view the three-dimensional operation interface, and each operating element in it supports direct touch operation through the user's gestures.
  • Displaying the operation keyboard through a stereoscopic operation interface makes it more convenient for the user to control the different keys of the keyboard at close range within the optimal control space, improving the user experience.
  • Because the preset wiring method and the number of wires in the preset arc mesh model differ, the corresponding fitting degree also differs when the user operates the keyboard in the three-dimensional operation interface.
  • When the number of wires is below a preset quantity threshold, the user's operating precision decreases; when it exceeds the threshold, the user's operating sensitivity improves. The number of wires can be set according to the user's actual needs, much like configuring the pointer precision of a mouse, and according to the user's usage habits, to meet personalized needs.
  • In some implementations, after the three-dimensional movement path of the operating hand is mapped onto the preset arc mesh model and the operation interface is generated, the method further includes: when it is determined that a preset part of the user's body has moved, obtaining movement information of the preset part; updating the movement information of the operating hand based on the movement information of the preset part; updating the three-dimensional movement path of the operating hand based on the updated movement information; and updating the operation interface based on the updated three-dimensional movement path.
  • The preset part includes any one of the shoulder joint, head, nose, mouth, and eyes.
  • While the user is manipulating a virtual object through the control elements of the three-dimensional operation interface, a preset part of the user's body may be displaced or rotated (for example, the shoulder joint shifts a short distance horizontally, or the head rotates through some angle). In that case, the operation interface generation device needs to obtain the movement information of the changed preset part so as to update the movement information of the operating hand based on it (for example, a short horizontal displacement of the shoulder joint may spatially shift the corresponding preset arc mesh model), fine-tuning the parameters used to construct the preset arc mesh model (for example, updating the position information of the shoulder joint serving as the sphere's center by replacing the original coordinates (x, y, z) with the changed coordinates (x0, y0, z0), where x0, y0, and z0 are all real numbers).
  • The updated three-dimensional operation interface thus follows the movement or rotation relative to the original interface, so that the relative position between the new three-dimensional operation interface and the user's shoulder joint remains consistent and meets the user's operating needs.
  • In some implementations, after the three-dimensional movement path of the operating hand is mapped onto the preset arc mesh model and the operation interface is generated, the method further includes: upon determining that a reconstruction request fed back by the user has been obtained, reconstructing the three-dimensional operation interface based on the acquired updated movement information of the operating hand.
  • When the preset part changes significantly (for example, its displacement exceeds a preset displacement threshold, such as more than 1 meter, or its rotation exceeds a preset angle threshold, such as more than 90 degrees), the user, finding operation inconvenient, sends a reconstruction request to the operation interface generation device, so that the device can reconstruct the three-dimensional operation interface based on the updated movement information of the operating hand.
  • The reconstruction method is the same as any of the operation interface generation methods in the embodiments of the present disclosure and is not repeated here.
  • Through reconstruction, the user can obtain a three-dimensional operation interface better suited to their current position and, based on the updated interface, operate virtual objects more accurately, improving the user experience.
  • FIG. 8 shows a schematic flowchart of an operation interface control method provided by an embodiment of the present disclosure.
  • The operation interface control method can be applied to an operation interface control device.
  • The operation interface is an interface generated by any of the operation interface generation methods in the embodiments of the present disclosure.
  • The operation interface control method in an embodiment of the present disclosure may include the following steps.
  • Step S801: Obtain the user's operation information in the operation interface.
  • The operation information may include the user's control information for different operating elements in the operation interface, for example, the letter information corresponding to a letter button pressed by the user, or the movement direction information corresponding to a direction button operated by the user.
  • Step S802: Determine, based on the operation information, the user's way of controlling the virtual object.
  • For example, if the operation information indicates that the user pressed the upward direction button and entered a displacement of 1 meter, it is clear that the user wants the virtual object to move up by 1 meter.
  • If the operation information indicates that the letters M, O, V, and E were entered in sequence, it can be inferred that the user wants to "move" the virtual object.
  • Based on the operation information, the user's different ways of controlling virtual objects can be clarified, so that the user can control virtual objects accurately and the efficiency of interaction between the user and the machine device is improved.
  • In this embodiment, the operation interface is generated using any of the operation interface generation methods in the embodiments of the present disclosure, so that the user can accurately control virtual objects through the three-dimensional operation interface; the user's operation information in the operation interface is obtained, improving the user's operating precision for distant virtual objects; and the user's way of controlling the virtual object is determined from the operation information, reducing the difficulty of operating virtual objects, allowing more accurate control, and improving the efficiency of interaction between users and machines.
  • FIG. 9 shows a schematic structural diagram of a device for generating an operation interface provided by an embodiment of the present disclosure.
  • the operation interface generating device 900 may include the following modules.
  • The first acquisition module 901 is configured to obtain the movement information of the user's operating hand; the path determination module 902 is configured to determine the three-dimensional movement path of the operating hand based on that movement information; and the generation module 903 is configured to map the three-dimensional movement path of the operating hand onto a preset arc mesh model to generate an operation interface, where the preset arc mesh model is a model determined based on the preset wiring method and the preset angle.
  • In this embodiment, the path determination module determines the three-dimensional movement path of the operating hand from the acquired movement information, so the specific movement path of the user's operating hand can be accurately known, which facilitates establishing a personalized operation method for the user; the generation module maps the three-dimensional movement path onto the preset arc mesh model to generate the operation interface, so that the user can accurately control virtual objects through the three-dimensional interface.
  • FIG. 10 shows a schematic structural diagram of a control device for an operation interface provided by an embodiment of the present disclosure.
  • The control device operates on an operation interface generated by any of the operation interface generation methods in the embodiments of the present disclosure.
  • the control device 1000 of the operation interface may include the following modules.
  • the second acquisition module 1001 is configured to acquire the user's operation information in the operation interface.
  • the control module 1002 is configured to determine the user's control method for the virtual object based on the operation information.
  • In this embodiment, an operation interface is generated using any of the operation interface generation methods in the embodiments of the present disclosure, so that the user can accurately control virtual objects through the three-dimensional operation interface; the second acquisition module obtains the user's operation information in the operation interface, so that the user's operating precision for distant virtual objects can be improved; and the control module determines the user's way of controlling the virtual object based on the operation information, reducing the difficulty of operating virtual objects, allowing more accurate control, and improving the efficiency of interaction between users and machines.
  • FIG. 11 shows a schematic structural diagram of an electronic device provided by an embodiment of the present disclosure.
  • the electronic device may be any one of a VR device, an AR device, and an MR device.
  • the user interacts with the machine device based on gesture information, thereby realizing the user's operation in the three-dimensional space.
  • the electronic device includes but is not limited to the following modules: acquisition module 1101, display module 1102, control module 1103, judgment module 1104, and calculation and storage module 1105.
  • the acquisition module 1101 is configured to acquire user data.
  • For example, sensors in the VR, AR, or XR device are used to obtain the attribute information of the user's operating hand, the movement information of the operating hand, and so on.
  • the sensor may include: at least one of a myoelectric sensor, a posture sensor, and an audio receiving sensor.
  • a three-dimensional depth camera and/or a binocular camera can also be installed in the VR device.
  • The attribute information of the operating hand includes: the position information of the user's shoulder joint in three-dimensional space, and at least one of the upper-arm length and the forearm length.
  • The movement information of the operating hand includes: the position information of the user's palm during movement, data on the user's gesture interaction with the display interface, and so on.
  • the above user data may be stored in the computing and storage module 1105.
  • Display module 1102 is configured to display the information stored in the calculation and storage module 1105 in three-dimensional space for the user to view.
  • Control module 1103 is configured to, upon receiving a control instruction sent by the judgment module 1104, control the acquisition module 1101 to acquire user data and control the display module 1102 to display information.
  • Judgment module 1104 is configured to determine, based on the user data stored in the computing and storage module 1105, whether to send a control instruction to the control module 1103 (for example, determining whether to generate a control instruction according to the user's usage requirements). When it determines that a control instruction needs to be sent, it sends the instruction to the control module 1103 so that the control module 1103 performs the corresponding operations according to the instruction.
  • the control instructions may include: controlling the display module 1102 to display multiple control elements in at least one control area so that the user can control these control elements.
  • Calculation and storage module 1105 is configured to compute over and store the user data acquired by the acquisition module 1101: for example, determining the three-dimensional movement path of the operating hand based on its movement information, and mapping that path onto the preset arc mesh model to generate an operation interface that can be displayed in three-dimensional space. It also stores the user's operation information in the operation interface, to facilitate confirming the user's way of controlling the virtual object based on that information.
  • FIG. 12 shows a schematic flowchart of a working method of an electronic device provided by an embodiment of the present disclosure. As shown in FIG. 12, the working method of the electronic device includes but is not limited to the following steps.
  • Step S1201: Obtain the characteristics of multiple control elements displayed on a plane, and determine the planar operation interface based on the characteristics of the multiple control elements.
  • The control elements can include letter buttons, direction buttons, emoticon buttons, function keys, and other elements. Multiple control elements can be arranged and combined according to the user's usage habits to determine the planar operation interface.
  • The planar operation interface can be presented as an operation keyboard and/or an operation-button control area to facilitate the user's use.
  • Step S1202: Obtain the attribute information of the user's operating hand and the movement information of the operating hand.
  • The attribute information of the operating hand includes: the position information of the user's shoulder joint in three-dimensional space, and at least one of the upper-arm length and the forearm length.
  • The movement information of the operating hand includes: the palm's displacement information along a closed route and/or over a preset number of swings, as well as data on the user's gesture interaction with the display interface.
  • Step S1203: Build the preset arc mesh model based on the attribute information of the operating hand.
  • The preset arc mesh model is a model determined based on a preset wiring method and a preset angle.
  • The preset wiring method includes dividing and wiring a sphere along lines of longitude and latitude, or dividing and wiring a sphere with polygons.
  • A polygon is a plane figure composed of three or more line segments connected end to end.
  • Step S1204: Determine the three-dimensional movement path of the operating hand based on its movement information.
  • For example, the movement information includes the motion trajectory in three-dimensional space formed by the user waving the palm at least twice; from this trajectory, the three-dimensional movement path of the operating hand can be determined, clarifying the boundary position information and boundary angle information of the user's palm movement.
  • Step S1205: Map the three-dimensional movement path of the operating hand onto the preset arc mesh model to generate a three-dimensional operation interface that can be displayed in three-dimensional space.
  • FIG. 13 shows a schematic diagram of a three-dimensional operation interface provided by yet another embodiment of the present disclosure.
  • For example, by mapping the operation buttons 202 of FIG. 2 onto the preset arc mesh model, the operation buttons 202 can be displayed to the user in a three-dimensional manner.
  • The display range of the generated three-dimensional operation interface is the optimal display range determined from the boundary position information and boundary angle information within which the user's palm can move, so the operation buttons 202 can be displayed in full to facilitate the user's operations.
  • For the user, every operation button displayed in the three-dimensional operation interface lies within optimal reach, which can improve the user's touch-operation efficiency and operating experience.
  • The user can also be supported in creating multiple three-dimensional operation interfaces at the same time.
  • For example, multiple three-dimensional operation interfaces can be constructed from the three-dimensional movement paths of the user's left and right hands.
  • FIG. 14 shows a schematic diagram of a three-dimensional operation interface provided by still another embodiment of the present disclosure. As shown in FIG. 14, by mapping the keyboard 201 and the operation buttons 202 of FIG. 2 onto the preset arc mesh model, multiple three-dimensional operation interfaces can be displayed to the user in a three-dimensional manner to facilitate the user's operation.
  • Step S1206: Display the three-dimensional operation interface to the user so that the user can operate and control virtual objects in it.
  • For example, the user can use the right hand to operate the control elements of the keyboard 201 in the three-dimensional operation interface, and the left hand to operate the buttons of the operation buttons 202, which satisfies the user's personalized needs and improves operating efficiency.
  • In this embodiment, by determining the three-dimensional movement path of the operating hand from the acquired movement information, the specific movement path of the user's operating hand can be accurately known, facilitating a personalized operation method for the user; mapping that path onto the preset arc mesh model generates a three-dimensional operation interface through which the user can accurately control virtual objects, improving precision for distant virtual objects while reducing the difficulty and improving the efficiency of operating them, so that the user can control virtual objects more intuitively through gestures and obtain the best experience.
  • FIG. 15 shows a structural diagram of an exemplary hardware architecture of a computing device capable of implementing a method of generating an operation interface or controlling an operation interface according to an embodiment of the present disclosure.
  • Computing device 1500 includes an input device 1501, an input interface 1502, a central processing unit 1503, a memory 1504, an output interface 1505, and an output device 1506.
  • the input interface 1502, the central processing unit 1503, the memory 1504, and the output interface 1505 are connected to each other through the bus 1507.
  • The input device 1501 and the output device 1506 are connected to the bus 1507 through the input interface 1502 and the output interface 1505, respectively, and thereby to the other components of the computing device 1500.
  • The input device 1501 receives input information from outside and transmits it through the input interface 1502 to the central processing unit 1503; the central processing unit 1503 processes the input information based on computer-executable instructions stored in the memory 1504 to generate output information, which is stored temporarily or permanently in the memory 1504 and then transmitted through the output interface 1505 to the output device 1506; the output device 1506 outputs the information outside the computing device 1500 for the user's use.
  • In some implementations, the computing device shown in FIG. 15 may be implemented as an electronic device, which may include: a memory configured to store a program; and a processor configured to run the program stored in the memory to execute any of the operation interface generation methods described in the above embodiments, or any of the operation interface control methods in the embodiments of the present disclosure.
  • The computing device shown in FIG. 15 may also be implemented as an operation interface generation system, which may include: a memory configured to store a program; and a processor configured to run the program stored in the memory to execute any of the operation interface generation methods described in the above embodiments.
  • The computing device shown in FIG. 15 may also be implemented as an operation interface control system, which may include: a memory configured to store a program; and a processor configured to run the program stored in the memory to execute any of the operation interface control methods described in the above embodiments.
  • Embodiments of the present disclosure may be implemented by a data processor of a mobile device executing computer program instructions, such as in a processor entity, or by hardware, or by a combination of software and hardware.
  • Computer program instructions may be assembly instructions, instruction set architecture (ISA) instructions, machine instructions, machine-dependent instructions, microcode, firmware instructions, state-setting data, or source code or object code written in any combination of one or more programming languages.
  • Any block diagram of a logic flow in the figures of this disclosure may represent program steps, or may represent interconnected logic circuits, modules, and functions, or may represent a combination of program steps and logic circuits, modules, and functions.
  • Computer programs can be stored on memory.
  • the memory may be of any type appropriate to the local technical environment and may be implemented using any suitable data storage technology, such as, but not limited to, read only memory (ROM), random access memory (RAM), optical storage devices and systems (digital versatile disc DVD or CD), etc.
  • Computer-readable media may include non-transitory storage media.
  • The data processor may be of any type suitable for the local technical environment, such as, but not limited to, general purpose computers, special purpose computers, microprocessors, digital signal processors (DSP), application specific integrated circuits (ASIC), programmable logic devices (FPGA), and processors based on multi-core processor architectures.

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Manipulator (AREA)
  • Position Input By Displaying (AREA)

Abstract

The present disclosure provides an operation interface generation method, control method, and device, relating to the field of virtual reality technology. The operation interface generation method includes: obtaining movement information of a user's operating hand; determining a three-dimensional movement path of the operating hand based on the movement information; and mapping the three-dimensional movement path onto a preset arc mesh model to generate an operation interface, where the preset arc mesh model is a model determined based on a preset wiring method and a preset angle. With this generation method, the specific movement path of the user's operating hand can be accurately known, making it convenient to establish a personalized operation method for that user, so that the user can accurately control virtual objects through a three-dimensional operation interface, the user's precision when operating distant virtual objects is improved, the difficulty of operating virtual objects is reduced, and the efficiency of operating virtual objects is increased.

Description

Operation interface generation method, control method and device
CROSS-REFERENCE TO RELATED APPLICATIONS
The present disclosure is based on Chinese patent application CN202210450377.1, filed on April 24, 2022 and entitled "Operation interface generation method, control method and device", and claims priority to that application, the entire disclosed content of which is incorporated into the present disclosure by reference.
TECHNICAL FIELD
The present disclosure relates to the field of virtual reality technology, and specifically to an operation interface generation method, control method, and device.
BACKGROUND
In the field of human-computer interaction, users typically exchange information with machine devices through screen-based touch interaction. With the development of Mixed Reality (MR) technology, users can also exchange information with machine devices through gesture-based interaction. Gesture-based interaction frees the user from the limitations of the screen and provides a natural, human-centered way of interacting, allowing the user to directly manipulate virtual objects in the virtual world.
Currently, common gesture interaction methods include operation via a gesture extension line and direct gesture interaction. However, with the gesture extension line method, if the user increases the operating angle of the hand, the user's selection range for distant objects is amplified proportionally, reducing the user's precision in operating distant objects; with direct gesture interaction, the difficulty of operating virtual objects at different positions in the virtual world varies, reducing the user's efficiency in operating virtual objects.
SUMMARY
The present disclosure provides an operation interface generation method, control method, and device.
An embodiment of the present disclosure provides an operation interface generation method, including: obtaining movement information of a user's operating hand; determining a three-dimensional movement path of the operating hand based on the movement information; and mapping the three-dimensional movement path onto a preset arc mesh model to generate an operation interface, where the preset arc mesh model is a model determined based on a preset wiring method and a preset angle.
An embodiment of the present disclosure provides an operation interface control method, where an operation interface is generated by any of the operation interface generation methods in the embodiments of the present disclosure. The control method includes: obtaining the user's operation information in the operation interface; and determining, based on the operation information, the user's way of controlling a virtual object.
An embodiment of the present disclosure provides an operation interface generation device, including: a first acquisition module configured to obtain movement information of a user's operating hand; a path determination module configured to determine a three-dimensional movement path of the operating hand based on the movement information; and a generation module configured to map the three-dimensional movement path onto a preset arc mesh model to generate an operation interface, where the preset arc mesh model is a model determined based on a preset wiring method and a preset angle.
An embodiment of the present disclosure provides an operation interface control device that operates on an interface generated by any of the operation interface generation methods in the embodiments of the present disclosure. The device includes: a second acquisition module configured to obtain the user's operation information in the operation interface; and a control module configured to determine, based on the operation information, the user's way of controlling a virtual object.
An embodiment of the present disclosure provides an electronic device, including: one or more processors; and a memory storing one or more programs that, when executed by the one or more processors, cause the one or more processors to implement any of the operation interface generation methods, or any of the operation interface control methods, in the embodiments of the present disclosure.
An embodiment of the present disclosure provides a computer-readable storage medium storing a computer program that, when executed by a processor, implements any of the operation interface generation methods, or any of the operation interface control methods, in the embodiments of the present disclosure.
Further descriptions of the above and other aspects of the present disclosure and their implementations are provided in the brief description of the drawings, the detailed description, and the claims.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 is a schematic flowchart of an operation interface generation method provided by an embodiment of the present disclosure.
FIG. 2 is a schematic diagram of an operation interface provided by an embodiment of the present disclosure.
FIG. 3 is a schematic diagram of a preset arc mesh model provided by an embodiment of the present disclosure.
FIG. 4 is a schematic diagram of a way of constructing a preset arc mesh model provided by an embodiment of the present disclosure.
FIG. 5 is a schematic diagram of a user operation mode during construction of a preset arc mesh model provided by an embodiment of the present disclosure.
FIG. 6 is a schematic diagram of a user operation mode during construction of a preset arc mesh model provided by yet another embodiment of the present disclosure.
FIG. 7 is a schematic diagram of a three-dimensional operation interface provided by an embodiment of the present disclosure.
FIG. 8 is a schematic flowchart of an operation interface control method provided by an embodiment of the present disclosure.
FIG. 9 is a block diagram of an operation interface generation device provided by an embodiment of the present disclosure.
FIG. 10 is a block diagram of an operation interface control device provided by an embodiment of the present disclosure.
FIG. 11 is a schematic structural diagram of an electronic device provided by an embodiment of the present disclosure.
FIG. 12 is a schematic flowchart of a working method of an electronic device provided by an embodiment of the present disclosure.
FIG. 13 is a schematic diagram of a three-dimensional operation interface provided by yet another embodiment of the present disclosure.
FIG. 14 is a schematic diagram of a three-dimensional operation interface provided by still another embodiment of the present disclosure.
FIG. 15 is a structural diagram of an exemplary hardware architecture of a computing device capable of implementing the operation interface generation or operation interface control methods according to embodiments of the present disclosure.
DETAILED DESCRIPTION
To make the objectives, technical solutions, and advantages of the present disclosure clearer, embodiments of the present disclosure are described in detail below with reference to the accompanying drawings. It should be noted that, where no conflict arises, the embodiments of the present disclosure and the features in the embodiments may be combined with one another in any manner.
Mixed Reality (MR) technology presents virtual scene information in real scenes. MR technology can establish an interactive feedback loop between the real world, the virtual world, and the user, enhancing the realism of the user's experience. MR technology can include Augmented Reality (AR) technology and Virtual Reality (VR) technology. VR technology is a system simulation that fuses multi-source information with interactive three-dimensional dynamic scenes and entity behaviors, immersing the user in a virtual environment. AR technology calculates the position and angle of camera images in real time and adds corresponding images, videos, and three-dimensional models; it can skillfully blend virtual information with the real world.
Currently, common gesture interaction methods include: 1) selecting and operating distant objects via a gesture extension line; however, this method offers very low precision for distant objects, since, for example, a slight change in hand angle causes a large change in where a distant object is selected, and it can also leave the user's operation instructions out of reach, degrading the user experience; 2) operating objects in three-dimensional space directly through gestures; however, existing AR devices still display the operation interface in three-dimensional space in a flat manner, so the display area shown to the user and the user's operating area are positioned inconsistently, which reduces the reachability of the user's gestures in space and prevents the user from operating virtual objects conveniently and quickly.
FIG. 1 shows a schematic flowchart of an operation interface generation method provided by an embodiment of the present disclosure. The method can be applied to an operation interface generation device. As shown in FIG. 1, the operation interface generation method in an embodiment of the present disclosure may include the following steps.
Step S101: Obtain movement information of the user's operating hand.
Step S102: Determine a three-dimensional movement path of the operating hand based on the movement information of the operating hand.
Step S103: Map the three-dimensional movement path of the operating hand onto a preset arc mesh model to generate an operation interface.
The preset arc mesh model is a model determined based on a preset wiring method and a preset angle.
In this embodiment, determining the three-dimensional movement path of the operating hand from the acquired movement information makes it possible to accurately know the specific movement path of the user's operating hand, which facilitates establishing a personalized operation method for that user. Mapping the three-dimensional movement path onto the preset arc mesh model generates an operation interface; since the model is determined based on the preset wiring method and preset angle, the user can accurately control virtual objects through a three-dimensional operation interface, the precision of operating distant virtual objects is improved, and the difficulty of operating virtual objects is reduced while the efficiency is increased, giving the user the best possible experience.
The operation interface includes at least one control area, which is used to display multiple control elements that the user can manipulate. The control elements include at least one of operation buttons, an operation keyboard, letter buttons, and emoticon buttons.
By providing the user with at least one control area, the user can manipulate a virtual object through multiple different control elements in the area; this not only makes handling virtual objects more convenient but also improves the precision of their control, giving the user a better experience.
For example, FIG. 2 shows a schematic diagram of an operation interface provided by an embodiment of the present disclosure. As shown in FIG. 2, the operation interface includes, but is not limited to, the following control areas: a keyboard 201 and/or operation buttons 202. By operating the keyboard 201 and/or the operation buttons 202, the user operates and controls objects in the virtual world.
The keyboard 201 lays out, in sequence, the 26 English letters and buttons with different functions such as an "Enter" key, a "shift" key, number keys (e.g., a "123" button), an emoticon key, a space bar, and a "Send" key, for the user's convenience.
The operation buttons 202 include four letter buttons (e.g., buttons "A", "B", "C", and "D") and a black button; the black button can be used to control the direction of a virtual object and can improve the precision of that directional control.
It should be noted that the operation interface shown in FIG. 2 merely presents the different control elements on a plane (i.e., in two-dimensional space). When the user uses an AR, VR, or MR device, virtual objects must be manipulated in three-dimensional space, and control elements presented only in two dimensions are not conducive to the user's manual operation; the operation interface therefore needs to be mapped into three-dimensional space to facilitate the user's operation.
Before step S103 is executed, the preset arc mesh model may be determined based on the preset wiring method and the preset angle; then, in step S103, the three-dimensional movement path of the operating hand is mapped onto the preset arc mesh model to generate the operation interface, so that the interface of FIG. 2 can be displayed stereoscopically within the preset arc mesh model.
The preset wiring method includes: dividing and wiring a sphere along lines of longitude and latitude, or dividing and wiring a sphere with polygons. A polygon is a planar shape constructed by connecting three or more line segments end to end; for example, a polygon may be a triangle, a quadrilateral, ..., or an N-gon, where N is an integer greater than or equal to 3.
It should be noted that the preset wiring methods above are merely examples and can be configured according to actual needs; other wiring methods not described here also fall within the protection scope of the present disclosure and are not repeated.
FIG. 3 shows a schematic diagram of a preset arc mesh model provided by an embodiment of the present disclosure. As shown in FIG. 3, sphere 3-1 is a latitude-longitude sphere wired along lines of longitude and latitude, sphere 3-2 is an angular sphere wired with triangles, and sphere 3-3 is a polygonal sphere wired with quadrilaterals. The preset arc mesh model may be the entire sphere or a part of it, chosen to make the user's spatial operation convenient.
It should be noted that the number of wires in these mesh spheres is proportional to the fitting degree of user operations, where the fitting degree is the degree of fit between the three-dimensional movement path of the user's operating hand and the operation interface.
For example, the more wires the preset arc mesh model contains, that is, the finer the subdivision of the mesh sphere, the higher the fitting degree, i.e., the closer the match between the three-dimensional movement path of the operating hand and the operation interface.
In some implementations, before obtaining the movement information of the user's operating hand in step S101, the method further includes: obtaining attribute information of the user's operating hand; and establishing the preset arc mesh model based on that attribute information.
The attribute information of the user's operating hand includes at least one of the user's arm length, palm attribute information, and attribute information of multiple fingers.
It should be noted that, because different users' operating hands have different attribute information (for example, users differ in arm length, palm attributes, and, correspondingly, finger lengths), the resulting preset arc mesh model also varies from person to person.
Building a preset arc mesh model matched to each user from multi-dimensional attribute information of that user's operating hand makes the user's operation information in three-dimensional space more personalized, makes the model better fit different users' operating habits, and makes operation in three-dimensional space more convenient, so that the user can control objects in the virtual world more precisely and the user experience is improved.
In some implementations, the attribute information of the user's operating hand includes the user's arm attribute information and the position information of the user's shoulder joint. Establishing the preset arc mesh model based on the attribute information of the operating hand includes: constructing the preset arc mesh model based on the position information of the user's shoulder joint and the user's arm attribute information.
For example, FIG. 4 shows a schematic diagram of a way of constructing a preset arc mesh model provided by an embodiment of the present disclosure. The left half of FIG. 4 shows the position information of the user's shoulder joint (e.g., the coordinate values (x, y, z) in three-dimensional space, where x, y, and z are real numbers), together with the user's upper-arm length L1 and forearm length L2, where L1 and L2 are real numbers greater than 1. This attribute information concretely characterizes the user's operating hand (e.g., the right or left hand), facilitating the subsequent construction of the preset arc mesh model from it.
The right half of FIG. 4 shows a mesh sphere constructed with the position coordinate point (x, y, z) of the user's shoulder joint as the center and the combined length of the upper arm and forearm (i.e., L1 + L2) as the radius; the sphere can be wired with any of the preset wiring methods. All or part of the arc mesh of this sphere can then be used as the preset arc mesh model.
Constructing the preset arc mesh model from the position information of the user's shoulder joint and the user's arm attribute information reflects the personalized needs of different users, so that the model better fits the user's needs and improves the user experience.
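To make the construction above concrete, the following is a minimal illustrative sketch (not part of the patent text) of generating such a shoulder-centered mesh sphere with latitude-longitude wiring; the function name, parameters, and example values are assumptions for illustration only.

```python
import math

def build_mesh_sphere(shoulder, l1, l2, n_lat=12, n_lon=24):
    """Sketch: vertices of a latitude-longitude mesh sphere centered at the
    user's shoulder joint, with radius = upper-arm length + forearm length
    (L1 + L2). `shoulder` is an (x, y, z) tuple; all names are illustrative."""
    cx, cy, cz = shoulder
    r = l1 + l2                       # reachable radius of the operating hand
    vertices = []
    for i in range(n_lat + 1):        # latitude rings from pole to pole
        theta = math.pi * i / n_lat
        for j in range(n_lon):        # longitude divisions around each ring
            phi = 2.0 * math.pi * j / n_lon
            vertices.append((
                cx + r * math.sin(theta) * math.cos(phi),
                cy + r * math.cos(theta),
                cz + r * math.sin(theta) * math.sin(phi),
            ))
    return vertices

# Example with assumed values: shoulder at (0.0, 1.4, 0.0) m,
# upper arm 0.30 m, forearm 0.25 m.
mesh = build_mesh_sphere((0.0, 1.4, 0.0), 0.30, 0.25)
```

Raising n_lat and n_lon corresponds to a finer wiring of the sphere and hence, as noted above, a higher fitting degree between the operating hand's movement path and the operation interface.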
For example, before the movement information of the user's operating hand is acquired in step S101, the method further includes: acquiring the attribute information of the user's operating hand and visual range information of the user; and establishing the preset arc-surface mesh model according to the attribute information of the operating hand and the visual range information of the user.
The visual range information of the user includes the visible range of the user's eyes.
The first mesh sphere constructed from the attribute information of the user's operating hand is constrained and matched by the visible range of the user's eyes, so that a part of the stereoscopic arc surface of the first mesh sphere is obtained and used as the preset arc-surface mesh model. The spatial range the user can actually operate in is thus planned more precisely, the model better fits the user's operating needs, and operation and use become more convenient.
In some specific implementations, the attribute information of the user's operating hand includes: arm attribute information of the user and position information of the user's shoulder joint. Establishing the preset arc-surface mesh model according to the attribute information of the operating hand and the visual range information of the user includes: constructing a mesh sphere according to the position information of the user's shoulder joint and the arm attribute information of the user, where the mesh sphere is a sphere determined on the basis of the preset wiring manner; determining the preset angle according to the visual range information of the user; and determining the preset arc-surface mesh model according to the mesh sphere and the preset angle.
The preset angle may be used to characterize the user's visible angle. For example, with the user's head held still and a ray extending forward from the user's eyes defined as 0 degrees, the visible angle range the user can observe is from 50 degrees upward to 70 degrees downward. As another example, with the head held still and the line of sight extended to the user's left and right from the eyes, the user's visible angle may be from 90 degrees to the left to 90 degrees to the right. Characterizing the user's visible angle stereoscopically makes clear the visual range the user can observe, facilitating the user's operation.
By constructing the mesh sphere with the position of the user's shoulder joint as the center and the user's upper-arm length and/or forearm length as the radius, the user can conveniently operate the control elements on the sphere on the basis of arm length. Since the user's visible range is limited, the preset angle is determined from the user's visual range information (e.g., the user's head coordinates, eye coordinates and so on), and a portion of the arc-surface mesh of the sphere is then cut out according to this preset angle to obtain an optimal control area model, i.e., the preset arc-surface mesh model. The model thus better matches the range the user can control, improving the accuracy of the user's operation.
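One possible way to cut the mesh sphere down to the preset angle is sketched below, assuming a y-up, z-forward eye frame and the example angles from the text (50 degrees up, 70 degrees down, 90 degrees to each side); the frame convention and the function name are assumptions of this sketch, not a prescription of the method.

import numpy as np

def clip_by_view_angles(vertices, eye, up_deg=50, down_deg=70, side_deg=90):
    # Assumed frame: +y is up, +z is the forward gaze ray (0 degrees)
    v = np.asarray(vertices, dtype=float).reshape(-1, 3) - np.asarray(eye, dtype=float)
    dist = np.linalg.norm(v, axis=1)
    elev = np.degrees(np.arcsin(np.clip(v[:, 1] / dist, -1.0, 1.0)))  # up (+) / down (-)
    azim = np.degrees(np.arctan2(v[:, 0], v[:, 2]))                   # right (+) / left (-)
    keep = (elev <= up_deg) & (elev >= -down_deg) & (np.abs(azim) <= side_deg)
    return np.asarray(vertices, dtype=float).reshape(-1, 3)[keep]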
In some specific implementations, determining the three-dimensional movement path of the operating hand according to its movement information in step S102 includes: determining boundary displacement information and boundary angle information according to the movement information of the operating hand; and determining the three-dimensional movement path of the operating hand according to the boundary displacement information and the boundary angle information.
For example, at least one activity range may be determined from the movement information of the operating hand (for example, the at least one activity range includes a first activity range determined from displacement information of the palm along a closed route, and/or a second activity range determined from displacement information of the palm over a preset number of waves). Determining the boundary displacement information and boundary angle information from the at least one activity range reveals the farthest spatial position the user can reach (e.g., the farthest displacement information and/or the maximum angle through which the operating hand can turn), which facilitates determining the movement path of the user's operating hand. The three-dimensional movement path of the operating hand is then determined from the boundary displacement information and the boundary angle information, improving the accuracy of the user's operation in three-dimensional space.
The movement information of the operating hand includes: displacement information of the palm along a closed route, and/or displacement information of the palm over a preset number of waves (e.g., at least two waves in opposite directions).
For example, Fig. 5 is a schematic diagram of a user operation manner during construction of the preset arc-surface mesh model according to an embodiment of the present disclosure. As shown in Fig. 5, the user's operating hand sweeps in a circle in space, producing a closed spatial route. This closed route reflects the displacement information of the user's palm along the closed route, from which the first activity range is obtained.
As another example, Fig. 6 is a schematic diagram of a user operation manner during construction of the preset arc-surface mesh model according to yet another embodiment of the present disclosure. As shown in Fig. 6, the user's operating hand waves at least twice in opposite directions in space, so that the displacement information of the user's palm during the waving is obtained, from which the second activity range is determined.
It should be noted that, to ensure the accuracy of the second activity range, the preset number of waves may be set according to actual detection needs (e.g., 3 times, 5 times, etc.). Fig. 6 merely shows the direction and number of the palm waves by way of example; other preset numbers of waves not shown also fall within the protection scope of the present disclosure and are not repeated.
Obtaining the first activity range and/or the second activity range through the above operations makes clear the user's boundary displacement information and boundary angle information in three-dimensional space. Optimization such as selecting and/or filling in the mesh cells traversed by the three-dimensional movement path of the operating hand can then be performed on the basis of this boundary information, yielding a more accurate user operation range and improving the accuracy with which the user's controllable operation range is determined.
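By way of illustration, the boundary displacement and boundary angle could be estimated from the recorded palm samples roughly as follows (a sketch under the assumption that both quantities are measured relative to the shoulder joint):

import numpy as np

def boundary_info(palm_samples, shoulder):
    # Displacements of the palm relative to the shoulder joint
    v = np.asarray(palm_samples, dtype=float) - np.asarray(shoulder, dtype=float)
    max_disp = float(np.max(np.linalg.norm(v, axis=1)))  # boundary displacement
    u = v / np.linalg.norm(v, axis=1, keepdims=True)
    # Boundary angle: widest angle between any two sampled reach directions
    cosines = np.clip(u @ u.T, -1.0, 1.0)
    max_angle = float(np.degrees(np.arccos(cosines.min())))
    return max_disp, max_angle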
Further, the size of the keyboard 201 shown in Fig. 2 is adjusted so that the generated three-dimensional operation interface can display the keyboard 201 completely at the maximum size. Fig. 7 is a schematic diagram of a three-dimensional operation interface according to an embodiment of the present disclosure. Using a VR, AR or XR device, the user can view this three-dimensional operation interface, and each control element in it supports direct touch operation by the user's gestures.
As shown in Fig. 7, displaying the operation keyboard through a stereoscopic operation interface makes it more convenient for the user to manipulate the different keyboard buttons at close range within the optimal control space, improving the user experience.
It should be noted that, depending on the preset wiring manner and the number of wiring lines of the preset arc-surface mesh model, the degree of fit differs when the user operates the keyboard in the three-dimensional operation interface. When the number of wiring lines is below a preset quantity threshold, the user's operation precision decreases; when it is above the preset quantity threshold, the user's operation sensitivity increases. The number of wiring lines can be set according to the user's actual needs and usage habits, much like configuring the pointer precision of a mouse, so as to satisfy the user's personalized requirements.
When the user directly touches the stereoscopic keyboard on the three-dimensional operation interface through gestures, all of its control elements lie within the user's optimal reach, so the user's operating experience and touch efficiency with the stereoscopic keyboard are markedly improved.
In some specific implementations, after the three-dimensional movement path of the operating hand is mapped onto the preset arc-surface mesh model to generate the operation interface, the method further includes: acquiring movement information of a preset body part of the user when it is determined that the preset body part has moved; updating the movement information of the operating hand according to the movement information of the preset body part; updating the three-dimensional movement path of the operating hand on the basis of the updated movement information of the operating hand; and updating the operation interface according to the updated three-dimensional movement path of the operating hand.
The preset body part includes any one of the shoulder joint, the head, the nose, the mouth and the eyes.
While the user is manipulating a virtual object through the control elements of the three-dimensional operation interface, the user's preset body part may shift or rotate (for example, the shoulder joint undergoes a short horizontal displacement, or the head turns by some angle). In this case, the operation interface generation apparatus needs to acquire the movement information of the changed preset body part and update the movement information of the operating hand accordingly (for example, a short horizontal displacement of the shoulder joint may cause a spatial change of the preset arc-surface mesh model). The parameters used to construct the preset arc-surface mesh model are then fine-tuned (for example, the position of the shoulder joint serving as the sphere center is updated from the original coordinates (x, y, z) to the changed coordinates (x0, y0, z0), where x0, y0 and z0 are all real numbers). The three-dimensional movement path of the operating hand is updated on the basis of the updated movement information (for example, the movement information of the operating hand, such as its displacement distance or rotation angle, is converted according to the transformation between the shoulder-joint coordinates (x, y, z) and (x0, y0, z0), to obtain the updated three-dimensional movement path), so that the updated path better matches the updated preset arc-surface mesh model. The updated operation interface thus better fits the user's usable range, achieving a follow effect of the three-dimensional operation interface while the user's preset body part moves, and improving the user experience.
For example, the updated three-dimensional operation interface follows the original one by translation or rotation, so that the relative position between the updated interface and the user's shoulder joint remains unchanged, meeting the user's operating needs.
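A minimal sketch of this follow behavior for a pure translation of the shoulder joint (rotation handling omitted; names are illustrative):

import numpy as np

def follow_shoulder(interface_vertices, old_shoulder, new_shoulder):
    # Shift every interface vertex by the shoulder delta so that the
    # interface keeps its relative position to the shoulder joint
    delta = np.asarray(new_shoulder, dtype=float) - np.asarray(old_shoulder, dtype=float)
    return np.asarray(interface_vertices, dtype=float) + delta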
In some specific implementations, after the three-dimensional movement path of the operating hand is mapped onto the preset arc-surface mesh model to generate the operation interface, the method further includes: upon determining that a rebuild request fed back by the user has been obtained, rebuilding the three-dimensional operation interface according to the acquired updated movement information of the operating hand.
It should be noted that when the displacement or angle of the user's preset body part changes substantially (for example, the displacement change of the preset body part exceeds a preset displacement threshold (e.g., more than 1 meter), or the angle change exceeds a preset angle threshold (e.g., more than 90 degrees)), operation becomes inconvenient for the user, who then sends a rebuild request to the operation interface generation apparatus. The apparatus can then rebuild the three-dimensional operation interface on the basis of the updated movement information of the operating hand; the rebuilding method is the same as any one of the operation interface generation methods in the embodiments of the present disclosure and is not repeated here.
By rebuilding the three-dimensional operation interface according to the acquired updated movement information of the operating hand upon obtaining the user's rebuild request, the user obtains a three-dimensional operation interface that better matches the user's current position, can operate virtual objects more accurately on the basis of the rebuilt interface, and enjoys an improved experience.
Fig. 8 is a schematic flowchart of an operation interface control method according to an embodiment of the present disclosure. The operation interface control method may be applied to an operation interface control apparatus, where the operation interface is an interface generated by any one of the operation interface generation methods in the embodiments of the present disclosure.
As shown in Fig. 8, the operation interface control method in this embodiment of the present disclosure may include the following steps.
Step S801: acquiring operation information of the user in the operation interface.
The operation information may include the user's control information for the different control elements in the operation interface, for example, the letter information corresponding to a letter button pressed by the user, or the movement direction information corresponding to the user's operation of a direction button.
Step S802: determining, according to the operation information, the manner in which the user controls a virtual object.
For example, when the operation information includes the user operating the direction button upward with an input displacement of 1 meter, it is clear that the user wishes the virtual object to move 1 meter upward. As another example, when the operation information includes the consecutive input letters M, O, V and E, it can be learned that the user wants to "move" the virtual object.
Different operation information thus makes clear the user's different manners of controlling the virtual object, enabling the user to control it precisely and improving the efficiency of interaction between the user and the machine.
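The mapping from operation information to a control manner can be illustrated with a toy dispatcher reproducing the two examples above; the event encoding and action names are assumptions of this sketch, not the patented protocol.

def interpret_operation(events):
    # events: list of (element, value) pairs, e.g., from the direction pad
    # and letter buttons of the interface in Fig. 2 (illustrative names)
    letters = "".join(v for e, v in events if e == "letter").lower()
    if letters == "move":
        return {"action": "move"}
    for element, value in events:
        if element == "direction":
            distance = next((v for e, v in events if e == "distance"), 1.0)
            return {"action": "translate", "direction": value, "distance": distance}
    return {"action": "none"}

# The two examples from the text:
print(interpret_operation([("direction", "up"), ("distance", 1.0)]))
print(interpret_operation([("letter", "M"), ("letter", "O"), ("letter", "V"), ("letter", "E")]))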
In this embodiment, the operation interface is generated by any one of the operation interface generation methods in the embodiments of the present disclosure, so that the user can accurately control virtual objects through a stereoscopic operation interface. Acquiring the user's operation information in the operation interface improves the precision with which the user operates distant virtual objects; determining the user's manner of controlling the virtual object from this operation information reduces the difficulty of operating virtual objects, enables more precise control, and improves the efficiency of interaction between the user and the machine.
The operation interface generation apparatus according to embodiments of the present disclosure is described in detail below with reference to the accompanying drawings. Fig. 9 is a schematic structural diagram of an operation interface generation apparatus according to an embodiment of the present disclosure. As shown in Fig. 9, the operation interface generation apparatus 900 may include the following modules.
A first acquisition module 901, configured to acquire movement information of an operating hand of a user; a path determination module 902, configured to determine a three-dimensional movement path of the operating hand according to the movement information of the operating hand; and a generation module 903, configured to map the three-dimensional movement path of the operating hand onto a preset arc-surface mesh model to generate an operation interface, where the preset arc-surface mesh model is a model determined on the basis of a preset wiring manner and a preset angle.
According to the operation interface generation apparatus of this embodiment of the present disclosure, the path determination module determines the three-dimensional movement path of the operating hand from the acquired movement information, so that the specific movement path of the user's operating hand can be known accurately, facilitating a personalized operation manner for the user. The generation module maps the three-dimensional movement path onto the preset arc-surface mesh model, determined on the basis of the preset wiring manner and the preset angle, to generate the operation interface, so that the user can accurately control virtual objects through a stereoscopic operation interface. The precision of operating distant virtual objects is improved, the difficulty of operation is reduced, and the efficiency of operation is increased, giving the user the best possible experience.
Fig. 10 is a schematic structural diagram of an operation interface control apparatus according to an embodiment of the present disclosure, in which the operation interface is generated by any one of the operation interface generation methods in the embodiments of the present disclosure. As shown in Fig. 10, the operation interface control apparatus 1000 may include the following modules.
A second acquisition module 1001, configured to acquire operation information of the user in the operation interface.
A control module 1002, configured to determine, according to the operation information, the manner in which the user controls a virtual object.
According to the operation interface control apparatus of this embodiment of the present disclosure, the operation interface is generated by any one of the operation interface generation methods in the embodiments of the present disclosure, so that the user can accurately control virtual objects through a stereoscopic operation interface. The second acquisition module acquires the user's operation information in the operation interface, improving the precision with which the user operates distant virtual objects; the control module determines the user's manner of controlling the virtual object from this operation information, reducing the difficulty of operation, enabling more precise control, and improving the efficiency of interaction between the user and the machine.
Fig. 11 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure. For example, the electronic device may be any one of a VR device, an AR device and an MR device. Using this electronic device, the user performs gesture-based information interaction with the machine, realizing operations in three-dimensional space.
As shown in Fig. 11, the electronic device includes, but is not limited to, the following modules: an acquisition module 1101, a display module 1102, a control module 1103, a judgment module 1104, and a computation and storage module 1105.
The acquisition module 1101 is configured to acquire user data, for example, to acquire the attribute information and movement information of the user's operating hand using sensors in a VR, AR or XR device. The sensors may include at least one of an electromyography sensor, a pose sensor and an audio receiving sensor, and the VR device may further be equipped with a three-dimensional depth camera and/or a binocular camera. The attribute information of the operating hand includes at least one of the position information of the user's shoulder joint in three-dimensional space, the upper-arm length and the forearm length. The movement information of the operating hand includes the positions traversed by the user's palm during movement, as well as the data of the user's gesture interaction with the display interface. The above user data may be stored in the computation and storage module 1105.
The display module 1102 is configured to display the information stored in the computation and storage module 1105 in three-dimensional space for the user to view.
The control module 1103 is configured to, upon receiving a control instruction sent by the judgment module 1104, control the acquisition module 1101 to acquire user data and control the display module 1102 to display information.
The judgment module 1104 is configured to judge, on the basis of the user data stored in the computation and storage module 1105, whether to send a control instruction to the control module 1103 (for example, judging whether to generate a control instruction according to the user's needs). Upon determining that a control instruction needs to be sent, it sends the instruction to the control module 1103, so that the control module 1103 performs the corresponding operation. The control instruction may include: controlling the display module 1102 to display a plurality of control elements in at least one control area for the user to control.
The computation and storage module 1105 is configured to compute and store the user data acquired by the acquisition module 1101, for example, to determine the three-dimensional movement path of the operating hand from its movement information and map that path onto the preset arc-surface mesh model to generate an operation interface displayable in three-dimensional space. It also stores the user's operation information in the operation interface, so that the user's manner of controlling virtual objects can be confirmed from that information.
Fig. 12 is a schematic flowchart of a working method of an electronic device according to an embodiment of the present disclosure. As shown in Fig. 12, the working method of the electronic device includes, but is not limited to, the following steps.
Step S1201: acquiring the features of a plurality of control elements displayed on a plane, and determining a plane operation interface according to the features of the plurality of control elements.
The control elements may include a variety of different elements such as letter buttons, direction buttons, emoji buttons and function keys; the plurality of control elements may be arranged and combined according to the user's usage habits to determine the plane operation interface.
For example, the plane operation interface may be displayed as an operation keyboard and/or an operation button control area, to facilitate the user's use.
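A toy sketch of step S1201, arranging control elements into keyboard-style rows (the element set and the row lengths are illustrative, not taken from the patent):

def plane_layout(elements, per_row):
    # Split the flat element list into rows to form the plane interface
    rows, i = [], 0
    for n in per_row:
        rows.append(elements[i:i + n])
        i += n
    return rows

print(plane_layout(list("QWERTYUIOPASDFGHJKLZXCVBNM") + ["123", "emoji", "space", "send"],
                   per_row=[10, 9, 7, 4]))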
Step S1202: acquiring the attribute information of the user's operating hand and the movement information of the operating hand.
The attribute information of the operating hand includes at least one of the position information of the user's shoulder joint in three-dimensional space, the upper-arm length and the forearm length.
The movement information of the operating hand includes displacement information of the palm along a closed route and/or displacement information of the palm over a preset number of waves, as well as the data of the user's gesture interaction with the display interface.
Step S1203: establishing the preset arc-surface mesh model according to the attribute information of the operating hand.
The preset arc-surface mesh model is a model determined on the basis of a preset wiring manner and a preset angle; the preset wiring manner includes dividing and wiring a sphere on the basis of latitude and longitude lines, or dividing and wiring a sphere on the basis of polygons, where a polygon is a planar figure formed by three or more line segments connected end to end in sequence.
Step S1204: determining the three-dimensional movement path of the operating hand according to its movement information.
The movement information of the operating hand includes the motion trajectory in three-dimensional space formed by the user waving the palm at least twice. From this trajectory, the three-dimensional movement path of the operating hand can be determined, which in turn makes clear the boundary position information and boundary angle information within which the user's palm can move.
Step S1205: mapping the three-dimensional movement path of the operating hand onto the preset arc-surface mesh model to generate a three-dimensional operation interface displayable in three-dimensional space.
For example, Fig. 13 is a schematic diagram of a three-dimensional operation interface according to yet another embodiment of the present disclosure. As shown in Fig. 13, the operation buttons 202 of Fig. 2 are mapped onto the preset arc-surface mesh model, so that the operation buttons 202 can be presented to the user stereoscopically.
It should be noted that the display range of the generated three-dimensional operation interface is the optimal display range determined from the boundary position information and boundary angle information within which the user's palm can move; within this interface, the operation buttons 202 can be fully displayed for convenient operation. For the user, every operation button displayed in the three-dimensional operation interface lies within the user's optimal reach, which improves the user's touch operation efficiency and operating experience.
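Extending the step S103 sketch above, the "select and/or fill" optimization mentioned earlier could look roughly like this on a latitude-longitude grid (the one-cell gap-filling rule is an assumption of this sketch):

import numpy as np

def select_and_fill(path, grid_vertices, snap_radius=0.06):
    # grid_vertices: (n_lat, n_lon, 3) array, e.g., from latlon_sphere_grid()
    n_lat, n_lon, _ = grid_vertices.shape
    flat = np.asarray(grid_vertices, dtype=float).reshape(-1, 3)
    p = np.asarray(path, dtype=float)
    # Selection: a cell counts as selected when the hand path passes
    # within snap_radius of its vertex
    d = np.linalg.norm(flat[:, None, :] - p[None, :, :], axis=-1)
    sel = (d.min(axis=1) <= snap_radius).reshape(n_lat, n_lon)
    # Fill: a cell flanked left and right by selected cells in the same
    # row is filled in as well
    fill = sel.copy()
    fill[:, 1:-1] |= sel[:, :-2] & sel[:, 2:]
    return fill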
As another example, if there are too many control elements, the user may also be supported in establishing a plurality of three-dimensional operation interfaces at the same time, for example, constructing several three-dimensional operation interfaces from the three-dimensional movement paths of the user's left and right hands.
Fig. 14 is a schematic diagram of a three-dimensional operation interface according to still another embodiment of the present disclosure. As shown in Fig. 14, the keyboard 201 and the operation buttons 202 of Fig. 2 are mapped onto preset arc-surface mesh models respectively, so that a plurality of three-dimensional operation interfaces can be presented to the user stereoscopically, facilitating the user's operation.
Step S1206: displaying the three-dimensional operation interface to the user, so that the user can operate and control virtual objects in the three-dimensional operation interface.
For example, the user can operate the control elements of the keyboard 201 in the three-dimensional operation interface with the right hand while operating the buttons of the operation buttons 202 with the left hand, satisfying the user's personalized needs and improving operating efficiency.
In this embodiment, determining the three-dimensional movement path of the operating hand from the acquired movement information makes it possible to know the specific movement path of the user's operating hand accurately, facilitating a personalized operation manner. Mapping the path onto the preset arc-surface mesh model generates a three-dimensional operation interface through which the user can accurately control virtual objects, improving the precision of operating distant virtual objects while reducing the difficulty and increasing the efficiency of operation, so that the user can control virtual objects more intuitively through gestures and obtain the best possible experience.
It should be clear that the present disclosure is not limited to the specific configurations and processing described in the above embodiments and shown in the figures. For convenience and brevity of description, detailed descriptions of known methods are omitted here, and for the specific working processes of the systems, modules and units described above, reference may be made to the corresponding processes in the foregoing method embodiments, which are not repeated here.
Fig. 15 is a structural diagram of an exemplary hardware architecture of a computing device capable of implementing the operation interface generation method or the operation interface control method according to the embodiments of the present disclosure.
As shown in Fig. 15, the computing device 1500 includes an input device 1501, an input interface 1502, a central processing unit 1503, a memory 1504, an output interface 1505 and an output device 1506. The input interface 1502, the central processing unit 1503, the memory 1504 and the output interface 1505 are connected to one another through a bus 1507, and the input device 1501 and the output device 1506 are connected to the bus 1507 through the input interface 1502 and the output interface 1505 respectively, and thereby to the other components of the computing device 1500.
Specifically, the input device 1501 receives input information from outside and transmits it to the central processing unit 1503 through the input interface 1502; the central processing unit 1503 processes the input information on the basis of computer-executable instructions stored in the memory 1504 to generate output information, stores the output information temporarily or permanently in the memory 1504, and then transmits it to the output device 1506 through the output interface 1505; the output device 1506 outputs the output information outside the computing device 1500 for the user's use.
In one embodiment, the computing device shown in Fig. 15 may be implemented as an electronic device that may include: a memory configured to store a program; and a processor configured to run the program stored in the memory, so as to perform any one of the operation interface generation methods described in the above embodiments, or any one of the operation interface control methods in the embodiments of the present disclosure.
In one embodiment, the computing device shown in Fig. 15 may be implemented as an operation interface generation system that may include: a memory configured to store a program; and a processor configured to run the program stored in the memory, so as to perform any one of the operation interface generation methods described in the above embodiments.
In one embodiment, the computing device shown in Fig. 15 may be implemented as an operation interface control system that may include: a memory configured to store a program; and a processor configured to run the program stored in the memory, so as to perform any one of the operation interface control methods described in the above embodiments.
The above are merely exemplary embodiments of the present disclosure and are not intended to limit its protection scope. In general, the various embodiments of the present disclosure may be implemented in hardware or special-purpose circuits, software, logic, or any combination thereof. For example, some aspects may be implemented in hardware while other aspects may be implemented in firmware or software executable by a controller, microprocessor or other computing device, although the present disclosure is not limited thereto.
Embodiments of the present disclosure may be implemented by a data processor of a mobile device executing computer program instructions, for example in a processor entity, or by hardware, or by a combination of software and hardware. The computer program instructions may be assembly instructions, instruction set architecture (ISA) instructions, machine instructions, machine-related instructions, microcode, firmware instructions, state-setting data, or source or object code written in any combination of one or more programming languages.
Any block diagram of a logic flow in the accompanying drawings of the present disclosure may represent program steps, or interconnected logic circuits, modules and functions, or a combination of program steps with logic circuits, modules and functions. Computer programs may be stored on a memory. The memory may be of any type suitable to the local technical environment and may be implemented using any suitable data storage technology, such as, but not limited to, read-only memory (ROM), random access memory (RAM), and optical memory devices and systems (digital versatile discs (DVDs) or CDs). Computer-readable media may include non-transitory storage media. The data processor may be of any type suitable to the local technical environment, such as, but not limited to, a general-purpose computer, a special-purpose computer, a microprocessor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a programmable logic device (FPGA), and a processor based on a multi-core processor architecture.
By way of exemplary and non-limiting examples, a detailed description of exemplary embodiments of the present disclosure has been provided above. Various modifications and adaptations of the above embodiments, considered in conjunction with the accompanying drawings and the claims, will be apparent to those skilled in the art without departing from the scope of the present disclosure. Accordingly, the proper scope of the present disclosure is to be determined according to the claims.

Claims (13)

  1. An operation interface generation method, the method comprising:
    acquiring movement information of an operating hand of a user;
    determining a three-dimensional movement path of the operating hand according to the movement information of the operating hand; and
    mapping the three-dimensional movement path of the operating hand onto a preset arc-surface mesh model to generate an operation interface, wherein the preset arc-surface mesh model is a model determined on the basis of a preset wiring manner and a preset angle.
  2. The method according to claim 1, wherein the determining a three-dimensional movement path of the operating hand according to the movement information of the operating hand comprises:
    determining boundary displacement information and boundary angle information according to the movement information of the operating hand; and
    determining the three-dimensional movement path of the operating hand according to the boundary displacement information and the boundary angle information.
  3. The method according to claim 2, wherein the movement information of the operating hand comprises: displacement information of a palm along a closed route, and/or displacement information of the palm over a preset number of waves.
  4. The method according to claim 1, wherein after the mapping the three-dimensional movement path of the operating hand onto a preset arc-surface mesh model to generate an operation interface, the method further comprises:
    acquiring movement information of a preset body part of the user upon determining that the preset body part of the user has moved;
    updating the movement information of the operating hand according to the movement information of the preset body part;
    updating the three-dimensional movement path of the operating hand on the basis of the updated movement information of the operating hand; and
    updating the operation interface according to the updated three-dimensional movement path of the operating hand.
  5. The method according to claim 1, wherein before the acquiring movement information of an operating hand of a user, the method further comprises:
    acquiring attribute information of the operating hand of the user, wherein the attribute information of the operating hand of the user comprises: arm attribute information of the user and position information of a shoulder joint of the user; and
    constructing the preset arc-surface mesh model according to the position information of the shoulder joint of the user and the arm attribute information of the user.
  6. The method according to claim 1, wherein before the acquiring movement information of an operating hand of a user, the method further comprises:
    acquiring attribute information of the operating hand of the user and visual range information of the user, wherein the attribute information of the operating hand of the user comprises: arm attribute information of the user and position information of a shoulder joint of the user;
    constructing a mesh sphere according to the position information of the shoulder joint of the user and the arm attribute information of the user, wherein the mesh sphere is a sphere determined on the basis of the preset wiring manner;
    determining the preset angle according to the visual range information of the user; and
    determining the preset arc-surface mesh model according to the mesh sphere and the preset angle.
  7. The method according to claim 6, wherein the preset wiring manner comprises: dividing and wiring the sphere on the basis of latitude and longitude lines, or dividing and wiring the sphere on the basis of polygons.
  8. The method according to any one of claims 1 to 7, wherein the operation interface comprises: at least one control area, the control area being used for displaying a plurality of control elements that the user can manipulate.
  9. An operation interface control method, wherein an operation interface is generated on the basis of the operation interface generation method according to any one of claims 1 to 8, the method comprising:
    acquiring operation information of a user in the operation interface; and
    determining, according to the operation information, a manner in which the user controls a virtual object.
  10. An operation interface generation apparatus, the apparatus comprising:
    a first acquisition module, configured to acquire movement information of an operating hand of a user;
    a path determination module, configured to determine a three-dimensional movement path of the operating hand according to the movement information of the operating hand; and
    a generation module, configured to map the three-dimensional movement path of the operating hand onto a preset arc-surface mesh model to generate an operation interface, wherein the preset arc-surface mesh model is a model determined on the basis of a preset wiring manner and a preset angle.
  11. An operation interface control apparatus, wherein an operation interface is generated on the basis of the operation interface generation method according to any one of claims 1 to 8, the apparatus comprising:
    a second acquisition module, configured to acquire operation information of a user in the operation interface; and
    a control module, configured to determine, according to the operation information, a manner in which the user controls a virtual object.
  12. An electronic device, comprising:
    one or more processors; and
    a memory having one or more programs stored thereon which, when executed by the one or more processors, cause the one or more processors to implement the operation interface generation method according to any one of claims 1 to 8, or the operation interface control method according to claim 9.
  13. A computer-readable storage medium storing a computer program which, when executed by a processor, implements the operation interface generation method according to any one of claims 1 to 8, or the operation interface control method according to claim 9.
PCT/CN2023/071748 2022-04-24 2023-01-10 Operation interface generation method, control method and apparatus WO2023207226A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210450377.1 2022-04-24
CN202210450377.1A CN116974435A (zh) 2022-04-24 2022-04-24 Operation interface generation method, control method and apparatus

Publications (1)

Publication Number Publication Date
WO2023207226A1 true WO2023207226A1 (zh) 2023-11-02

Family

ID=88480246

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/071748 WO2023207226A1 (zh) 2022-04-24 2023-01-10 Operation interface generation method, control method and apparatus

Country Status (2)

Country Link
CN (1) CN116974435A (zh)
WO (1) WO2023207226A1 (zh)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104951211A (zh) * 2014-03-24 2015-09-30 Lenovo (Beijing) Co., Ltd. Information processing method and electronic device
CN105843371A (zh) * 2015-01-13 2016-08-10 上海速盟信息技术有限公司 Human-machine mid-air interaction method and system
CN110488974A (zh) * 2014-03-21 2019-11-22 Samsung Electronics Co., Ltd. Method and wearable device for providing a virtual input interface
US20190377473A1 (en) * 2018-06-06 2019-12-12 Sony Interactive Entertainment Inc. VR Comfort Zones Used to Inform an In-VR GUI Editor
US20200183556A1 (en) * 2017-08-14 2020-06-11 Guohua Liu Interaction position determination method and system, storage medium and smart terminal
US20210081052A1 (en) * 2019-09-17 2021-03-18 Gaganpreet Singh User interface control based on elbow-anchored arm gestures


Also Published As

Publication number Publication date
CN116974435A (zh) 2023-10-31


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23794662

Country of ref document: EP

Kind code of ref document: A1