CN117008774A - Window control method, device, storage medium and electronic equipment

Info

Publication number
CN117008774A
Application number
CN202210974762.6A
Authority
CN (China)
Prior art keywords
target, control, window, virtual, gesture
Legal status
Pending
Other languages
Chinese (zh)
Inventors
蔡文琪, 陈维, 徐婧, 陈红凌
Current Assignee
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Application filed by Tencent Technology Shenzhen Co Ltd
Priority to CN202210974762.6A
Publication of CN117008774A

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/017: Gesture based interaction, e.g. based on a set of recognized hand gestures
    • G06F 3/048: Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0481: Interaction techniques based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F 3/04817: Interaction techniques using icons
    • G06F 3/0484: Interaction techniques for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F 3/04842: Selection of displayed objects or displayed text elements
    • G06F 3/04845: Interaction techniques for image manipulation, e.g. dragging, rotation, expansion or change of colour
    • G06F 3/0485: Scrolling or panning

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The application discloses a window control method and apparatus, a storage medium, and an electronic device. The method includes: displaying a virtual operation window and a target operation control in a virtual reality scene; triggering a target event for preparing to control the virtual operation window when a target gesture performed on the target operation control is acquired; and, while the target event is in the triggered state, adjusting a first spatial attribute of the target operation control in response to a control operation performed on the target operation control, and displaying a second spatial attribute of the virtual operation window changing with the first spatial attribute. The method can be applied to artificial intelligence scenarios and involves technologies such as computer vision. The application solves the technical problem of low window control efficiency.

Description

Window control method, device, storage medium and electronic equipment
Technical Field
The present application relates to the field of computers, and in particular, to a window control method, a window control apparatus, a storage medium, and an electronic device.
Background
With the rapid development of virtual reality technology, more operation problems have arisen. For example, for a virtual reality operation window in the related art, the control threshold of the window is high: a user must accurately find the virtual control button of the operation window in order to complete control of the operation window, which reduces the control efficiency of the operation window. That is, there is a problem of low window control efficiency.
In view of the above problems, no effective solution has been proposed at present.
Disclosure of Invention
The embodiments of the present application provide a window control method and apparatus, a storage medium, and an electronic device, so as to at least solve the technical problem of low window control efficiency.
According to an aspect of the embodiments of the present application, there is provided a window control method, including: displaying a virtual operation window and a target operation control in a virtual reality scene; triggering a target event for preparing to control the virtual operation window when a target gesture performed on the target operation control is acquired; and, while the target event is in the triggered state, adjusting a first spatial attribute of the target operation control in response to a control operation performed on the target operation control, and displaying a second spatial attribute of the virtual operation window changing with the first spatial attribute.
According to another aspect of the embodiments of the present application, there is also provided a window control apparatus, including: a first display unit, configured to display a virtual operation window and a target operation control in a virtual reality scene;
a first control unit, configured to trigger a target event for preparing to control the virtual operation window when a target gesture performed on the target operation control is acquired;
and a first adjusting unit, configured to, while the target event is in the triggered state, adjust a first spatial attribute of the target operation control in response to a control operation performed on the target operation control, and display a second spatial attribute of the virtual operation window changing with the first spatial attribute.
As an alternative, the apparatus includes:
a second display unit, configured to display a plurality of gesture detection points on the target operation control before the target event for preparing to control the virtual operation window is triggered, where the gesture detection points are used to detect a gesture performed on the target operation control;
and a first acquisition unit, configured to acquire the target gesture using the plurality of gesture detection points before the target event for preparing to control the virtual operation window is triggered.
As an alternative, the first obtaining unit includes:
a first acquisition module, configured to acquire first hover data detected at a first gesture detection point among the plurality of gesture detection points;
a second acquisition module, configured to acquire second hover data detected at a second gesture detection point among the plurality of gesture detection points;
a first processing module, configured to integrate the first hover data and the second hover data to obtain target hover data;
and a first determining module, configured to determine the hover gesture as the target gesture when the similarity between the hover gesture corresponding to the target hover data and a first preset gesture is greater than or equal to a first preset threshold.
As an alternative, the first adjusting unit includes at least one of:
the first adjusting module is used for adjusting the first position of the target operation control and displaying the second position of the virtual operation window to change along with the change of the first position;
the second adjusting module is used for adjusting the first form of the target operation control and displaying the second form of the virtual operation window to change along with the change of the first form.
As an alternative, the first adjusting unit further includes:
and a third acquisition module, configured to acquire, before the first position of the target operation control is adjusted and the second position of the virtual operation window is displayed changing with the first position, a movement operation triggered by the target operation control when the execution position of the target gesture changes, where the movement operation is used to adjust the first position.
As an optional solution, the third obtaining module further includes:
the first processing sub-module is used for reducing the display area of the virtual operation window when the distance between the execution position and the interface boundary of the display interface corresponding to the virtual reality scene is smaller than or equal to a second preset threshold value and larger than or equal to a third preset threshold value;
and the second processing sub-module is used for hiding or closing the virtual operation window under the condition that the distance between the execution position and the interface boundary is smaller than the third preset threshold value.
As an optional solution, the second adjusting module further includes:
a fourth acquisition module, configured to acquire, before the first form of the target operation control is adjusted and the second form of the virtual operation window is displayed changing with the first form, a first distribution position corresponding to a first part in the target gesture and a second distribution position corresponding to a second part in the target gesture;
and a fifth acquisition module, configured to acquire, before the first form of the target operation control is adjusted and the second form of the virtual operation window is displayed changing with the first form, a second zoom operation triggered by the target operation control when the relative distance between the first distribution position and the second distribution position changes, where the second zoom operation is used to adjust the first form.
As an alternative, the apparatus further includes:
a first display module, configured to display, before the target event for preparing to control the virtual operation window is triggered and in response to a first zoom operation performed on a window zoom control displayed in the virtual reality scene, a virtual slider and the virtual progress panel where the virtual slider is located in the virtual reality scene, where the second form is related to the slider position of the virtual slider in the virtual progress panel;
a sixth acquisition module, configured to acquire a slider gesture performed on the virtual slider before the target event for preparing to control the virtual operation window is triggered;
and a second determining module, configured to determine the slider gesture as the target gesture when, before the target event for preparing to control the virtual operation window is triggered, the similarity between the slider gesture and a second preset gesture is greater than or equal to a fourth preset threshold.
As an alternative, the apparatus further includes:
a seventh acquisition module, configured to acquire, before the first form of the target operation control is adjusted and the second form of the virtual operation window is displayed changing with the first form, a first zoom operation triggered by the target operation control when the slider position changes, where the first zoom operation is used to adjust the first form.
As an alternative, in the apparatus:
displaying the virtual operation window and the target operation control in the virtual reality scene includes: a second display module, configured to display target content associated with the virtual reality scene in the virtual operation window;
and in the process of adjusting the first spatial attribute of the target operation control in response to the control operation performed on the target operation control and displaying the second spatial attribute of the virtual operation window changing with the first spatial attribute, the apparatus further includes: a first hiding module, configured to hide the target content displayed in the virtual operation window.
As an alternative, after the target event for preparing to control the virtual operation window is triggered, the apparatus further includes at least one of: a third display module, configured to display first adjustment information of the first spatial attribute, where the first adjustment information is candidate information of control operations currently allowed to be performed on the first spatial attribute; and to display second adjustment information of the second spatial attribute, where the second adjustment information is candidate information of control operations currently allowed to be performed on the second spatial attribute;
and in the process of adjusting the first spatial attribute of the target operation control in response to the control operation performed on the target operation control and displaying the second spatial attribute of the virtual operation window changing with the first spatial attribute, the apparatus further includes: a fourth display module, configured to display first preview information of the first spatial attribute, where the first preview information is preview information of the first spatial attribute after adjustment according to the current control operation; and to display second preview information of the second spatial attribute, where the second preview information is preview information of the second spatial attribute after adjustment according to the current control operation.
According to yet another aspect of the embodiments of the present application, there is provided a computer program product or computer program including computer instructions stored in a computer-readable storage medium. A processor of a computer device reads the computer instructions from the computer-readable storage medium and executes them, causing the computer device to perform the window control method described above.
According to still another aspect of the embodiment of the present application, there is further provided an electronic device including a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor executes the window control method described above through the computer program.
In the embodiments of the present application, a virtual operation window and a target operation control are displayed in a virtual reality scene; a target event for preparing to control the virtual operation window is triggered when a target gesture performed on the target operation control is acquired; and while the target event is in the triggered state, a first spatial attribute of the target operation control is adjusted in response to a control operation performed on the target operation control, and a second spatial attribute of the virtual operation window is displayed changing with the first spatial attribute. Because a target operation control for gesture recognition is added below the virtual operation window, the user does not need to memorize different interface window operation modes for functions such as moving and zooming the window, and can control any window through general operations performed on the target operation control. This reduces the operation threshold of the target operation control, achieves the technical effect of improving window control efficiency, and solves the technical problem of low window control efficiency.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this specification, illustrate embodiments of the application and together with the description serve to explain the application and do not constitute a limitation on the application. In the drawings:
FIG. 1 is a schematic illustration of an application environment for an alternative window control method according to an embodiment of the application;
FIG. 2 is a schematic illustration of a flow of an alternative window control method according to an embodiment of the application;
FIG. 3 is a schematic diagram of an alternative window control method according to an embodiment of the application;
FIG. 4 is a schematic diagram of another alternative window control method according to an embodiment of the application;
FIG. 5 is a schematic diagram of another alternative window control method according to an embodiment of the application;
FIG. 6 is a schematic diagram of another alternative window control method according to an embodiment of the application;
FIG. 7 is a schematic diagram of another alternative window control method according to an embodiment of the application;
FIG. 8 is a schematic diagram of another alternative window control method according to an embodiment of the application;
FIG. 9 is a schematic diagram of another alternative window control method according to an embodiment of the application;
FIG. 10 is a schematic diagram of another alternative window control method according to an embodiment of the application;
FIG. 11 is a schematic diagram of another alternative window control method according to an embodiment of the application;
FIG. 12 is a schematic diagram of another alternative window control method according to an embodiment of the application;
FIG. 13 is a schematic diagram of another alternative window control method according to an embodiment of the application;
FIG. 14 is a schematic diagram of another alternative window control method according to an embodiment of the application;
FIG. 15 is a schematic diagram of another alternative window control method according to an embodiment of the application;
FIG. 16 is a schematic diagram of another alternative window control method according to an embodiment of the application;
FIG. 17 is a schematic diagram of an alternative window control device according to an embodiment of the application;
FIG. 18 is a schematic structural diagram of an alternative electronic device according to an embodiment of the present application.
Detailed Description
In order that those skilled in the art will better understand the present application, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the accompanying drawings. It is apparent that the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by those skilled in the art based on the embodiments of the present application without inventive effort shall fall within the scope of protection of the present application.
It should be noted that the terms "first," "second," and the like in the description and the claims of the present application and the above figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the application described herein may be implemented in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
For ease of understanding, the terms of the application are explained:
Artificial intelligence (AI) is the theory, method, technique, and application system that uses a digital computer or a machine controlled by a digital computer to simulate, extend, and expand human intelligence, perceive the environment, acquire knowledge, and use the knowledge to obtain optimal results. In other words, artificial intelligence is a comprehensive technology of computer science that attempts to understand the essence of intelligence and to produce a new kind of intelligent machine that can react in a manner similar to human intelligence. Artificial intelligence studies the design principles and implementation methods of various intelligent machines, so that the machines have the functions of perception, reasoning, and decision-making.
Artificial intelligence technology is a comprehensive discipline involving a wide range of fields, including both hardware-level and software-level technologies. Basic artificial intelligence technologies generally include technologies such as sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, big data processing, operation/interaction systems, and mechatronics. Artificial intelligence software technologies mainly include directions such as computer vision, speech processing, natural language processing, and machine learning/deep learning.
Computer Vision (CV) is a science that studies how to make a machine "see"; more specifically, it replaces human eyes with cameras and computers to recognize, track, and measure targets, and further performs graphic processing so that the computer produces images more suitable for human observation or for transmission to instruments for detection. As a scientific discipline, computer vision studies related theories and technologies in an attempt to build artificial intelligence systems that can acquire information from images or multidimensional data. Computer vision technologies typically include image processing, image recognition, image semantic understanding, image retrieval, OCR, video processing, video semantic understanding, video content/behavior recognition, three-dimensional object reconstruction, 3D technology, virtual reality, augmented reality, and simultaneous localization and mapping, as well as common biometric technologies such as face recognition and fingerprint recognition.
With the research and advancement of artificial intelligence technology, artificial intelligence has been researched and applied in many fields, such as smart homes, smart wearable devices, virtual assistants, smart speakers, smart marketing, unmanned driving, autonomous driving, unmanned aerial vehicles, robots, smart healthcare, and smart customer service. It is believed that with the development of technology, artificial intelligence will be applied in more fields and play an increasingly important role.
The solutions provided in the embodiments of the present application involve artificial intelligence technologies such as computer vision, and are specifically described through the following embodiments:
According to an aspect of the embodiments of the present application, a window control method is provided. Optionally, as an alternative implementation, the window control method may be applied, but is not limited to, in the environment shown in fig. 1. The environment may include, but is not limited to, a user device 102 and a server 112; the user device 102 may include, but is not limited to, a display 104, a processor 106, and a memory 108, and the server 112 includes a database 114 and a processing engine 116.
The specific process comprises the following steps:
Step S102: the user device 102 acquires gesture information of the target operation control 1004 on the virtual operation window 1002;
Steps S104-S106: the gesture information of the target operation control 1004 is sent to the server 112 through the network 110;
Step S108: the server 112 determines a target gesture from the gesture information of the target operation control 1004 through the processing engine 116;
Steps S110-S112: the target gesture is sent to the user device 102 through the network 110; the user device 102 determines a first spatial attribute of the target operation control through the processor 106, adjusts a second spatial attribute of the virtual operation window accordingly and displays it on the display 104, and stores the first spatial attribute and the second spatial attribute in the memory 108.
In addition to the example shown in fig. 1, the above steps may be performed by the client or the server independently, or by the client and the server cooperatively, for example, with the user device 102 performing the above step S108, thereby relieving the processing pressure on the server 112. The user device 102 includes, but is not limited to, a handheld device (e.g., a mobile phone), a notebook computer, a desktop computer, a vehicle-mounted device, and the like; the application does not limit the specific implementation of the user device 102.
Optionally, as an alternative embodiment, as shown in fig. 2, the window control method includes:
S202, displaying a virtual operation window and a target operation control in a virtual reality scene;
S204, triggering a target event for preparing to control the virtual operation window when a target gesture performed on the target operation control is acquired;
S206, while the target event is in the triggered state, adjusting a first spatial attribute of the target operation control in response to a control operation performed on the target operation control, and displaying a second spatial attribute of the virtual operation window changing with the first spatial attribute.
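Read together, steps S202-S206 describe an indirect control loop: the window itself is never manipulated directly; a control operation changes a spatial attribute of the target operation control, and the window's spatial attribute is derived from it, gated by the target event. The following Python sketch is for illustration only and is not part of the disclosed embodiments; all class and method names are assumptions.

```python
class WindowController:
    """Sketch of steps S202-S206: a target event gates control of the window."""

    def __init__(self, window, control):
        self.window = window            # the virtual operation window
        self.control = control          # the target operation control
        self.target_event_triggered = False

    def on_gesture(self, gesture, is_target_gesture):
        # S204: trigger the target event once a target gesture is acquired
        if is_target_gesture(gesture):
            self.target_event_triggered = True

    def on_control_operation(self, new_attribute):
        # S206: while the target event is triggered, adjust the control's
        # first spatial attribute; the window's second spatial attribute follows
        if not self.target_event_triggered:
            return
        self.control.spatial_attribute = new_attribute
        self.window.spatial_attribute = self.derive(new_attribute)

    @staticmethod
    def derive(control_attribute):
        # Illustrative mapping: the window mirrors the control's change
        return control_attribute
```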
The window control method can be applied to virtual reality scenarios such as medical, game, and education scenarios. Existing operation windows in virtual reality have different control modes because of the different scenarios to which they are applied, for example, different modes for moving or zooming the homepage display, so the user has to memorize different operation modes and the operation threshold is high. In zoom mode, it is difficult for the user to find the movement hot zone and to learn how to perform two-handed pinch-zoom operations. There is therefore a technical problem of low window control efficiency.
Optionally, in this embodiment, in step S202, the virtual reality scene may be, but is not limited to, a real scene simulated by a computer three-dimensional model. As shown in fig. 3, the virtual operation window may be, but is not limited to, a closable panel for providing user-selectable options or displaying information in the virtual reality scene, and may be composed of a window title area 302, a window content area 304, and a window operation area 306, where the window title area 302 displays the name of the application; the window content area 304 displays the core content of the application; and the window operation area 306 provides operation options for the entire window, such as closing, zooming, and moving target operation controls. The target operation control 308 may be, but is not limited to, a 3D pinch sphere displaying a concave arc surface to guide the player to use a body part to accurately control the target operation control 308. The position of the target operation control 308 is by default at the edge of the virtual operation window, but is not limited thereto, and may be dragged to a custom placement according to the user's habits or different application scenarios.
Optionally, in this embodiment, in step S204, the target gesture may be, but is not limited to, determined as follows: when the similarity between a preset gesture and an acquired gesture is greater than a certain threshold, the acquired gesture is determined to be the target gesture. The gesture may be acquired by one or more cameras, the acquired data preprocessed, and features such as the sine and cosine of the initial angle of the input gesture and the total length of the gesture extracted with a rule-based algorithm for accurate recognition. Gesture recognition is not limited to hand features such as fingers, palms, and fists, and may also include other body parts such as arms, legs, and feet. The target event may be, but is not limited to, an event that allows the virtual operation window to be controlled directly, or an event that allows the virtual operation window to be controlled through the target operation control.
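For illustration only, the rule described above (feature extraction followed by a similarity threshold) might be sketched as follows; the 2-D sample points, the cosine-similarity measure, and the 0.9 threshold are assumptions, not part of the disclosure.

```python
import math

def gesture_features(points):
    """Features named in the description: sine/cosine of the initial angle
    and the total length of the gesture path."""
    (x0, y0), (x1, y1) = points[0], points[1]
    angle = math.atan2(y1 - y0, x1 - x0)
    total_length = sum(math.dist(points[i], points[i + 1])
                       for i in range(len(points) - 1))
    return [math.sin(angle), math.cos(angle), total_length]

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

def is_target_gesture(candidate, preset, threshold=0.9):
    """A gesture is the target gesture when its similarity to the preset
    gesture meets the threshold (value illustrative)."""
    return cosine_similarity(gesture_features(candidate),
                             gesture_features(preset)) >= threshold
```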
Optionally, in this embodiment, in step S206, the first spatial attribute may be, but is not limited to, an attribute of moving, dragging, deforming, zooming, rotating, etc. the target control; the second spatial attribute may be, but is not limited to, a spatial attribute such as movement, enlargement, reduction, morphology, transparency, and the like of the window according to deformation such as movement, dragging, and the like of the target control.
Optionally, in this embodiment, a single virtual operation window may be displayed, or multiple virtual operation windows may be displayed simultaneously. When a user opens multiple virtual windows at the same time, a preset target gesture may trigger merging the multiple virtual operation windows into one virtual operation window and merging the multiple virtual operation controls into one virtual operation control, so that the user controls the multiple virtual operation windows through a single virtual operation control.
Further, as shown in fig. 4, after opening the first virtual operation window 402, user A opens the second virtual operation window 404. The preset gesture for merging windows is grabbing any one of the virtual operation controls 406 with five fingers for 3 seconds, which triggers merging of the first virtual operation window 402 and the second virtual operation window 404. The two windows are merged into a target virtual operation window and displayed as a first sub-window 502 and a second sub-window 504, as shown in fig. 5; at the same time, the virtual operation controls are merged to generate a sliding area 506 for a target virtual operation control 508. When the user grabs the target virtual operation control 508 with five fingers and slides it to area A corresponding to the first sub-window 502, the user's operation authority over the first sub-window 502 is opened and the operation authority over the second sub-window 504 is closed.
For further illustration, taking an education scenario as an example: student M first opens a first window to play classroom content in the virtual reality scene. When studying a knowledge point, the student opens a second window to review related material and performs a merging operation on the opened first and second windows. As shown in fig. 6, a first sub-window 602, a second sub-window 604, and a target virtual operation control 606 are displayed in the merged target virtual window. When student M reviews the related material, the student grabs and slides the target operation control 606 to area B corresponding to the second sub-window, and after completing the review grabs and slides the target operation control 606 back to area A corresponding to the first sub-window to continue the classroom content.
It should be noted that, in virtual reality scenarios, the user's operable area is continuously enriched and developed. When a user opens multiple windows at the same time, the windows may correspond to different operation interfaces, which creates a certain operation difficulty: the user may need to use the left and right hands to control different windows simultaneously, and the multiple open windows also occupy space, easily giving the player an unsmooth interaction experience.
Optionally, in this embodiment, gesture actions may be combined with hold duration to increase the diversity of control means for the target operation control: a multi-functional floating-window mode is opened when the virtual operation control is held with a preset gesture for a long time. As shown in fig. 7(a), for example, when the user uses the thumb and index finger to hold the target virtual operation control 702 still for 5 seconds, the multi-functional floating ball 704 mode is opened. The functions of the multi-functional floating ball 704 may include, but are not limited to: recording, splitting, rotating, locking, and the like; the user may also, but is not limited to, customize the functions in the multi-functional floating ball 704.
By way of further illustration, as shown in fig. 8(a), user A opens a virtual reality scene in which a virtual operation window 802 and, below it, a target operation control 804 are displayed. The user pinches the target operation control 804 with the index finger and thumb and moves it 5 cm to the left. When the pinch action is detected to reach 90 percent similarity with the preset window-moving gesture, the control operation of moving the target operation control 804 to the left is determined, and as the movement attribute of the target operation control 804 changes, the virtual operation window 802 follows the change; as shown in fig. 8(b), the virtual operation window 802 moves to the left.
It should be noted that the user controls the target operation control to indirectly control the virtual operation window, which overcomes the technical problems of a high operation threshold and low control efficiency when directly controlling the virtual operation window. For example, the zoom operation of interface A enlarges the window by dragging the interface frame with a single finger, while interface B requires a two-handed opening-and-closing action to enlarge the window; the user therefore easily performs a misoperation or can only control the windows one by one, which raises the threshold of the control operation.
Through the embodiments of the present application, a virtual operation window and a target operation control are displayed in a virtual reality scene; a target event for preparing to control the virtual operation window is triggered when a target gesture performed on the target operation control is acquired; and while the target event is in the triggered state, a first spatial attribute of the target operation control is adjusted in response to a control operation performed on the target operation control, and a second spatial attribute of the virtual operation window is displayed changing with the first spatial attribute. Because a target operation control for gesture recognition is added below the virtual operation window, the user does not need to memorize different interface window operation modes for functions such as moving and zooming the window, and can control any window through general operations performed on the target operation control, thereby reducing the operation threshold of the target operation control and achieving the technical effect of improving window control efficiency.
As an alternative, before triggering the target event for preparing to control the virtual operation window, the method further comprises:
displaying a plurality of gesture detection points on the target operation control, wherein the gesture detection points are used for detecting gestures executed on the target operation control;
and acquiring the target gesture by utilizing the plurality of gesture detection points.
Optionally, in this embodiment, a gesture detection point is a position point on the target operation control used for recognizing the target gesture. The detection points may, but are not limited to, guide the user to pinch with the index finger and thumb, and may also, but are not limited to, recognize various key body parts, such as arms and legs.
According to the embodiment provided by the application, a plurality of gesture detection points are displayed on the target operation control, where the gesture detection points are used to detect gestures performed on the target operation control, and the target gesture is acquired using the plurality of gesture detection points, thereby improving the accuracy of gesture detection and achieving the technical effect of improving the accuracy of controlling the target operation control.
As an alternative, acquiring the target gesture using the plurality of gesture detection points includes:
S1, acquiring first hover data detected at a first gesture detection point among the plurality of gesture detection points;
S2, acquiring second hover data detected at a second gesture detection point among the plurality of gesture detection points;
S3, integrating the first hover data and the second hover data to obtain target hover data;
S4, determining the hover gesture as the target gesture when the similarity between the hover gesture corresponding to the target hover data and the first preset gesture is greater than or equal to the first preset threshold.
Optionally, in this embodiment, the first gesture detection point may be, but is not limited to, a representative point among the plurality of gesture detection points, such as an endpoint or a turning point; the second gesture detection point may be, but is not limited to, a detection point different from the first gesture detection point. The first hover data may be, but is not limited to, data obtained after the first gesture detection point is recognized in space by devices such as sensors or infrared devices, and the second hover data may be, but is not limited to, data obtained after the second gesture detection point is similarly recognized.
Optionally, in this embodiment, the first preset threshold is set according to the strictness of the application scenario, and the first hover data and the second hover data are integrated. The integration may, but is not limited to, map the data to coordinate axes or label it with numbers to improve accuracy, and may, but is not limited to, combine historical gestures with the recognized key points to determine the hover gesture corresponding to the hover data.
It should be noted that, by acquiring the hover data corresponding to the gesture detection points and integrating it, whether the gesture corresponding to the integrated data is the target gesture is determined according to the similarity, which improves the accuracy of gesture recognition.
Further illustratively, as shown in fig. 9, the acquired hover data is optionally mapped to coordinate axes and labeled with dedicated numbers; the first hover data 902 and the second hover data 904 are integrated to obtain a changed key point 906, and, in combination with a data point 908 corresponding to a historical gesture, the hover gesture is determined to be the target gesture 910.
According to the embodiment provided by the application, first hover data detected at a first gesture detection point among the plurality of gesture detection points is acquired; second hover data detected at a second gesture detection point is acquired; the first hover data and the second hover data are integrated to obtain target hover data; and when the similarity between the hover gesture corresponding to the target hover data and the first preset gesture is greater than or equal to the first preset threshold, the hover gesture is determined to be the target gesture. This improves the accurate extraction of gesture detection points, thereby achieving the technical effect of improving the accuracy of target gesture recognition.
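A minimal sketch of steps S1-S4, assuming the hover data are timestamped 3-D samples and reusing a similarity function such as the one sketched earlier; all names and the ordering rule are assumptions, not part of the disclosure.

```python
def integrate(first_hover, second_hover):
    """S3: merge the hover data detected at the two gesture detection points
    into one time-ordered sequence of key points (the target hover data)."""
    merged = first_hover + second_hover          # each item: (t, x, y, z)
    merged.sort(key=lambda sample: sample[0])    # order by timestamp
    return [sample[1:] for sample in merged]     # keep the spatial part

def hover_target_gesture(first_hover, second_hover, preset, similarity,
                         first_preset_threshold=0.9):
    """S4: the hover gesture is the target gesture when its similarity to the
    first preset gesture reaches the first preset threshold."""
    hover_gesture = integrate(first_hover, second_hover)
    if similarity(hover_gesture, preset) >= first_preset_threshold:
        return hover_gesture
    return None
```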
As an alternative, adjusting the first spatial attribute of the target operation control, and displaying the second spatial attribute of the virtual operation window as the first spatial attribute changes, including at least one of:
s1, adjusting a first position of a target operation control, and displaying a second position of a virtual operation window to change along with the change of the first position;
s2, adjusting a first form of the target operation control, and displaying a second form of the virtual operation window to change along with the change of the first form.
Optionally, in this embodiment, the first position may be, but is not limited to, the coordinate of the target operation control in the coordinate space of the virtual reality scene, and the second position may be, but is not limited to, the coordinate of the virtual operation window, which changes with the first coordinate.
Optionally, in this embodiment, the first form may be, but is not limited to, the appearance of the target operation control as adjusted by a zoom operation, such as a change in size or shape, and the second form may be, but is not limited to, the size of the virtual operation window, and the like.
According to the embodiment provided by the application, the first position of the target operation control is adjusted, and the second position of the virtual operation window is displayed changing with the first position; the first form of the target operation control is adjusted, and the second form of the virtual operation window is displayed changing with the first form. This achieves the purpose of changing the position and form of the target virtual window through the target operation control, thereby achieving the technical effect of guiding the user to recognize, move, and zoom the target virtual window.
As an alternative, before adjusting the first position of the target operation control and displaying the second position of the virtual operation window with the change of the first position, the method further includes:
and when the execution position of the target gesture changes, acquiring a movement operation triggered by the target operation control, wherein the movement operation is used for adjusting the first position.
Optionally, in this embodiment, the movement operation may be, but is not limited to, dragging the target operation control to move in any direction and at any angle, including up, down, left, and right.
The user controls the target operation control to indirectly control the movement of the window, thereby achieving the technical effect of controlling different virtual operation windows with a unified operation and improving the control efficiency of the virtual operation window.
By way of further illustration, as shown in fig. 10, a gesture 1002-1 performed by the user on the target operation control is detected, and because its similarity to the preset gesture 1002-2 is greater than the preset threshold, it is determined to be a movement gesture 1004. The user moves the target operation control 1006 to the left, and the virtual operation window 1008 moves to the left along with the target operation control 1006, while the relative positions of the virtual operation window 1008 and the target operation control 1006 remain unchanged.
According to the embodiment of the application, when the execution position of the target gesture changes, the movement operation triggered by the target operation control is acquired, where the movement operation is used to adjust the first position, thereby achieving the purpose of controlling the movement of the virtual operation window through the target operation control and the technical effect of improving the control efficiency of the virtual operation window.
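As the fig. 10 example notes, the relative positions of the window and the control remain unchanged during the move. A sketch of that follow behavior under the assumption of simple 2-D positions (all names are illustrative):

```python
class FollowMove:
    """Keep the window at a fixed offset from the target operation control."""

    def __init__(self, control_pos, window_pos):
        # The offset is captured when the target event is triggered
        self.offset = (window_pos[0] - control_pos[0],
                       window_pos[1] - control_pos[1])

    def window_position(self, control_pos):
        """Second position follows the first position; the offset is unchanged."""
        return (control_pos[0] + self.offset[0],
                control_pos[1] + self.offset[1])

# Usage: moving the control 5 cm to the left moves the window the same way.
follow = FollowMove(control_pos=(0.0, 0.0), window_pos=(0.0, 0.3))
print(follow.window_position((-0.05, 0.0)))  # -> (-0.05, 0.3)
```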
As an alternative, in the process of adjusting the first position of the target operation control and displaying the second position of the virtual operation window along with the change of the first position, the method further comprises:
reducing the display area of the virtual operation window under the condition that the distance between the execution position and the interface boundary of the display interface corresponding to the virtual reality scene is smaller than or equal to a second preset threshold value and larger than or equal to a third preset threshold value;
and hiding or closing the virtual operation window under the condition that the distance between the execution position and the interface boundary is smaller than a third preset threshold value.
Optionally, in this embodiment, the second preset threshold may be, but is not limited to, the relative distance between the target operation control and the interface boundary when the boundary of the virtual operation window reaches the interface boundary; the third preset threshold may be, but is not limited to, the relative distance between the target operation control and the interface boundary when the virtual operation window has been reduced to the minimum-area window.
When the user moves the window to the edge area, the window adaptively changes according to the relative distance between the target operation control and the interface boundary, shrinking and then hiding, so that more space is reserved for other content that needs to be displayed. A shortcut for hiding or closing the window can thus be performed directly through the movement operation, improving the flexibility of the target operation control and the diversity of the control means.
Optionally, in this embodiment, the user's viewing angle moves in a direction opposite to the moving direction of the window when moving the window, thereby reducing the hand swing amplitude and improving movement efficiency.
Further illustratively, as shown in fig. 11(a), when the user moves the virtual operation window 1104 to the left through the target operation control 1102 and the boundary of the virtual operation window 1104 is detected to overlap the interface boundary 1106, if the user is still detected operating the target operation control 1102 toward the boundary, the area of the virtual operation window is optionally reduced, as shown in fig. 11(b); when the area of the virtual operation window 1104 reaches its minimum, the virtual operation window 1104 is closed or hidden.
According to the embodiment provided by the application, when the distance between the execution position and the interface boundary of the display interface corresponding to the virtual reality scene is smaller than or equal to the second preset threshold and larger than or equal to the third preset threshold, the display area of the virtual operation window is reduced; when the distance between the execution position and the interface boundary is smaller than the third preset threshold, the virtual operation window is hidden or closed. This reduces the area occupied by the window and improves interface display efficiency, achieving the technical effect of improving the flexibility and diversity of the control means.
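The two thresholds can be read as bands around the interface boundary: between the third and second preset thresholds the window shrinks, and below the third it is hidden or closed. A hedged sketch follows; the threshold values, the proportional shrink rule, and the window interface (hide, set_display_area, full_area) are assumptions.

```python
def adjust_window_at_edge(distance_to_boundary, window,
                          second_preset=0.30, third_preset=0.10):
    """Shrink, then hide/close the window as the control nears the boundary."""
    if distance_to_boundary < third_preset:
        window.hide()                      # or window.close()
    elif distance_to_boundary <= second_preset:
        # Shrink proportionally within the [third, second] threshold band
        scale = ((distance_to_boundary - third_preset)
                 / (second_preset - third_preset))
        window.set_display_area(window.full_area * scale)
    else:
        window.set_display_area(window.full_area)
```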
As an alternative, before adjusting the first form of the target operation control and displaying the second form of the virtual operation window changing with the first form, the method further includes:
acquiring a first distribution position corresponding to a first part in a target gesture and a second distribution position corresponding to a second part in the target gesture;
and when the relative distance between the first distribution position and the second distribution position changes, acquiring a second zoom operation triggered by the target operation control, wherein the second zoom operation is used for adjusting the first form.
Optionally, in this embodiment, the first part may be, but is not limited to, any body part in the target gesture, and the second part may be, but is not limited to, a body part in the target gesture different from the first part; the first distribution position may be, but is not limited to, a position on the target operation control triggered by the first part, and the second distribution position may be, but is not limited to, a position on the target operation control, different from the first distribution position, triggered by the second part.
Optionally, in this embodiment, the first form may be, but is not limited to, the form of the target operation control as deformed by the body part, including size, shape, and the like; the first zoom operation is an operation instruction triggered by the target operation control for zooming the window; and the second zoom operation is a gesture instruction initiated by the user for controlling the target operation control.
The method and the device realize enlarging and shrinking the window by controlling the form change of the target operation control, which reduces the difficulty of directly enlarging or shrinking the window and improves control efficiency.
By way of further example, a first distribution position corresponding to the user's index finger and a second distribution position corresponding to the user's thumb are optionally obtained. When the index finger and thumb perform a pinching action, a change in the relative distance between the first distribution position and the second distribution position is detected, the zoom operation triggered by the target operation control is acquired, and the window is zoomed.
According to the embodiment provided by the application, the first distribution position corresponding to the first part in the target gesture and the second distribution position corresponding to the second part in the target gesture are obtained; when the relative distance between the first distribution position and the second distribution position changes, the second zoom operation triggered by the target operation control is acquired, where the second zoom operation is used to adjust the first form. This reduces the difficulty of the user directly enlarging or shrinking the window, thereby achieving the technical effect of improving control efficiency.
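A sketch of the pinch-zoom rule described above: the scale factor follows the change in relative distance between the two distribution positions. The linear mapping is an assumption, not part of the disclosure.

```python
import math

def pinch_scale(first_pos, second_pos, initial_distance):
    """Scale factor derived from how far apart the first part (e.g. index
    finger) and second part (e.g. thumb) have moved since the pinch began."""
    current = math.dist(first_pos, second_pos)
    return current / initial_distance   # >1 enlarges, <1 shrinks

# Usage: fingers start 4 cm apart and spread to 6 cm -> scale 1.5
print(pinch_scale((0.00, 0.0), (0.06, 0.0), initial_distance=0.04))
```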
As an alternative, before triggering the target event for preparing to control the virtual operation window, the method further includes:
in response to a first zoom operation performed on a window zoom control displayed in the virtual reality scene, displaying a virtual slider and the virtual progress panel where the virtual slider is located in the virtual reality scene, wherein the second form is related to the slider position of the virtual slider in the virtual progress panel;
acquiring a slider gesture executed on the virtual slider;
and determining the slider gesture as a target gesture when the similarity between the slider gesture and the second preset gesture is greater than or equal to a fourth preset threshold.
Optionally, in this embodiment, the virtual slider is a slidable component displayed at the window, which may be, but is not limited to being, placed on the virtual progress panel; the zoom degree and zoom ratio are determined according to the sliding direction and distance. The virtual progress panel may be, but is not limited to, the track for the virtual slider, and may, but is not limited to, display the zoomable scale progress, and the like. The fourth preset threshold may be, but is not limited to, a value for the similarity between the slider gesture and the preset gesture.
According to the embodiment of the application, in response to a first zoom operation performed on the window zoom control displayed in the virtual reality scene, the virtual slider and the virtual progress panel where it is located are displayed in the virtual reality scene, where the second form is related to the slider position of the virtual slider in the virtual progress panel; a slider gesture performed on the virtual slider is acquired; and when the similarity between the slider gesture and the second preset gesture is greater than or equal to a fourth preset threshold, the slider gesture is determined to be the target gesture. Sliding the virtual slider on the virtual progress panel thus enlarges or reduces the window, achieving the technical effect of improving the efficiency of controlling the enlargement and reduction of the virtual operation window.
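As a hedged sketch of how the second form could be related to the slider position (the linear mapping and the 0.5 to 1.5 scale range, taken from the options later shown in fig. 12, are assumptions):

```python
def form_from_slider(slider_value, min_scale=0.5, max_scale=1.5):
    """Linearly relate the slider position in the virtual progress panel
    (0.0 = left end, 1.0 = right end) to the window's scale factor."""
    return min_scale + slider_value * (max_scale - min_scale)
```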
As an alternative, before adjusting the first aspect of the target operation control and displaying the second aspect of the virtual operation window with the change of the first aspect, the method further includes:
and when the position of the sliding block is changed, acquiring a first scaling operation triggered by the target operation control, wherein the first scaling operation is used for adjusting the first form.
It should be noted that zooming a window is a relatively high-threshold operation; the embodiment of the application achieves window zooming through preset zoom-scale parameters and pinching the slider, which lowers the difficulty and selection threshold of the zoom operation and improves the user's interaction efficiency in the virtual reality scene.
By way of further illustration, as shown in (a) of fig. 12, upon detecting that the user performs a zoom gesture on the target virtual operation control 1202, it is determined that the user enters a zoom mode. The user then pinches the virtual slider 1206 on the virtual progress panel 1204 with thumb and index finger and moves it left and right to select a zoom scale; as shown in (b) of fig. 12, options of 50%, 75%, 100%, 125% and 150% are displayed on the virtual progress panel 1204, and the virtual operation window scales proportionally with the adjusted scale. When the virtual slider 1206 slides to the target zoom scale, in response to a click operation performed on the confirm control 1208, the zoom mode is exited and the display returns to (a) of fig. 12.
According to the embodiment of the application, when the position of the sliding block is changed, the first scaling operation triggered by the target operation control is obtained, wherein the first scaling operation is used for adjusting the first form, so that the purpose of displaying the scaling progress of the scaling operation is achieved, and the technical effect of improving the scaling display efficiency is achieved.
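A small sketch of snapping a continuous slider value to the discrete zoom scales shown on the virtual progress panel 1204; only the option list comes from the text, the snapping rule itself is an assumption.

```python
ZOOM_OPTIONS = (0.50, 0.75, 1.00, 1.25, 1.50)  # options shown on panel 1204

def snap_zoom(raw_scale):
    """Snap a continuous scale value to the nearest panel option."""
    return min(ZOOM_OPTIONS, key=lambda option: abs(option - raw_scale))
```

For example, combined with the slider mapping sketched earlier, snap_zoom(form_from_slider(0.7)) yields 1.25, i.e. the 125% option.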
As an alternative, the above method is characterized in that:
displaying a virtual operation window and a target operation control in a virtual reality scene, comprising: displaying target content associated with the virtual reality scene in the virtual operation window;
in the process of, in response to a control operation performed on the target operation control, adjusting the first spatial attribute of the target operation control and displaying the second spatial attribute of the virtual operation window as a function of the first spatial attribute, the method further includes: hiding the target content displayed in the virtual operation window.
It should be noted that by hiding the target content displayed in the virtual operation window during the move, the position and rendered content of the target content need not be computed for every frame in which it would follow the virtual operation window, which reduces rendering workload and CPU load.
Further by way of example, optionally, upon detecting that the user performs a leftward-moving target gesture on the target operation control and triggers a movement operation of the target operation control, the window moves left following the control. During the leftward move, the target content associated with the virtual scene displayed in the virtual operation window is hidden, i.e., the window content disappears and the window becomes a transparent gray panel; when the final position is determined, the hiding is canceled and the target content is displayed in the virtual operation window again.
According to the embodiment provided by the application, the target content associated with the virtual reality scene is displayed in the virtual operation window, and while the first spatial attribute of the target operation control is adjusted and the second spatial attribute of the virtual operation window is displayed changing with it in response to the control operation, the target content displayed in the virtual operation window is hidden. The window content thus disappears into a transparent gray panel during the move, achieving the technical effect of reducing unnecessary rendering workload during movement.
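An illustrative sketch of this render-saving behaviour (the class and method names are assumptions): content rendering is suppressed while the window follows the control and restored once at the final position.

```python
class VirtualWindow:
    """Window that shows only a transparent gray panel while moving."""

    def __init__(self, position):
        self.position = position
        self.content_visible = True

    def begin_move(self):
        self.content_visible = False   # content disappears; gray panel remains

    def move_to(self, position):
        self.position = position       # only the cheap panel follows each frame

    def end_move(self):
        self.content_visible = True    # content is rendered once, at the final spot
```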
As an alternative, after triggering the target event ready to control the virtual operation window, the method further comprises at least one of: displaying first adjustment information of the first spatial attribute, wherein the first adjustment information is candidate information of a control operation which is currently allowed to be executed by the first spatial attribute; displaying second adjustment information of a second spatial attribute, wherein the second adjustment information is candidate information of a control operation which is currently allowed to be executed by the second spatial attribute;
in response to a control operation performed on the target operation control, adjusting a first spatial attribute of the target operation control and displaying a second spatial attribute of the virtual operation window as a function of the first spatial attribute, the method further comprises: displaying first preview information of the first spatial attribute, wherein the first preview information is preview information of the first spatial attribute after adjustment according to the current control operation; and displaying second preview information of the second spatial attribute, wherein the second preview information is preview information of the second spatial attribute after adjustment according to the current control operation.
Alternatively, in the present embodiment, the first adjustment information may be, but is not limited to, candidate information for a control operation that the first spatial attribute is currently allowed to perform, and the second adjustment information may be, but is not limited to, candidate information for a control operation that the second spatial attribute is currently allowed to perform; candidate information may include, but is not limited to, directions and distances in which the target operational control may undergo positional movement and morphological changes.
During the move, the movable directions and distances are displayed on the target virtual operation control, which prevents the user from dragging the window out of the usable interface; showing the controllable range numerically on the window gives the user good feedback, improving the user's control efficiency and interactive experience in the virtual reality scene.
Further by way of example, as shown in (a) of fig. 13, the user may pinch the target operation control 1302 with the index finger and thumb, at which point the content of the virtual operation window 1304 disappears into a transparent gray panel, and the movable directions and distances are displayed on the target operation control 1302; the moving distance may be displayed on the digital label 1306, and absent special restrictions the window can be moved along all of the x, y and z axes. When the window is moved to the maximum boundary value, the digital label 1306 turns red, as in (b) of fig. 13, and the corresponding direction is shown as a red transparent frame boundary 1308.
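The per-axis boundary feedback of fig. 13 could be sketched as follows; the label object with text and color attributes is an assumed stand-in for the digital label 1306.

```python
def move_axis_with_feedback(value, min_v, max_v, label):
    """Clamp one axis value to its movable range and turn the digital
    label red when the maximum boundary value is reached."""
    clamped = min(max(value, min_v), max_v)
    label.text = f"{clamped:.2f}"
    label.color = "red" if clamped in (min_v, max_v) else "default"
    return clamped
```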
Optionally, in this embodiment, the first preview information may be, but is not limited to, preview information of the first spatial attribute after adjustment according to the current control operation, and the second preview information may be, but is not limited to, preview information of the second spatial attribute after adjustment according to the current control operation; the preview information may, but is not limited to, be displayed as a preview of the entire interface in a small area of the virtual operation window, and may include the form and position of the virtual operation window after the adjustment is completed.
It should be noted that displaying the preview information lets the user clearly preview the adjustment process during the control operation, makes it easy to spot and correct any misoperation in time, and makes it convenient to check whether the adjustment result meets the user's needs. In addition, the preview interface can assist in adjusting the display order and layout of windows, and the user can customize the placement of the virtual operation control as needed, improving the accuracy of the control effect.
Further by way of example, as shown in fig. 14, alternatively, preview information 1404 of the entire interface is displayed in an area at an edge of the virtual operation window 1402, and when the user controls the virtual operation control 1406, the corresponding virtual operation window 1402 changes accordingly, and the change process is displayed on the preview information 1404 of the interface.
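A minimal sketch of keeping the edge preview of fig. 14 in sync (the attribute names and the fixed preview scale are assumptions):

```python
def sync_preview(window, preview, scale=0.1):
    """Mirror the window's current position and size onto the small
    preview area so the change process is visible there."""
    preview.position = tuple(coordinate * scale for coordinate in window.position)
    preview.size = tuple(dimension * scale for dimension in window.size)
```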
According to the embodiment of the application, the first adjustment information of the first spatial attribute is displayed, wherein the first adjustment information is candidate information of the control operation which is allowed to be executed currently by the first spatial attribute; displaying second adjustment information of a second spatial attribute, wherein the second adjustment information is candidate information of a control operation which is currently allowed to be executed by the second spatial attribute; in response to a control operation performed on the target operation control, adjusting a first spatial attribute of the target operation control and displaying a second spatial attribute of the virtual operation window as a function of the first spatial attribute, the method further comprises: displaying first preview information of the first spatial attribute, wherein the first preview information is preview information of the first spatial attribute after adjustment according to the current control operation; and displaying second preview information of the second spatial attribute, wherein the second preview information is preview information of the second spatial attribute after being adjusted according to the current control operation, so that the purpose of feeding back according to the position of the virtual operation window on the interface when a user controls a target operation control is achieved, and the technical effects of improving the accuracy and efficiency of the control of the virtual operation window are achieved.
As an alternative, the window control method is applied in a virtual reality scenario, for example, as shown in fig. 15, and specific steps are as follows:
step S1502, the user enters the virtual reality scene and opens the virtual operation window and display interface; the VR client ADK interface recognizes the user's gesture trigger operation, the user here performing a gesture pinch operation;
step S1504, binding a pinch ball to the initiating event and setting the window panel as an object of the pinch ball, so that the window panel and the pinch ball move synchronously;
step S1506, recognizing the pinch ball event issued by the user and obtaining the pinch ball's spatial coordinates;
step S1508, hiding the displayed content of the window panel, the panel's coordinate position being the pinch ball's Y-axis value +20; the pinch ball triggers execution of the trigger event;
step S1510, the pinch ball position moves following the gesture;
step S1512, during the gesture-following movement, the coordinate control dynamically displays the coordinate values of the window in VR space and obtains the window's normal vector, which consists of values in the X, Y and Z vector directions; during display, the X, Y and Z vector values are dynamically assigned to the input boxes corresponding to the coordinate control module;
step S1514, after the coordinate values are displayed, the coordinate values of the window within the VR control range are calculated and a response event is triggered: the distance value from the camera to the window on the Z axis is obtained, and with the Z axis held constant, the coordinates p1(x1, y1) and p2(x2, y2) of the maximum range on the circular orbit are calculated as x = r × cos(angle × π / 180) and y = r × sin(angle × π / 180), assuming the circle center is o(x0, y0), the camera's maximum wide-angle Fov value is 120 degrees, the radius is r, and the angle is 60 degrees (counterclockwise negative, clockwise positive); a sketch of this boundary computation follows the step list below;
step S1516, detecting window coordinates, judging whether the gesture reaches the left and right boundaries;
step S1518, when the window vertex coordinates exceed the boundary in a given direction, the input box background displays the value in red and the value no longer changes;
step S1520, when the coordinate leaves the range P1 > coordinate > P2, the corresponding P1 or P2 value is assigned to the control box of the coordinate control, the control box style is rendered red, and the window coordinate value is clamped to P1 or P2 so that it no longer exceeds the range;
step S1522, when it is detected that the hand movement coordinate exceeds the boundary coordinate, the difference X3 between the gesture and the maximum boundary coordinate value X1 is dynamically obtained and the window width is dynamically recalculated (window width value W = W - X3); when W is smaller than the design default value, the width value is kept unchanged;
step S1524, the window scaling logic is executed.
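The step S1514 boundary computation can be sketched in Python from the formulas given above; offsetting the result by the circle center o(x0, y0) is an added assumption, since the original formulas omit it.

```python
import math

def orbit_boundaries(x0, y0, r, half_fov_deg=60):
    """Compute p1 and p2 at +/-60 degrees on a circle of radius r around
    o(x0, y0), matching a camera with a 120-degree maximum Fov while the
    Z axis is held constant (counterclockwise negative, clockwise positive)."""
    def point(angle_deg):
        return (x0 + r * math.cos(math.radians(angle_deg)),
                y0 + r * math.sin(math.radians(angle_deg)))

    p1 = point(half_fov_deg)    # clockwise (positive) boundary
    p2 = point(-half_fov_deg)   # counterclockwise (negative) boundary
    return p1, p2
```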
As an alternative, the window control method is applied in a virtual reality scenario, for example, as shown in fig. 16, and specific steps are as follows:
step S1602, the user enters the virtual reality scene and opens the virtual operation window and display interface; the VR client ADK interface recognizes the user's gesture trigger operation, the user's gesture here clicking the pinch ball;
step S1604, a gesture click on a button in the panel triggers display of the scroll progress bar control UI;
step S1606, the On Value Changed event of the progress control is triggered;
step S1608, the value of the scrollbar is obtained, where if value = 0.4 the window width value W becomes W = W × 0.5; if value = 0.6, W = W × 0.75; if value = 0.8, W = W × 1; if value = 1, W = W × 1.25 (restated as a sketch after this step list);
step S1610, triggering the submit function upon detecting that the user clicks the confirm button;
step S1612, storing the window data;
step S1614, the control bar is hidden by setting the scroll progress bar control's display property to False, and the flow ends.
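The step S1608 width mapping, restated as a hedged sketch; the behaviour for scrollbar values not listed in the text (leaving the width unchanged) is an assumption.

```python
WIDTH_FACTORS = {0.4: 0.5, 0.6: 0.75, 0.8: 1.0, 1.0: 1.25}  # from step S1608

def apply_scrollbar_value(value, width):
    """Scale the window width W by the factor listed for the scrollbar
    value; unlisted values leave the width unchanged."""
    return width * WIDTH_FACTORS.get(value, 1.0)
```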
It will be appreciated that in the specific embodiments of the present application, related data such as user information is involved, and when the above embodiments of the present application are applied to specific products or technologies, user permissions or consents need to be obtained, and the collection, use and processing of related data need to comply with related laws and regulations and standards of related countries and regions.
It should be noted that, for simplicity of description, the foregoing method embodiments are all described as a series of acts, but it should be understood by those skilled in the art that the present application is not limited by the order of acts described, as some steps may be performed in other orders or concurrently in accordance with the present application. Further, those skilled in the art will also appreciate that the embodiments described in the specification are all preferred embodiments, and that the acts and modules referred to are not necessarily required for the present application.
According to another aspect of the embodiment of the present application, there is also provided a window control apparatus for implementing the above window control method. As shown in fig. 17, the apparatus includes:
a first display unit 1702 configured to display a virtual operation window and a target operation control in a virtual reality scene;
the first control unit 1704 is configured to trigger a target event for preparing to control the virtual operation window when a target gesture performed on the target operation control is acquired;
the first adjusting unit 1706 is configured to, in a process that the target event is in the triggered state, respond to a control operation performed on the target operation control, adjust a first spatial attribute of the target operation control, and display a second spatial attribute of the virtual operation window to change along with a change of the first spatial attribute.
Specific embodiments may refer to the examples shown in the window control method, and details are not repeated here in this example.
As an alternative, the apparatus includes:
the second display unit is used for displaying a plurality of gesture detection points on the target operation control before triggering a target event for preparing to control the virtual operation window, wherein the gesture detection points are used for detecting gestures executed on the target operation control;
the first acquisition unit is used for acquiring target gestures by utilizing a plurality of gesture detection points before triggering target events for preparing to control the virtual operation window.
Specific embodiments may refer to examples shown in the window control method, and in this example, details are not described herein.
As an alternative, the first obtaining unit includes:
the first acquisition module is used for acquiring first hover data detected at a first gesture detection point among the plurality of gesture detection points;
the second acquisition module is used for acquiring second hover data detected at a second gesture detection point among the plurality of gesture detection points;
the first processing module is used for integrating the first hover data and the second hover data to obtain target hover data;
The first determining module is configured to determine the hover gesture as the target gesture when the similarity between the hover gesture corresponding to the target hover data and the first preset gesture is greater than or equal to a first preset threshold.
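The flow of the first obtaining unit could be sketched as below; representing hover data as flat feature vectors, integrating by concatenation, and measuring similarity with cosine similarity are all illustrative assumptions rather than the patent's algorithm.

```python
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def acquire_target_gesture(first_hover, second_hover, preset, threshold):
    """Integrate hover data from the two detection points, then accept the
    resulting hover gesture as the target gesture only when its similarity
    to the first preset gesture reaches the first preset threshold."""
    target_hover = list(first_hover) + list(second_hover)
    if cosine_similarity(target_hover, preset) >= threshold:
        return target_hover
    return None
```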
Specific embodiments may refer to examples shown in the window control method, and in this example, details are not described herein.
As an alternative, the first adjusting unit 1706 includes at least one of the following:
the first adjusting module is used for adjusting the first position of the target operation control and displaying the second position of the virtual operation window to change along with the change of the first position;
the second adjusting module is used for adjusting the first form of the target operation control and displaying the second form of the virtual operation window to change along with the change of the first form.
Specific embodiments may refer to examples shown in the window control method, and in this example, details are not described herein.
As an alternative, the first adjusting unit 1706 further includes:
the third obtaining module is used for obtaining, before the first position of the target operation control is adjusted and the second position of the virtual operation window is displayed changing with the first position, a movement operation triggered through the target operation control when the execution position of the target gesture changes, wherein the movement operation is used for adjusting the first position.
Specific embodiments may refer to examples shown in the window control method, and in this example, details are not described herein.
As an optional solution, the third obtaining module further includes:
the first processing sub-module is used for reducing the display area of the virtual operation window under the condition that the distance between the execution position and the interface boundary of the display interface corresponding to the virtual reality scene is smaller than or equal to a second preset threshold value and larger than or equal to a third preset threshold value;
and the second processing sub-module is used for hiding or closing the virtual operation window under the condition that the distance between the execution position and the interface boundary is smaller than a third preset threshold value.
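A sketch of the two-stage boundary behaviour implemented by these sub-modules, with the window methods assumed:

```python
def update_window_near_boundary(window, distance, second_threshold, third_threshold):
    """distance: from the execution position to the interface boundary.
    Between the third and second thresholds the display area is reduced;
    below the third threshold the window is hidden (or closed)."""
    if distance < third_threshold:
        window.hide()                      # or window.close()
    elif distance <= second_threshold:     # third_threshold <= distance <= second_threshold
        window.shrink()                    # reduce the display area
```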
Specific embodiments may refer to examples shown in the window control method, and in this example, details are not described herein.
As an optional solution, the second adjusting module further includes:
the fourth obtaining module is used for obtaining a first distribution position corresponding to the first part in the target gesture and a second distribution position corresponding to the second part in the target gesture before the first form of the target operation control is adjusted and the second form of the virtual operation window is displayed to change along with the change of the first form;
The fifth obtaining module is configured to obtain, when the relative distance between the first distribution position and the second distribution position changes before the first form of the target operation control is adjusted and the second form of the virtual operation window is displayed to change along with the change of the first form, a second scaling operation triggered by the target operation control, where the second scaling operation is used to adjust the first form.
Specific embodiments may refer to examples shown in the window control method, and in this example, details are not described herein.
As an alternative, the apparatus further includes:
the first display module is used for responding to a first zooming operation executed on a window zooming control displayed in a virtual reality scene before triggering a target event for preparing to control a virtual operation window, and displaying a virtual sliding block and a virtual progress panel where the virtual sliding block is positioned in the virtual reality scene, wherein the second form is related to the sliding block position of the virtual sliding block in the virtual progress panel;
a sixth obtaining module, configured to obtain, before triggering the target event for preparing to control the virtual operation window, a slider gesture performed on the virtual slider;
And the second determining module is used for determining the slider gesture as a target gesture under the condition that the similarity between the slider gesture and the second preset gesture is greater than or equal to a fourth preset threshold value before triggering the target event for preparing to control the virtual operation window.
Specific embodiments may refer to examples shown in the window control method, and in this example, details are not described herein.
As an alternative, the apparatus further includes:
the seventh obtaining module is configured to obtain, when the position of the slider changes, a first scaling operation triggered by the target operation control before the first form of the target operation control is adjusted and the second form of the virtual operation window is displayed to change along with the change of the first form, where the first scaling operation is used to adjust the first form.
Specific embodiments may refer to examples shown in the window control method, and in this example, details are not described herein.
As an optional solution, the apparatus displays a virtual operation window and a target operation control in a virtual reality scene, including: the second display module is used for displaying target content associated with the virtual reality scene in the virtual operation window;
In response to a control operation performed on the target operation control, adjusting a first spatial attribute of the target operation control and displaying a second spatial attribute of the virtual operation window as a function of the first spatial attribute, the apparatus further comprises: the first hiding module, used for hiding the target content displayed in the virtual operation window.
Specific embodiments may refer to examples shown in the window control method, and in this example, details are not described herein.
As an alternative, the apparatus further includes:
after triggering the target event ready to control the virtual operating window, the apparatus further comprises at least one of: the third display module is used for displaying first adjustment information of the first spatial attribute, wherein the first adjustment information is candidate information of a control operation which is currently allowed to be executed by the first spatial attribute; and displaying second adjustment information of the second spatial attribute, wherein the second adjustment information is candidate information of a control operation which is currently allowed to be executed by the second spatial attribute;
in response to a control operation performed on the target operation control, adjusting a first spatial attribute of the target operation control and displaying a second spatial attribute of the virtual operation window as a function of the first spatial attribute, the apparatus further comprises: the fourth display module is used for displaying first preview information of the first spatial attribute, wherein the first preview information is preview information of the first spatial attribute after adjustment according to the current control operation; and displaying second preview information of the second spatial attribute, wherein the second preview information is preview information of the second spatial attribute after adjustment according to the current control operation.
Specific embodiments may refer to examples shown in the window control method, and in this example, details are not described herein.
According to a further aspect of the embodiments of the present application there is also provided an electronic device for implementing the above window control method, as shown in fig. 18, the electronic device comprising a memory 1802 and a processor 1804, the memory 1802 having stored therein a computer program, the processor 1804 being arranged to perform the steps of any of the method embodiments described above by means of the computer program.
Alternatively, in this embodiment, the electronic device may be located in at least one network device of a plurality of network devices of the computer network.
Alternatively, in the present embodiment, the above-described processor may be configured to execute the following steps by a computer program:
s1, displaying a virtual operation window and a target operation control in a virtual reality scene;
s2, triggering a target event for preparing to control the virtual operation window under the condition that a target gesture executed on a target operation control is acquired;
and S3, in the process that the target event is in the trigger state, responding to the control operation executed on the target operation control, adjusting the first spatial attribute of the target operation control, and displaying the change of the second spatial attribute of the virtual operation window along with the change of the first spatial attribute.
Alternatively, it will be understood by those skilled in the art that the structure shown in fig. 18 is only schematic, and the electronic device may also be a terminal device such as a smart phone (e.g. an Android phone, an iOS phone, etc.), a tablet computer, a palm computer, a mobile internet device (Mobile Internet Devices, MID), a PAD, etc. Fig. 18 does not limit the structure of the above-described electronic device. For example, the electronic device may also include more or fewer components (e.g., network interfaces, etc.) than shown in fig. 18, or have a different configuration than shown in fig. 18.
The memory 1802 may be used for storing software programs and modules, such as the program instructions/modules corresponding to the window control method and apparatus in the embodiment of the present application; the processor 1804 executes the software programs and modules stored in the memory 1802, thereby performing various functional applications and data processing, that is, implementing the window control method described above. The memory 1802 may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory 1802 may further include memory located remotely from the processor 1804, which may be connected to the terminal over a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof. The memory 1802 may specifically be used, but not limited to, for storing information such as the first spatial attribute and the second spatial attribute. As an example, as shown in fig. 18, the memory 1802 may include, but is not limited to, the first display unit 1702, the first control unit 1704, and the first adjustment unit 1706 of the window control device. It may further include other module units of the window control device, which are not described in detail in this example.
Optionally, the transmission device 1806 is used to receive or transmit data via a network. Specific examples of the network described above may include wired networks and wireless networks. In one example, the transmission means 1806 includes a network adapter (Network Interface Controller, NIC) that may be connected to other network devices and routers via a network cable to communicate with the internet or a local area network. In one example, the transmission device 1806 is a Radio Frequency (RF) module, which is used to communicate with the internet wirelessly.
In addition, the electronic device further includes: a display 1808 for displaying information such as the first spatial attribute and the second spatial attribute; and a connection bus 1810 for connecting the various module components in the electronic device described above.
In other embodiments, the terminal device or the server may be a node in a distributed system, where the distributed system may be a blockchain system, and the blockchain system may be a distributed system formed by connecting the plurality of nodes through a network communication. Among them, the nodes may form a Peer-To-Peer (P2P) network, and any type of computing device, such as a server, a terminal, etc., may become a node in the blockchain system by joining the Peer-To-Peer network.
According to one aspect of the present application, there is provided a computer program product comprising a computer program/instructions containing program code for executing the method shown in the flowcharts. In such embodiments, the computer program may be downloaded and installed from a network via a communication portion, and/or installed from a removable medium. When executed by a central processing unit, the computer program performs the various functions provided by the embodiments of the present application.
The foregoing embodiment numbers of the present application are merely for the purpose of description, and do not represent the advantages or disadvantages of the embodiments.
It should be noted that the computer system of the electronic device is only an example, and should not impose any limitation on the functions and the application scope of the embodiments of the present application.
The computer system includes a central processing unit (Central Processing Unit, CPU) which can execute various appropriate actions and processes according to a program stored in a Read-Only Memory (ROM) or a program loaded from a storage section into a random access Memory (Random Access Memory, RAM). In the random access memory, various programs and data required for the system operation are also stored. The CPU, the ROM and the RAM are connected to each other by bus. An Input/Output interface (i.e., I/O interface) is also connected to the bus.
The following components are connected to the input/output interface: an input section including a keyboard, a mouse, etc.; an output section including a cathode ray tube (CRT), a liquid crystal display (LCD), a speaker, and the like; a storage section including a hard disk or the like; and a communication section including a network interface card such as a local area network card, a modem, and the like. The communication section performs communication processing via a network such as the internet. A drive is also connected to the input/output interface as needed. Removable media such as magnetic disks, optical disks, magneto-optical disks, and semiconductor memories are mounted on the drive as needed, so that a computer program read therefrom is installed into the storage section as needed.
In particular, the processes described in the various method flowcharts may be implemented as computer software programs according to embodiments of the application. For example, embodiments of the present application include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method shown in the flowcharts. In such embodiments, the computer program may be downloaded and installed from a network via a communication portion, and/or installed from a removable medium. The computer program, when executed by a central processing unit, performs the various functions defined in the system of the application.
According to one aspect of the present application, there is provided a computer-readable storage medium, from which a processor of a computer device reads the computer instructions, the processor executing the computer instructions, causing the computer device to perform the methods provided in the various alternative implementations described above.
Alternatively, in the present embodiment, the above-described computer-readable storage medium may be configured to store a computer program for executing the steps of:
s1, displaying a virtual operation window and a target operation control in a virtual reality scene;
s2, triggering a target event for preparing to control the virtual operation window under the condition that a target gesture executed on a target operation control is acquired;
and S3, in the process that the target event is in the trigger state, responding to the control operation executed on the target operation control, adjusting the first spatial attribute of the target operation control, and displaying the change of the second spatial attribute of the virtual operation window along with the change of the first spatial attribute.
Alternatively, in this embodiment, it will be understood by those skilled in the art that all or part of the steps in the methods of the above embodiments may be performed by a program for instructing a terminal device to execute the steps, where the program may be stored in a computer readable storage medium, and the storage medium may include: flash disk, read-Only Memory (ROM), random-access Memory (Random Access Memory, RAM), magnetic or optical disk, and the like.
The foregoing embodiment numbers of the present application are merely for the purpose of description, and do not represent the advantages or disadvantages of the embodiments.
The integrated units in the above embodiments may be stored in the above-described computer-readable storage medium if implemented in the form of software functional units and sold or used as separate products. Based on such understanding, the technical solution of the present application may be embodied in essence or a part contributing to the prior art or all or part of the technical solution in the form of a software product stored in a storage medium, comprising several instructions for causing one or more computer devices (which may be personal computers, servers or network devices, etc.) to perform all or part of the steps of the method of the various embodiments of the present application.
In the foregoing embodiments of the present application, the descriptions of the embodiments are emphasized, and for a portion of this disclosure that is not described in detail in this embodiment, reference is made to the related descriptions of other embodiments.
In the several embodiments provided by the present application, it should be understood that the disclosed client may be implemented in other manners. The apparatus embodiments described above are merely exemplary; the division into units is only a logical functional division, and there may be other ways of dividing in actual implementation. For example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the coupling or direct coupling or communication connection shown or discussed between the parts may be through some interfaces, units or modules, and may be electrical or in other forms.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The foregoing is merely a preferred embodiment of the present application and it should be noted that modifications and adaptations to those skilled in the art may be made without departing from the principles of the present application, which are intended to be comprehended within the scope of the present application.

Claims (15)

1. A window control method, comprising:
displaying a virtual operation window and a target operation control in a virtual reality scene;
Triggering a target event for preparing to control the virtual operation window under the condition that a target gesture executed on the target operation control is acquired;
and in the process that the target event is in a trigger state, responding to control operation executed on the target operation control, adjusting a first space attribute of the target operation control, and displaying a second space attribute of the virtual operation window to change along with the change of the first space attribute.
2. The method of claim 1, wherein prior to the triggering a target event that is ready to control the virtual operating window, the method further comprises:
displaying a plurality of gesture detection points on the target operation control, wherein the gesture detection points are used for detecting gestures executed on the target operation control;
and acquiring the target gesture by utilizing the gesture detection points.
3. The method of claim 2, wherein the acquiring the target gesture using the plurality of gesture detection points comprises:
acquiring first hover data detected at a first gesture detection point among the plurality of gesture detection points;
acquiring second hover data detected at a second gesture detection point among the plurality of gesture detection points;
integrating the first hover data and the second hover data to obtain target hover data;
and determining the hover gesture as the target gesture when the similarity between the hover gesture corresponding to the target hover data and the first preset gesture is greater than or equal to a first preset threshold.
4. The method of claim 1, wherein the adjusting the first spatial attribute of the target operational control and displaying the second spatial attribute of the virtual operational window changes as the first spatial attribute changes comprises at least one of:
adjusting a first position of the target operation control, and displaying a second position of the virtual operation window to change along with the change of the first position;
and adjusting a first form of the target operation control, and displaying a second form of the virtual operation window to change along with the change of the first form.
5. The method of claim 4, wherein prior to said adjusting the first position of the target operational control and displaying the second position of the virtual operational window as a function of the first position, the method further comprises:
And when the execution position of the target gesture changes, acquiring a movement operation triggered by the target operation control, wherein the movement operation is used for adjusting the first position.
6. The method of claim 5, wherein in said adjusting the first position of the target operational control and displaying the second position of the virtual operational window as a function of the first position, the method further comprises:
reducing the display area of the virtual operation window when the distance between the execution position and the interface boundary of the display interface corresponding to the virtual reality scene is smaller than or equal to a second preset threshold value and larger than or equal to a third preset threshold value;
and hiding or closing the virtual operation window under the condition that the distance between the execution position and the interface boundary is smaller than the third preset threshold value.
7. The method of claim 4, wherein prior to said adjusting the first modality of the target operational control and displaying the second modality of the virtual operational window as a function of the first modality, the method further comprises:
Acquiring a first distribution position corresponding to a first part in the target gesture and a second distribution position corresponding to a second part in the target gesture;
and when the relative distance between the first distribution position and the second distribution position is changed, acquiring a second scaling operation triggered by the target operation control, wherein the second scaling operation is used for adjusting the first form.
8. The method of claim 4, wherein prior to the triggering a target event that is ready to control the virtual operating window, the method further comprises:
responding to a first zooming operation executed on a window zooming control displayed in the virtual reality scene, displaying a virtual slider and a virtual progress panel where the virtual slider is positioned in the virtual reality scene, wherein the second form is related to the slider position of the virtual slider in the virtual progress panel;
acquiring a slider gesture executed on the virtual slider;
and determining the slider gesture as the target gesture under the condition that the similarity between the slider gesture and the second preset gesture is greater than or equal to a fourth preset threshold value.
9. The method of claim 8, wherein prior to said adjusting the first modality of the target operational control and displaying the second modality of the virtual operational window as a function of the first modality, the method further comprises:
and when the position of the sliding block is changed, acquiring a first scaling operation triggered by the target operation control, wherein the first scaling operation is used for adjusting the first form.
10. The method according to any one of claims 1 to 9, wherein,
the displaying the virtual operation window and the target operation control in the virtual reality scene comprises: displaying target content associated with the virtual reality scene in the virtual operation window; in the process of responding to the control operation executed on the target operation control, adjusting the first spatial attribute of the target operation control and displaying the second spatial attribute of the virtual operation window to change along with the change of the first spatial attribute, the method further comprises: hiding the target content displayed in the virtual operation window.
11. The method according to any one of claims 1 to 9, wherein,
After the triggering of the target event ready to control the virtual operating window, the method further comprises at least one of: displaying first adjustment information of the first spatial attribute, wherein the first adjustment information is candidate information of a control operation which is currently allowed to be executed by the first spatial attribute; displaying second adjustment information of the second spatial attribute, wherein the second adjustment information is candidate information of a control operation which is currently allowed to be executed by the second spatial attribute;
in the process of responding to the control operation executed on the target operation control, adjusting the first spatial attribute of the target operation control and displaying the second spatial attribute of the virtual operation window to change along with the change of the first spatial attribute, the method further comprises: displaying first preview information of the first spatial attribute, wherein the first preview information is preview information of the first spatial attribute after adjustment according to the current control operation; and displaying second preview information of the second spatial attribute, wherein the second preview information is preview information of the second spatial attribute after adjustment according to the current control operation.
12. A window control device, comprising:
the first display unit is used for displaying a virtual operation window and a target operation control in the virtual reality scene;
the first control unit is used for triggering a target event for preparing to control the virtual operation window under the condition that a target gesture executed on the target operation control is acquired;
the first adjusting unit is used for responding to the control operation executed on the target operation control in the process that the target event is in the trigger state, adjusting the first space attribute of the target operation control, and displaying the second space attribute of the virtual operation window to change along with the change of the first space attribute.
13. A computer-readable storage medium, characterized in that the computer-readable storage medium comprises a stored program, wherein the program is executable by a terminal device or a computer to perform the method of any one of claims 1 to 11.
14. A computer program product comprising computer programs/instructions which, when executed by a processor, implement the steps of the method of any one of claims 1 to 11.
15. An electronic device comprising a memory and a processor, characterized in that the memory has stored therein a computer program, the processor being arranged to execute the method according to any of the claims 1 to 11 by means of the computer program.