CN115390723A - Method and device for controlling object display effect and wearable device


Info

Publication number
CN115390723A
Authority
CN
China
Prior art keywords
snooping
target object
snoop
space
condition
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211015194.3A
Other languages
Chinese (zh)
Inventor
刘昕笛
王庚
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shining Reality Wuxi Technology Co Ltd
Original Assignee
Shining Reality Wuxi Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shining Reality Wuxi Technology Co Ltd filed Critical Shining Reality Wuxi Technology Co Ltd
Priority to CN202211015194.3A priority Critical patent/CN115390723A/en
Publication of CN115390723A publication Critical patent/CN115390723A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0484 Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/017 Gesture based interaction, e.g. based on a set of recognized hand gestures

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The application discloses a method and an apparatus for controlling an object display effect, and a wearable device. A specific implementation is as follows: a snoop window is set in the virtual space of a head-mounted display device and is configured such that the snoop window has a positive direction; for an object meeting a preset snoop condition, when the user's line of sight views the object through the positive direction of the snoop window, the part of the object within the snoop space is in a visible state and the part of the object beyond the snoop space is in an invisible state, the snoop space being the interior space bounded by the user's viewpoint and the edge of the snoop window. When a target object meeting the preset snoop condition is located in the snoop space, in response to a received object move-out instruction, at least part of the target object is processed to be in a visible state in the entire virtual space, and the processed at least part of the target object is moved from the snoop space to a region outside the snoop space.

Description

Method and device for controlling object display effect and wearable device
Technical Field
The present disclosure relates to the field of head-mounted display devices, and in particular, to a method and an apparatus for controlling a display effect of an object, and a wearable device.
Background
Head-mounted display devices based on technologies such as Augmented Reality (AR) and Virtual Reality (VR) are increasingly widely used. A head-mounted display device can be used for content display, for example to display movies, games, web pages, and the like.
Disclosure of Invention
Embodiments of the present disclosure provide a method and an apparatus for controlling an object display effect, and a wearable device.
According to an aspect of an embodiment of the present disclosure, there is provided a method for controlling an object display effect, including:
setting a snoop window in a virtual space of a head-mounted display device, the snoop window being configured such that: the snoop window has a positive direction; for an object meeting a preset snoop condition, when the user's line of sight views the object through the positive direction of the snoop window, the part of the object within the snoop space is in a visible state and the part of the object beyond the snoop space is in an invisible state, the snoop space being the interior space bounded by the user's viewpoint and the edge of the snoop window;
and, when a target object meeting the preset snoop condition is located in the snoop space, in response to a received object move-out instruction, processing at least part of the target object to be in a visible state in the entire virtual space, and moving the processed at least part of the target object from the snoop space to a region outside the snoop space.
According to another aspect of the embodiments of the present disclosure, there is provided an apparatus for controlling an object display effect, including:
a setup module, configured to set a snoop window in a virtual space of a head-mounted display device, the snoop window being configured such that: the snoop window has a positive direction; for an object meeting the preset snoop condition, when the user's line of sight views the object through the positive direction of the snoop window, the part of the object within the snoop space is in a visible state and the part of the object beyond the snoop space is in an invisible state, the snoop space being the interior space bounded by the user's viewpoint and the edge of the snoop window;
and a processing module, configured to, when a target object meeting the preset snoop condition is located in the snoop space, process at least part of the target object to be in a visible state in the entire virtual space in response to a received object move-out instruction, and move the processed at least part of the target object from the snoop space to a region outside the snoop space.
According to still another aspect of an embodiment of the present disclosure, there is provided a wearable device including the above apparatus for controlling a display effect of an object.
According to still another aspect of an embodiment of the present disclosure, there is provided an electronic device including:
a processor;
a memory for storing processor-executable instructions;
and the processor is configured to read the executable instructions from the memory and execute them to implement the above method for controlling an object display effect.
According to still another aspect of the present disclosure, there is provided a computer-readable storage medium storing a computer program for executing the above-described method for controlling an object display effect.
According to yet another aspect of the present disclosure, there is provided a computer program product comprising computer program instructions which, when executed by a processor, implement the above-described method for controlling object display effects.
The technical solution of the present disclosure is further described in detail by the accompanying drawings and embodiments.
Drawings
The above and other objects, features and advantages of the present disclosure will become more apparent by describing in more detail embodiments of the present disclosure with reference to the attached drawings. The accompanying drawings are included to provide a further understanding of the embodiments of the disclosure and are incorporated in and constitute a part of this specification, illustrate embodiments of the disclosure and together with the description serve to explain the principles of the disclosure and not to limit the disclosure. In the drawings, like reference numbers generally indicate like parts or steps.
Fig. 1 is a flowchart illustrating a method for controlling an object display effect according to an exemplary embodiment of the present disclosure.
FIG. 2 is a diagrammatic illustration of a snoop space in an exemplary embodiment of the present disclosure.
FIG. 3-1 is a first schematic diagram of a virtual space in an exemplary embodiment of the present disclosure.
Fig. 3-2 is a second schematic diagram of the virtual space in an exemplary embodiment of the disclosure.
Fig. 3-3 is a third schematic diagram of the virtual space in an exemplary embodiment of the present disclosure.
Fig. 3-4 is a fourth schematic diagram of the virtual space in an exemplary embodiment of the present disclosure.
Fig. 3-5 is a fifth schematic diagram of the virtual space in an exemplary embodiment of the present disclosure.
Fig. 3-6 is a sixth schematic diagram of the virtual space in an exemplary embodiment of the present disclosure.
Fig. 3-7 is a seventh schematic diagram of the virtual space in an exemplary embodiment of the present disclosure.
Fig. 3-8 is an eighth schematic diagram of the virtual space in an exemplary embodiment of the present disclosure.
Fig. 3-9 is a ninth schematic diagram of the virtual space in an exemplary embodiment of the present disclosure.
Fig. 4 is a flowchart illustrating a method for controlling an object display effect according to another exemplary embodiment of the present disclosure.
Fig. 5 is a flowchart illustrating a method for controlling an object display effect according to still another exemplary embodiment of the present disclosure.
Fig. 6 is a schematic structural diagram of an apparatus for controlling an object display effect according to an exemplary embodiment of the present disclosure.
Fig. 7 is a schematic structural diagram of an apparatus for controlling an object display effect according to another exemplary embodiment of the present disclosure.
Fig. 8 is a block diagram of an electronic device provided in an exemplary embodiment of the present disclosure.
Detailed Description
Hereinafter, example embodiments according to the present disclosure will be described in detail with reference to the accompanying drawings. It is to be understood that the described embodiments are merely a subset of the embodiments of the present disclosure and not all embodiments of the present disclosure, with the understanding that the present disclosure is not limited to the example embodiments described herein.
It should be noted that: the relative arrangement of parts and steps, numerical expressions and numerical values set forth in these embodiments do not limit the scope of the present disclosure unless specifically stated otherwise.
It will be understood by those of skill in the art that the terms "first," "second," and the like in the embodiments of the present disclosure are used merely to distinguish one element from another, and are not intended to imply any particular technical meaning or any necessary logical order between them.
It is also understood that in embodiments of the present disclosure, "a plurality" may refer to two or more and "at least one" may refer to one, two or more.
It is also to be understood that any reference to any component, data, or structure in the embodiments of the present disclosure may be generally understood as one or more, unless explicitly defined otherwise or indicated to the contrary hereinafter.
In addition, the term "and/or" in the present disclosure merely describes an association between associated objects and indicates that three relationships may exist; for example, A and/or B may mean: A exists alone, A and B exist simultaneously, or B exists alone. In addition, the character "/" in the present disclosure generally indicates that the former and latter associated objects are in an "or" relationship.
It should also be understood that the description of the various embodiments of the present disclosure emphasizes the differences between the various embodiments, and the same or similar parts may be referred to each other, so that the descriptions thereof are omitted for brevity.
Meanwhile, it should be understood that the sizes of the respective portions shown in the drawings are not drawn in an actual proportional relationship for the convenience of description.
The following description of at least one exemplary embodiment is merely illustrative in nature and is in no way intended to limit the disclosure, its application, or uses.
Techniques, methods, and apparatus known to those of ordinary skill in the relevant art may not be discussed in detail but are intended to be part of the specification where appropriate.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, further discussion thereof is not required in subsequent figures.
The disclosed embodiments may be applied to electronic devices such as terminal devices, computer systems, and servers, which are operational with numerous other general-purpose or special-purpose computing system environments or configurations. Examples of well-known terminal devices, computing systems, environments, and/or configurations suitable for use with electronic devices such as terminal devices, computer systems, and servers include, but are not limited to: personal computer systems, server computer systems, thin clients, thick clients, hand-held or laptop devices, microprocessor-based systems, set-top boxes, programmable consumer electronics, network PCs, minicomputer systems, mainframe computer systems, distributed cloud computing environments that include any of the above systems, and the like.
Electronic devices such as terminal devices, computer systems, servers, etc. may be described in the general context of computer system-executable instructions, such as program modules, being executed by a computer system. Generally, program modules may include routines, programs, objects, components, logic, data structures, etc. that perform particular tasks or implement particular abstract data types. The computer system/server may be practiced in distributed cloud computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed cloud computing environment, program modules may be located in both local and remote computer system storage media including memory storage devices.
Exemplary method
Fig. 1 is a flowchart illustrating a method for controlling an object display effect according to an exemplary embodiment of the present disclosure. The method shown in fig. 1 includes steps 110 and 120, which are described below.
Step 110: a snoop window is set in the virtual space of the head-mounted display device, the snoop window being configured such that: the snoop window has a positive direction; for an object meeting the preset snoop condition, when the user's line of sight views the object through the positive direction of the snoop window, the part of the object within the snoop space is in a visible state and the part of the object beyond the snoop space is in an invisible state, the snoop space being the interior space bounded by the user's viewpoint and the edge of the snoop window.
It should be noted that, the Head-Mounted Display device may also be referred to as a Head-Mounted Display (HMD) or a Head Display, and the Head-Mounted Display device may be used to implement an AR effect, a VR effect, a Mixed Reality (MR) effect, and the like. Alternatively, the head mounted display device may be AR glasses, VR glasses, MR glasses, or the like.
In step 110, the snoop window may by default be set at a preset position in the virtual space, and the position of the snoop window in the virtual space may then be adjusted according to the actual needs of the user, so that the snoop window can be placed at any position in the virtual space specified by the user. In an alternative example of the present disclosure, the snoop window may be a two-dimensional surface or a plate having a thickness. The snoop window may be a three-dimensional mesh model, and the shape of the snoop window may be defined by the mesh; the shape may be rectangular, circular, rhombic, and the like. The snoop window may not be displayed in the final rendering, i.e., the user does not see the snoop window.
It should be noted that various objects for presentation to the user, such as application icons, web pages, and movie posters, may be placed in the virtual space, and for any object placed in the virtual space, it may be determined whether the object satisfies the preset snoop condition.
If an object meets the preset snoop condition, then when the user's line of sight views the object through the positive direction of the snoop window, the part of the object within the snoop space is in a visible state and the part of the object beyond the snoop space is in an invisible state, the snoop space being the interior space bounded by the user's viewpoint and the edge of the snoop window. In other words, presenting a snoop effect for the object means that the display of the object is governed by the user's viewpoint and the snoop window, and only the portion of the object within the snoop space can be presented to the user.
It should be noted that the snoop window is directional. By looking along the positive direction of the snoop window, the user can see the part of an object that meets the preset snoop condition and lies within the snoop space. If the user's line of sight looks from the opposite direction of the snoop window, no snoop space exists. Therefore, when the user views the object from the front side of the snoop window, the snoop effect for the object can be observed; when the user views from the back side of the snoop window, the snoop effect cannot be observed and the object cannot be seen.
It should be noted that a device for acquiring a pose, posture or position may be provided in the head-mounted display device, for example an Inertial Measurement Unit (IMU). The position of the user viewpoint in the virtual space may be determined by acquiring IMU data and combining it with the mapping relationship between the virtual space and the real world, and the snoop space may then be determined from the user viewpoint and the edge of the snoop window.
In an alternative example of the present disclosure, point P in fig. 2 may be the user's viewpoint, and the circular plane Q in fig. 2 may be the snoop window, with the side of the circular plane Q facing point P being the positive direction of the snoop window. Rays may then be cast from point P, as the starting point, to each point on the edge of the circular plane Q, and the cone enclosed by these connecting rays (see the hatched portion in fig. 2) may be taken as the snoop space. Thus, for any object that satisfies the preset snoop condition: if the object is located completely inside the cone, the object is entirely visible to the user; if the object is located completely outside the cone, the object is entirely invisible to the user; and if the object is partly inside and partly outside the cone, the object is partly visible to the user.
Since the snoop space is closely tied to the user viewpoint, a change of the user viewpoint causes the snoop space to change correspondingly. For example, if the viewpoint changes from point P to point R in fig. 2, the cone obtained by connecting point P to each point on the edge of the circular plane Q is obviously different from the cone obtained by connecting point R to each point on that edge.
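As an illustration of the geometry described above, the following is a minimal sketch, under assumed names and conventions, of testing whether a vertex lies inside such a snoop space: a cone with its apex at the user viewpoint whose boundary passes through the edge of a circular snoop window. The function and parameter names (in_snoop_space, window_center, window_normal, window_radius) are illustrative assumptions rather than terminology from the patent, and the sketch assumes the snoop space begins at the window plane and extends away from the viewpoint.

```python
import numpy as np

def in_snoop_space(vertex, viewpoint, window_center, window_normal, window_radius):
    """Return True if `vertex` lies inside the cone defined by `viewpoint` and the
    circular snoop window, at or beyond the window plane (positive direction)."""
    n = window_normal / np.linalg.norm(window_normal)  # unit normal, pointing away from the viewpoint
    d = vertex - viewpoint                             # direction from the viewpoint to the vertex
    denom = np.dot(d, n)
    if denom <= 0.0:
        return False                                   # vertex is not in front of the viewpoint
    t = np.dot(window_center - viewpoint, n) / denom   # where the ray viewpoint + t*d crosses the window plane
    if t <= 0.0 or t > 1.0:
        return False                                   # plane behind the viewpoint, or vertex between viewpoint and window
    hit = viewpoint + t * d                            # intersection point on the window plane
    return float(np.linalg.norm(hit - window_center)) <= window_radius

# Example: viewpoint at the origin, circular window one unit ahead along +Z with radius 0.5.
P = np.array([0.0, 0.0, 0.0])
C = np.array([0.0, 0.0, 1.0])
N = np.array([0.0, 0.0, 1.0])
print(in_snoop_space(np.array([0.1, 0.0, 2.0]), P, C, N, 0.5))  # True: inside the cone, beyond the window
print(in_snoop_space(np.array([2.0, 0.0, 2.0]), P, C, N, 0.5))  # False: outside the cone
```

Moving the viewpoint P, as in the example above, changes the cone returned by this test, which is exactly why the snoop space changes with the user viewpoint.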
Step 120: when a target object meeting the preset snoop condition is located in the snoop space, in response to a received object move-out instruction, at least part of the target object is processed to be in a visible state in the entire virtual space, and the processed at least part of the target object is moved from the snoop space to a region outside the snoop space.
In one optional example of the present disclosure, the target object may be a mesh model, and a shape of the target object may be defined by the mesh.
In an optional embodiment of the present disclosure, the head-mounted display device may be equipped with a handheld controller, and the user may initiate the object move-out instruction by manipulating the handheld controller. For example, a move-out button may be displayed in the virtual space corresponding to the target object; the user may click the move-out button by manipulating the handheld controller, and after clicking it may drag the target object by manipulating the handheld controller. Clicking the move-out button initiates the object move-out instruction; dragging the target object moves its position in the virtual space. The handheld controller may be a mobile phone, a handle, or another device having a means for acquiring a pose or position, such as an IMU.
In another optional embodiment of the present disclosure, the head-mounted display device may have a gesture control function, and can interact with and control objects in the virtual space by recognizing set gestures. The user can make a set gesture, and when real-world image data collected by the head-mounted display device is recognized by an algorithm as the set gesture, a ray associated with the set gesture can be rendered. The origin of the ray may be at an ergonomically chosen point (e.g., the shoulder or elbow), and the direction of the ray may be along the line connecting that point and the hand (e.g., the wrist or the base of the index finger). The object move-out instruction is initiated by selecting with the ray and clicking the move-out button corresponding to the target object. The ray-emitting operation and the clicking operation can be completed by two types of set gestures: for example, the ray-emitting gesture may be an extended-finger gesture, while the clicking gesture may be a fist-making or similar gesture.
At least a portion of the target object may be processed to be visible throughout the virtual space in response to the received object move-out instruction. At this point, the display of that portion of the target object is unaffected by the snoop window, and it is visible to the user regardless of where it is located in the virtual space; that is, at least part of the target object is in a full-space visible state. In this way, at least part of the target object remains present in the user's field of view during the movement of the target object from the snoop space to the region outside the snoop space, and after the movement is complete.
Of course, the manner in which the object move-out instruction is initiated is not limited to the above. For example, the head-mounted display device may be a stand-alone device, and the user may initiate the object move-out instruction by voice. In this way, the user can specify by voice at which position in the region outside the snoop space the target object is finally displayed after being moved.
In an optional embodiment of the present disclosure, when processing at least part of the target object to be in a visible state in the entire virtual space, all of that part may be processed immediately upon receipt of the move-out instruction. Alternatively, the processing may be performed gradually according to the distance between each portion of the target object and the snoop space boundary: for example, when the distance between any sub-object of the target object and the snoop space boundary is less than or equal to a distance threshold, that sub-object is processed to be visible in the entire virtual space, while sub-objects that have not yet come within the threshold distance of the boundary are left unprocessed for the time being. In this way, the target object can be processed progressively as it moves from the inside of the snoop space to the outside, and a smooth transition can still be achieved.
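The gradual processing just described can be sketched as follows. This is a minimal illustration under assumed class and function names (SubObject, TargetObject, process_gradually), not the patent's actual implementation, and it assumes each sub-object's distance to the snoop space boundary is recomputed elsewhere each frame.

```python
from dataclasses import dataclass, field

@dataclass
class SubObject:
    name: str
    distance_to_snoop_boundary: float   # assumed to be recomputed elsewhere each frame
    fully_visible: bool = False

@dataclass
class TargetObject:
    sub_objects: list = field(default_factory=list)

def process_gradually(target: TargetObject, distance_threshold: float) -> None:
    """Call once per frame while the move-out drag is in progress, so sub-objects are
    switched to the full-space visible state only as they approach the boundary."""
    for sub in target.sub_objects:
        if not sub.fully_visible and sub.distance_to_snoop_boundary <= distance_threshold:
            # e.g. switch this sub-object's material to a shader unaffected by the snoop window
            sub.fully_visible = True

# Usage sketch: only the ear, already near the boundary, is processed in this frame.
dragon = TargetObject([SubObject("ear", 0.02), SubObject("tail", 0.40)])
process_gradually(dragon, distance_threshold=0.05)
print([s.fully_visible for s in dragon.sub_objects])  # [True, False]
```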
In the embodiment of the disclosure, by setting the snoop window in the virtual space of the head-mounted display device, a snoop effect can be presented for an object meeting the preset snoop condition when the user's line of sight views the object through the snoop window. When a target object meeting the preset snoop condition is located in the snoop space and an object move-out instruction is received, at least part of the target object can be processed to be visible in the entire virtual space, and the processed part can then be moved out of the snoop space. In this way, the target object can be presented in the user's field of view while it is located in the snoop space, and at least part of it remains present in the user's field of view during its movement out of the snoop space and while it is outside the snoop space. For the user, the transition of the display effect is therefore smooth throughout the movement, with no abrupt change. It can be seen that, in the embodiment of the present disclosure, the head-mounted display device can provide a snoop effect based on the snoop window, making the display effects it can provide richer; and even with the snoop window set, the display of the object remains smooth, preserving the natural feel of the display. The embodiment of the present disclosure is therefore favorable for improving the user experience.
In an optional embodiment of the present disclosure, the target object comprises a stereo module.
It should be noted that the stereoscopic module may also be referred to as a three-dimensional module or a 3D module, the stereoscopic module may include, but is not limited to, a three-dimensional head portrait, a three-dimensional pet, and the like, the three-dimensional head portrait may include, but is not limited to, a three-dimensional character head portrait, a three-dimensional animal head portrait, and the like, and the three-dimensional pet may include, but is not limited to, a pet dragon (e.g., the lovely dragon shown in fig. 3-1 to 3-9), a pet pig, a pet bear, and the like.
Because the target object comprises a stereoscopic module, which carries Z-axis depth information, the target object can present a sense of depth, making it more vivid and lifelike and enabling the head-mounted display device to present a snoop effect with depth.
In an alternate embodiment of the present disclosure, the snoop window has a target snoop identification. It should be noted that any snoop identification involved in embodiments of the present disclosure may be in the form of a snoop ID.
In an alternative embodiment of the present disclosure, based on the embodiment shown in fig. 1, as shown in fig. 4, the method may further include step 112 and step 114.
Step 112, at least one sub-object included in the target object is determined, and each sub-object is configured with at least one material.
If the target object is a pet dragon, the eyes, horns, ears and skin of the pet dragon can each be taken as a sub-object of the target object, and each of them can be configured with a material. In one optional example of the present disclosure, a material includes, but is not limited to, texture information, tiling information, color tone information, reflectivity information, roughness information, and the like, and the texture may be a bitmap image.
Step 114: it is determined whether the target object satisfies the preset snoop condition based on the at least one material configured for the target object and the target snoop identifier.
It should be noted that the set formed by the materials configured for each of the sub-objects included in the target object may be regarded as the at least one material configured for the target object.
In an alternative embodiment, step 114 includes:
determining that the target object satisfies the preset snoop condition when the at least one material configured for the target object all have snoop identifiers and those snoop identifiers are all the same as the target snoop identifier.
In another alternative embodiment, step 114 includes:
determining that the target object does not satisfy the preset snoop condition when the at least one material configured for the target object all have snoop identifiers but those snoop identifiers differ from the target snoop identifier.
In yet another alternative embodiment, step 114 includes:
determining that the target object does not satisfy the preset snoop condition when the at least one material configured for the target object does not have snoop identifiers.
In the case where both the at least one material configured for the target object and the target snoop identifier are known, it may first be determined whether all of the at least one material configured for the target object have snoop identifiers.
If the at least one material configured for the target object does not have snoop identifiers, it can be directly determined that the target object does not satisfy the preset snoop condition.
If the at least one material configured for the target object has snoop identifiers, the snoop identifier of each of those materials can be compared with the target snoop identifier. If the comparison shows that the snoop identifier of each material is the same as the target snoop identifier, it can be determined that the target object satisfies the preset snoop condition. If the comparison shows that the snoop identifier of each material differs from the target snoop identifier, it is determined that the target object does not satisfy the preset snoop condition.
In one optional example of the present disclosure, the target snoop identifier is ID1, and the target object is configured with three materials: material 1, material 2, and material 3. If material 1, material 2, and material 3 all have snoop identifiers, and those snoop identifiers are all ID1, it may be determined that the target object satisfies the preset snoop condition. If material 1, material 2, and material 3 all have snoop identifiers, but those snoop identifiers all differ from ID1, it may be determined that the target object does not satisfy the preset snoop condition. If none of material 1, material 2, and material 3 has a snoop identifier, it may likewise be determined that the target object does not satisfy the preset snoop condition.
Note that if material 1 has a snoop identifier that differs from ID1, the sub-object corresponding to material 1 may be in an invisible state in the entire virtual space. Similarly, if material 2 or material 3 has a snoop identifier that differs from ID1, the sub-object corresponding to that material may also be in an invisible state throughout the virtual space.
Therefore, whether the target object satisfies the preset snoop condition can be determined efficiently and reliably based on whether the at least one material configured for the target object has snoop identifiers and on the comparison between those snoop identifiers and the target snoop identifier. Moreover, by setting a snoop identifier for each of the at least one material, it is possible to control which sub-objects of the target object present the snoop effect and which do not, which further enriches the display effects the head-mounted display device can provide.
In the embodiment of the disclosure, by referring to the at least one material configured for the target object and the target snoop identifier, whether the target object satisfies the preset snoop condition can be determined efficiently and reliably.
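For illustration, the following is a minimal sketch, under assumed data structures, of the condition check described above: the target object satisfies the preset snoop condition only when every one of its materials carries a snoop identifier and all of those identifiers equal the snoop window's target snoop identifier. The Material class and the function name satisfies_snoop_condition are illustrative assumptions, not the patent's API.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Material:
    name: str
    snoop_id: Optional[str] = None   # None means the material has no snoop identifier

def satisfies_snoop_condition(materials: list[Material], target_snoop_id: str) -> bool:
    if not materials or any(m.snoop_id is None for m in materials):
        return False                 # some material (or all) lacks a snoop identifier
    return all(m.snoop_id == target_snoop_id for m in materials)

# Example mirroring the ID1 / material 1-3 case above.
mats = [Material("material 1", "ID1"), Material("material 2", "ID1"), Material("material 3", "ID1")]
print(satisfies_snoop_condition(mats, "ID1"))   # True
mats[1].snoop_id = "ID2"
print(satisfies_snoop_condition(mats, "ID1"))   # False
```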
In an alternative embodiment of the present disclosure, if the target object needs to be updated from satisfying the preset snoop condition to not satisfying it, one of the following two approaches may be adopted:
updating the snoop identifiers of the at least one material configured for the target object to be different from the target snoop identifier;
or deleting the snoop identifiers of the at least one material configured for the target object.
If the target object satisfies the preset snoop condition, the snoop identifiers of the at least one material configured for the target object can be considered to all be the same as the target snoop identifier. By updating them from being the same as the target snoop identifier to being different from it, the target object can be efficiently and reliably updated to no longer satisfy the preset snoop condition. For example, if the target snoop identifier is ID1, the snoop identifiers of the at least one material configured for the target object may all be updated to ID2, ID3, or ID4. Alternatively, by deleting the snoop identifiers of the at least one material configured for the target object, the target object can likewise be efficiently and reliably updated to no longer satisfy the preset snoop condition. For example, if the target snoop identifier is ID1 and the snoop identifiers of the at least one material configured for the target object are all ID1, ID1 can be deleted directly.
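A minimal sketch of the two options, reusing the illustrative Material type from the previous sketch (these helper names are assumptions, not the patent's API):

```python
def update_snoop_ids(materials, new_id):
    """Option 1: change the snoop identifiers to a value different from the target
    snoop identifier, e.g. from "ID1" to "ID2"."""
    for m in materials:
        m.snoop_id = new_id

def delete_snoop_ids(materials):
    """Option 2: delete the snoop identifiers entirely."""
    for m in materials:
        m.snoop_id = None
```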
In an optional embodiment of the present disclosure, each of the at least one material configured for the target object is pre-configured with two corresponding shaders: a first shader that supports configuring a snoop identifier for the material, and a second shader that does not support configuring a snoop identifier for the material. There may be more than one first shader, and more than one second shader.
In this case, determining that the target object satisfies the preset snoop condition when the at least one material configured for the target object has snoop identifiers and those snoop identifiers are the same as the target snoop identifier includes:
determining that the target object satisfies the preset snoop condition when the shaders used by the at least one material configured for the target object are all the corresponding first shaders and the snoop identifiers of the at least one material configured for the target object are all the same as the target snoop identifier.
It will be appreciated that shaders are scripts that contain mathematical calculations and algorithms that can compute the color rendered by a pixel based on lighting input and texture configuration.
It should be noted that each first shader supports configuring a snoop identifier for a material. When a material uses its corresponding first shader, a snoop identifier can be set for that material based on the first shader, so that the material carries the snoop identifier. Subsequently, when the material's snoop identifier is the same as the target snoop identifier, rendering is performed based on the corresponding first shader, so that the sub-object corresponding to the material presents the snoop effect. Because a second shader does not support configuring a snoop identifier for a material, when a material uses its corresponding second shader, the material has no snoop identifier; rendering is performed based on the second shader, the display of the sub-object corresponding to the material is not affected by the snoop window, and the sub-object is visible throughout the space.
In the embodiment of the disclosure, by pre-configuring two corresponding shaders for each of the at least one material configured for the target object, checking whether the shaders used by the at least one material are all the corresponding first shaders, and comparing the snoop identifiers of the at least one material with the target snoop identifier, whether the target object satisfies the preset snoop condition can be determined efficiently and reliably.
In an optional embodiment of the present disclosure, processing at least part of the target object to be visible in the entire virtual space in step 120 includes:
switching the shaders used by at least some of the at least one material configured for the target object to the corresponding second shaders, so that at least part of the target object is in a visible state in the entire virtual space.
When the shaders used by the at least one material configured for the target object are all the corresponding first shaders, the shaders used by at least some of those materials can be switched from the corresponding first shaders to the corresponding second shaders. Since the second shaders do not support configuring snoop identifiers for materials, those materials are updated to have no snoop identifiers. Accordingly, the display of the sub-objects corresponding to those materials is no longer affected by the snoop space, and they are in a full-space visible state.
In the embodiment of the disclosure, through a simple shader switching operation, the target object can be efficiently and reliably switched from presenting the snoop effect to a state in which at least part of it is visible in the full space.
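As a sketch of this switching operation (assumed names, not the patent's API; the shader references are plain strings for illustration):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Material:
    name: str
    shader: str                      # "first_shader": renders only inside the snoop space
                                     # "second_shader": renders everywhere in the virtual space
    snoop_id: Optional[str] = None

def make_full_space_visible(materials: list[Material]) -> None:
    """Switch the given materials to the second shader so the corresponding sub-objects
    become visible in the entire virtual space (used on receipt of a move-out instruction)."""
    for m in materials:
        m.shader = "second_shader"   # the second shader does not support snoop identifiers,
        m.snoop_id = None            # so the material no longer carries one

skin = Material("skin", "first_shader", "ID1")
make_full_space_visible([skin])
print(skin.shader, skin.snoop_id)    # second_shader None
```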
In an embodiment of the present disclosure, on the basis of the embodiment shown in fig. 1, as shown in fig. 5, after step 120, the method may further include step 130.
Step 130: in response to a received object reset instruction, the processed at least part of the target object is moved from the region outside the snoop space back to an initial position within the snoop space, and the target object is updated to again satisfy the preset snoop condition.
In one optional example of the present disclosure, the user may initiate the object reset instruction by manipulating the handheld controller, for example, a reset button may be displayed in the virtual space corresponding to the target object, and the user may click the reset button by manipulating the handheld controller, thereby initiating the object reset instruction.
In another optional example of the present disclosure, the user may initiate the object reset instruction by making a set gesture: the object reset instruction is initiated by selecting, with the ray associated with the set gesture, and clicking the reset button corresponding to the target object.
Of course, similar to the object move-out instruction, the user may also initiate the object reset instruction by voice, and the embodiment of the present disclosure does not limit the manner in which the object reset instruction is initiated.
In response to the received object reset instruction, the processed at least part of the target object may be moved from the region outside the snoop space to an initial position within the snoop space. The initial position may be the position of the target object within the snoop space at the time the object move-out instruction was received, or it may be a preset position within the snoop space; for example, the preset position may be a position at a depth of a preset distance from the center point of the snoop window.
It should be noted that, in step 120, the target object may also be updated to not satisfy the preset snoop condition in response to the received object move-out instruction; for example, the snoop identifiers of the at least one material configured for the target object may all be updated from being the same as the target snoop identifier to being different from it. Then, in response to the received object reset instruction, after the processed at least part of the target object has been moved back to the initial position within the snoop space, the target object may be re-updated to satisfy the preset snoop condition. In an optional example of the present disclosure, the target snoop identifier is ID1, and the target object is configured with three materials (material 1, material 2, and material 3), each of which currently has the snoop identifier ID2; the snoop identifiers of material 1, material 2, and material 3 may then be updated from ID2 to ID1, so that the target object is updated to satisfy the preset snoop condition.
In the embodiment of the disclosure, in response to the object reset instruction, after the target object has moved back into the snoop space from outside it, the target object can be efficiently and reliably switched from not satisfying the preset snoop condition back to satisfying it through a simple identifier update, so that the snoop effect corresponding to the target object can be presented again when the user's line of sight views it through the positive direction of the snoop window.
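A minimal sketch of this reset handling follows; the duck-typed target object and the string shader names are illustrative assumptions, not the patent's implementation.

```python
def on_object_reset(target_object, initial_position, target_snoop_id):
    """Move the target object back inside the snoop space and restore the snoop identifiers
    (and the snoop-aware shader) so it satisfies the preset snoop condition again."""
    target_object.position = initial_position      # back to the initial position in the snoop space
    for m in target_object.materials:
        m.shader = "first_shader"                  # snoop-aware shader again
        m.snoop_id = target_snoop_id               # e.g. restore ID2 -> ID1
```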
In one embodiment of the present disclosure, the method further comprises: in response to a received action instruction, controlling the target object to present a corresponding action effect.
In one optional example of the present disclosure, the user may initiate the action instruction by manipulating a handheld controller; for example, the user may initiate an action instruction by clicking, with the handheld controller, a target object located in the snoop space. Of course, similar to the object move-out instruction, the user may also initiate the action instruction by a set gesture, by voice, and the like, and the embodiment of the present disclosure does not limit the manner in which the action instruction is initiated.
In response to the received action instruction, the target object may be controlled to present a corresponding action effect. For example, if the user initiates an action instruction by clicking the target object with the handheld controller, the target object can be controlled to present a jumping action; if the user initiates an action instruction by double-clicking the target object with the handheld controller, the target object can be controlled to present a hand-clapping action. An action instruction may also be initiated by clicking a corresponding action button, and the present disclosure does not limit the manner in which an action instruction is initiated. The action effects are not limited to jumping and clapping; corresponding action effects can be set according to actual needs.
In an optional embodiment of the present disclosure, the target object may receive the action instruction and complete the action effect when it satisfies the preset snoop condition and is located in the snoop space. The target object may also receive the action instruction and complete the action effect when it is in a visible state in the entire virtual space.
In the embodiment of the disclosure, the target object can be controlled to present a corresponding action effect through the action instruction. On the one hand, this makes the display effects the head-mounted display device can provide richer; on the other hand, it increases the fun of interacting with virtual objects in the virtual space of the head-mounted display device, improving the user experience.
In an alternative embodiment of the present disclosure, the snoop window may first be assigned a material A; material A may use a pre-programmed shader a, and shader a may support adding a number ID (equivalent to the target snoop identifier above) to material A. For example, the number ID corresponding to material A may be added through the setting interface of the snoop window, and the added number ID may specifically be "1". The target object is then assigned a material B, which may use a pre-programmed shader b; shader b needs to be used together with shader a, and shader b may support adding a number ID (equivalent to the at least one snoop identifier configured for the target object above) to material B. Shader a and shader b are different shaders. When the number ID added via shader a is the same as the number ID added via shader b, the effect that "the target object can only be observed through the snoop window" is achieved (equivalent to the target object presenting a snoop effect); for the snoop effect, see the display of the lovely dragon observed from different angles in fig. 3-1 to fig. 3-3. The number of target objects may be one or more.
The target object may include a plurality of sub-objects, each of which may be assigned at least one material. For example, one sub-object is assigned material B, another sub-object is assigned material C, and yet another sub-object is assigned material D. The snoop effect corresponding to the target object is presented only when the shaders used by materials B, C and D are all shader b (the shader matched with shader a) and the number IDs corresponding to materials B, C and D are all the same as the number ID corresponding to material A. As shown in fig. 3-4 to fig. 3-5, the target object can then be viewed within the scope of the snoop space.
In an alternative embodiment of the present disclosure, the preset snoop condition may include: the shader configured for the snooped object is the shader corresponding to the shader configured for the snoop window, and the snoop identifier of each of the at least one material configured for the snooped object is the same as the target snoop identifier. The target snoop identifier is the identifier configured for the snoop window. The shader configured for the snoop window is used to configure the snoop window, and the shader configured for the snooped object is used to configure the snooped object. The shader configured for the snoop window supports configuring the target snoop identifier for the snoop window, and the shader configured for the snooped object supports configuring snoop identifiers for the at least one material configured for the snooped object. There may be more than one shader configured for a snooped object.
If the user clicks the "walk the dragon" button (equivalent to the move-out button above) in fig. 3-6 to indicate that the target object needs to be moved out of the snoop space, the target object may be assigned a material B2 (material B2 may have the same attributes and attribute values as material B except for the snoop identifier); the shader used by material B2 may be a shader b2, and shader b2 may be different from shader a and shader b (this is one implementation of processing at least part of the target object to be visible in the entire virtual space). At this time, the display of the target object is not affected by the snoop window.
As shown in fig. 3-7, during the process of moving the target object from the inside of the snoop window to the outside of the snoop window, the target object can be displayed normally and completely since the display of the shader b2 is not affected by the snoop window.
As shown in fig. 3-8, when the target object is already outside the snoop window, the target object can be displayed normally and completely since the display of the shader b2 is not affected by the snoop window.
If the user indicates, by clicking the "go home" button (equivalent to the reset button above) in fig. 3-7 or fig. 3-8, that the target object needs to be moved back into the snoop window, the material B2 corresponding to shader b2 may be maintained during the move. As shown in fig. 3-9, since the display of shader b2 is not affected by the snoop window, the target object can be displayed normally and completely, and it can finally return to the position shown in fig. 3-4.
After the target object has moved back into the snoop window, the stereoscopic module can be assigned material B again, whose shader is shader b. Since the display of shader b is affected by the snoop window, a snoop effect can again be presented corresponding to the target object. At this point, if the target object performs an action such as a jump, only the portion of the target object within the snoop space is presented to the user.
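The whole walkthrough can be condensed into the following sketch. The SnoopWindow and TargetObject classes, the string shader/material names, and the method names are illustrative assumptions used only to trace the state changes described above; they are not the patent's API.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class SnoopWindow:
    material: str = "A"        # material A uses shader a, which carries the number ID
    number_id: str = "1"       # target snoop identifier

@dataclass
class TargetObject:
    material: str = "B"        # material B uses shader b (matched with shader a)
    number_id: Optional[str] = "1"

    def snoop_effect_active(self, window: SnoopWindow) -> bool:
        # Visible only through the snoop window when the number IDs match.
        return self.material == "B" and self.number_id == window.number_id

    def move_out(self) -> None:
        # "Walk the dragon": assign material B2 (shader b2), unaffected by the snoop window.
        self.material, self.number_id = "B2", None

    def reset(self, window: SnoopWindow) -> None:
        # "Go home": restore material B and the window's number ID.
        self.material, self.number_id = "B", window.number_id

window, dragon = SnoopWindow(), TargetObject()
assert dragon.snoop_effect_active(window)       # fig. 3-1 to 3-5: seen only through the window
dragon.move_out()
assert not dragon.snoop_effect_active(window)   # fig. 3-6 to 3-8: visible in the whole virtual space
dragon.reset(window)
assert dragon.snoop_effect_active(window)       # fig. 3-9: snoop effect presented again
```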
In summary, in the embodiments of the present disclosure, by setting a snoop window carrying a number ID in the virtual space and placing snooped content (equivalent to the target object above) carrying a number ID in the virtual space, a snoop effect can be achieved in some cases, and the snoop effect can be shown when the user views the virtual space from multiple angles; in other cases, the effect of the snoop window can be stripped away, thereby achieving a smooth transition.
Any method for controlling an object display effect provided by the embodiments of the present disclosure may be performed by any suitable device having data processing capabilities, including but not limited to terminal devices, servers, and the like. Alternatively, any method for controlling an object display effect provided by the embodiments of the present disclosure may be executed by a processor, for example by the processor calling corresponding instructions stored in a memory. This will not be repeated below.
Exemplary devices
Fig. 6 is a schematic structural diagram of an apparatus for controlling an object display effect according to an exemplary embodiment of the present disclosure, where the apparatus shown in fig. 6 includes a setting module 610 and a processing module 620.
A setup module 610, configured to set a snoop window in a virtual space of a head-mounted display device, the snoop window being configured such that: the snoop window has a positive direction; for an object meeting the preset snoop condition, when the user's line of sight views the object through the positive direction of the snoop window, the part of the object within the snoop space is in a visible state and the part of the object beyond the snoop space is in an invisible state, the snoop space being the interior space bounded by the user's viewpoint and the edge of the snoop window;
and a processing module 620, configured to, when a target object meeting the preset snoop condition is located in the snoop space, process at least part of the target object to be in a visible state in the entire virtual space in response to a received object move-out instruction, and move the processed at least part of the target object from the snoop space to a region outside the snoop space.
In an alternative embodiment of the present disclosure, the snoop window has a target snoop identification;
as shown in fig. 7, the apparatus further includes:
a first determining module 630, configured to determine at least one sub-object included in the target object, where each sub-object is configured with at least one material;
the second determining module 640 is configured to determine whether the target object satisfies the predetermined snoop condition based on the at least one texture configured for the target object and the target snoop identifier.
In an optional embodiment of the present disclosure, the second determining module 640 is configured to:
determine that the target object satisfies the preset snoop condition when the at least one material configured for the target object has snoop identifiers and those snoop identifiers are the same as the target snoop identifier.
In an optional embodiment of the present disclosure, each of the at least one material configured for the target object is pre-configured with two corresponding shaders: a first shader that supports configuring snoop identifiers for materials, and a second shader that does not support configuring snoop identifiers for materials;
a second determining module 640, configured to:
and under the condition that all shaders used by at least one material configured for the target object are corresponding first shaders and the snoop identifications of the at least one material configured for the target object are the same as the target snoop identification, determining that the target object meets the preset snoop condition.
In an optional implementation of the present disclosure, the processing module 620 is configured to:
and switching the shaders used by at least part of at least one material in the at least one material configured for the target object into corresponding second shaders so that at least part of the target object is in a visible state in the whole virtual space.
In an alternative embodiment of the present disclosure, as shown in fig. 7, the apparatus further comprises:
the reset module 650 is configured to, after at least a portion of the processed target object is moved from the snoop space to a region outside the snoop space, move at least a portion of the processed target object from the region outside the snoop space to an initial position within the snoop space in response to the received object reset instruction, and update the target object again to satisfy a preset snoop condition.
In an alternative embodiment of the present disclosure, as shown in fig. 7, the apparatus further comprises:
a control module 660, configured to control the target object to present a corresponding action effect in response to a received action instruction.
In an optional embodiment of the present disclosure, the target object comprises a three-dimensional model.
In the apparatus of the present disclosure, the various optional embodiments, optional implementations, and optional examples disclosed above may be flexibly selected and combined as needed to achieve the corresponding functions and effects; they are not enumerated one by one here.
An embodiment of the present disclosure also provides a wearable device, which includes the above apparatus for controlling object display effects. It should be noted that, for the specific implementation and technical effects of the apparatus for controlling object display effects and of the wearable device, reference may be made to the description in the foregoing method embodiments; details are not repeated here.
Exemplary electronic device
Next, an electronic device according to an embodiment of the present disclosure is described with reference to Fig. 8. Fig. 8 illustrates a block diagram of an electronic device 800 in accordance with an embodiment of the disclosure. As shown in Fig. 8, the electronic device 800 includes one or more processors 810 and a memory 820.
Processor 810 may be a Central Processing Unit (CPU) or other form of processing unit having data processing capabilities and/or instruction execution capabilities, and may control other components in electronic device 800 to perform desired functions.
Memory 820 may include one or more computer program products that may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. The volatile memory may include, for example, random access memory (RAM), cache memory (cache), and/or the like. The non-volatile memory may include, for example, read-only memory (ROM), hard disk, flash memory, etc. One or more computer program instructions may be stored on the computer-readable storage medium and executed by the processor 810 to implement the methods for controlling object display effects of the various embodiments of the present disclosure described above and/or other desired functions. Various content such as an input signal, signal components, noise components, etc. may also be stored in the computer-readable storage medium.
The memory 820 may be used to store a computer program product. The processor 810 may be configured to execute a computer program product stored in the memory 820, and when executed, perform steps in a method for controlling object display effects according to various embodiments of the present disclosure described in the "exemplary methods" section above in this specification.
In one example, the electronic device 800 may further include: an input device 830 and an output device 840, which are interconnected by a bus system and/or other form of connection mechanism (not shown).
For example, when the electronic device 800 is a first device or a second device, the input device 830 may be a microphone or a microphone array. When the electronic device 800 is a stand-alone device, the input device 830 may be a communication network connector for receiving the acquired input signals from the first device and the second device.
The input device 830 may also include, for example, a keyboard, a mouse, and the like.
The output device 840 may output various kinds of information to the outside. The output device 840 may include, for example, a display, speakers, a printer, a communication network and remote output devices connected thereto, and the like.
Of course, for simplicity, only some of the components of the electronic device 800 relevant to the present disclosure are shown in fig. 8, omitting components such as buses, input/output interfaces, and so forth. In addition, electronic device 800 may include any other suitable components depending on the particular application.
Exemplary computer program product and computer-readable storage Medium
In addition to the above-described methods and apparatus, embodiments of the present disclosure may also be a computer program product comprising computer program instructions that, when executed by a processor, cause the processor to perform the steps in the method for controlling an object display effect according to various embodiments of the present disclosure described in the "exemplary methods" section above of this specification.
The computer program product may write program code for carrying out operations for embodiments of the present disclosure in any combination of one or more programming languages, including an object oriented programming language such as Java, C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server.
Furthermore, embodiments of the present disclosure may also be a computer-readable storage medium having stored thereon computer program instructions that, when executed by a processor, cause the processor to perform steps in a method for controlling an object display effect according to various embodiments of the present disclosure described in the "exemplary methods" section above in this specification.
The computer-readable storage medium may take any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may include, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The foregoing describes the general principles of the present disclosure in conjunction with specific embodiments. However, it should be noted that the advantages, effects, and the like mentioned in the present disclosure are merely examples and are not limiting; they should not be considered essential to the various embodiments of the present disclosure. Furthermore, the specific details disclosed above are provided only for the purposes of illustration and ease of understanding, and are not limiting; the present disclosure is not restricted to being implemented with the specific details described above.
In this specification, the embodiments are described in a progressive manner, with each embodiment focusing on its differences from the other embodiments; for the parts that are the same or similar among the embodiments, reference may be made to one another. As for the system embodiment, since it basically corresponds to the method embodiment, its description is relatively simple, and for the relevant points reference may be made to the corresponding part of the description of the method embodiment.
The block diagrams of the devices, apparatuses, equipment, and systems involved in the present disclosure are given only as illustrative examples and are not intended to require or imply that the connections, arrangements, and configurations must be made in the manner shown in the block diagrams. As will be appreciated by those skilled in the art, these devices, apparatuses, equipment, and systems may be connected, arranged, or configured in any manner. Words such as "including," "comprising," "having," and the like are open-ended words that mean "including, but not limited to," and are used interchangeably therewith. The word "or" as used herein means, and is used interchangeably with, the term "and/or," unless the context clearly dictates otherwise. The phrase "such as" as used herein means, and is used interchangeably with, the phrase "such as but not limited to."
The method and apparatus of the present disclosure may be implemented in a number of ways. For example, the methods and apparatus of the present disclosure may be implemented by software, hardware, firmware, or any combination of software, hardware, and firmware. The above-described order for the steps of the method is for illustration only, and the steps of the method of the present disclosure are not limited to the order specifically described above unless specifically stated otherwise. Further, in some embodiments, the present disclosure may also be embodied as programs recorded in a recording medium, the programs including machine-readable instructions for implementing the methods according to the present disclosure. Thus, the present disclosure also covers a recording medium storing a program for executing the method according to the present disclosure.
It is also noted that in the devices, apparatuses, and methods of the present disclosure, each component or step can be decomposed and/or recombined. These decompositions and/or recombinations are to be considered equivalents of the present disclosure.
The previous description of the disclosed aspects is provided to enable any person skilled in the art to make or use the present disclosure. Various modifications to these aspects will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other aspects without departing from the scope of the disclosure. Thus, the present disclosure is not intended to be limited to the aspects shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
The foregoing description has been presented for purposes of illustration and description. Furthermore, this description is not intended to limit embodiments of the disclosure to the form disclosed herein. While a number of example aspects and embodiments have been discussed above, those of skill in the art will recognize certain variations, modifications, alterations, additions and sub-combinations thereof.

Claims (13)

1. A method for controlling object display effects, comprising:
setting a snoop window in a virtual space of a head-mounted display device, the snoop window being configured as follows: the snoop window has a positive direction; for an object meeting a preset snoop condition, when a user's line of sight looks at the object through the positive direction of the snoop window, the part of the object within a snoop space is in a visible state and the part of the object beyond the snoop space is in an invisible state, the snoop space being the interior space bounded by the user's viewpoint and the edges of the snoop window;
when a target object meeting the preset snoop condition is located in the snoop space, in response to a received object move-out instruction, processing at least part of the target object to be in a visible state in the entire virtual space, and moving the processed at least part of the target object from the snoop space to a region outside the snoop space.
2. The method of claim 1, wherein the snoop window has a target snoop identifier;
the method further comprising:
determining at least one sub-object included in the target object, wherein each sub-object is configured with at least one material;
and determining whether the target object meets the preset snoop condition based on the at least one material configured for the target object and the target snoop identifier.
3. The method of claim 2, wherein the determining whether the target object meets the preset snoop condition based on the at least one material configured for the target object and the target snoop identifier comprises:
determining that the target object meets the preset snoop condition when each of the at least one material configured for the target object has a snoop identifier and all the snoop identifiers of the at least one material configured for the target object are the same as the target snoop identifier.
4. The method according to claim 3, wherein each of the at least one material configured for the target object is pre-configured with two corresponding shaders: a first shader that supports configuring a snoop identifier for a material, and a second shader that does not support configuring a snoop identifier for a material;
the determining that the target object meets the preset snoop condition when each of the at least one material configured for the target object has a snoop identifier and all the snoop identifiers of the at least one material configured for the target object are the same as the target snoop identifier comprises:
determining that the target object meets the preset snoop condition when all shaders used by the at least one material configured for the target object are the corresponding first shaders and the snoop identifiers of the at least one material configured for the target object are the same as the target snoop identifier.
5. The method of claim 4, wherein the processing at least part of the target object to be in a visible state in the entire virtual space comprises:
switching the shaders used by at least part of the at least one material configured for the target object to the corresponding second shaders, so that at least part of the target object is in a visible state in the entire virtual space.
6. The method of any of claims 1 to 5, wherein, after the moving of the processed at least part of the target object from the snoop space to a region outside the snoop space, the method further comprises:
in response to a received object reset instruction, moving the processed at least part of the target object from the region outside the snoop space back to its initial position within the snoop space, and updating the target object so that it again satisfies the preset snoop condition.
7. The method of any of claims 1 to 5, further comprising:
in response to a received action instruction, controlling the target object to present a corresponding action effect.
8. The method of any of claims 1 to 5, wherein the target object comprises a three-dimensional model.
9. An apparatus for controlling an object display effect, comprising:
a setting module, configured to set a snoop window in a virtual space of a head-mounted display device, the snoop window being configured as follows: the snoop window has a positive direction; for an object meeting a preset snoop condition, when a user's line of sight looks at the object through the positive direction of the snoop window, the part of the object within a snoop space is in a visible state and the part of the object beyond the snoop space is in an invisible state, the snoop space being the interior space bounded by the user's viewpoint and the edges of the snoop window;
and a processing module, configured to, when a target object meeting the preset snoop condition is located in the snoop space, respond to a received object move-out instruction by processing at least part of the target object to be in a visible state in the entire virtual space and moving the processed at least part of the target object from the snoop space to a region outside the snoop space.
10. A wearable device comprising the apparatus for controlling object display effects of claim 9.
11. An electronic device, comprising:
a memory for storing a computer program product;
a processor for executing the computer program product stored in the memory, and when executed, implementing the method for controlling object display effects of any of the above claims 1 to 8.
12. A computer readable storage medium having stored thereon computer program instructions, which when executed by a processor, implement the method for controlling object display effects of any of the preceding claims 1 to 8.
13. A computer program product comprising computer program instructions, characterized in that the computer program instructions, when executed by a processor, implement the method for controlling an object display effect of any of the preceding claims 1 to 8.
CN202211015194.3A 2022-08-23 2022-08-23 Method and device for controlling object display effect and wearable device Pending CN115390723A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211015194.3A CN115390723A (en) 2022-08-23 2022-08-23 Method and device for controlling object display effect and wearable device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211015194.3A CN115390723A (en) 2022-08-23 2022-08-23 Method and device for controlling object display effect and wearable device

Publications (1)

Publication Number Publication Date
CN115390723A true CN115390723A (en) 2022-11-25

Family

ID=84120185

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211015194.3A Pending CN115390723A (en) 2022-08-23 2022-08-23 Method and device for controlling object display effect and wearable device

Country Status (1)

Country Link
CN (1) CN115390723A (en)

Similar Documents

Publication Publication Date Title
US9886102B2 (en) Three dimensional display system and use
US10303323B2 (en) System and method for facilitating user interaction with a three-dimensional virtual environment in response to user input into a control device having a graphical interface
CN107810465B (en) System and method for generating a drawing surface
US10409443B2 (en) Contextual cursor display based on hand tracking
JP6133972B2 (en) 3D graphic user interface
EP3055755B1 (en) Scaling of visual elements on a user interface
US20220011924A1 (en) Annotation using a multi-device mixed interactivity system
US10895966B2 (en) Selection using a multi-device mixed interactivity system
EP3814876B1 (en) Placement and manipulation of objects in augmented reality environment
US11232643B1 (en) Collapsing of 3D objects to 2D images in an artificial reality environment
WO2022218146A1 (en) Devices, methods, systems, and media for an extended screen distributed user interface in augmented reality
CN109725956B (en) Scene rendering method and related device
US11507019B2 (en) Displaying holograms via hand location
US20080252661A1 (en) Interface for Computer Controllers
CN113318428A (en) Game display control method, non-volatile storage medium, and electronic device
CN114942737A (en) Display method, display device, head-mounted device and storage medium
US20160363767A1 (en) Adjusted location hologram display
EP3864494B1 (en) Locating spatialized sounds nodes for echolocation using unsupervised machine learning
US20220130100A1 (en) Element-Based Switching of Ray Casting Rules
CN115390723A (en) Method and device for controlling object display effect and wearable device
US10990251B1 (en) Smart augmented reality selector
CN109697001A (en) The display methods and device of interactive interface, storage medium, electronic device
EP4089506A1 (en) Element-based switching of ray casting rules
US20230334724A1 (en) Transposing Virtual Objects Between Viewing Arrangements
CN115454255B (en) Switching method and device for article display, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination