CN117572992A - Virtual object display method, device, electronic equipment, storage medium and program product


Info

Publication number: CN117572992A
Application number: CN202210945256.4A
Authority: CN (China)
Other languages: Chinese (zh)
Prior art keywords: plane, target, preset, type, determining
Legal status: Pending (the legal status is an assumption and is not a legal conclusion)
Inventor: 程林
Original and current assignee: Beijing Zitiao Network Technology Co Ltd
Application filed by Beijing Zitiao Network Technology Co Ltd

Classifications

    • G06F3/04815 - Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user viewpoint with respect to the environment or object
    • G06F3/0484 - Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/04883 - Interaction techniques based on graphical user interfaces [GUI] using a touch-screen or digitiser for inputting data by handwriting, e.g. gesture or text


Abstract

The present disclosure relates to a virtual object display method and apparatus, an electronic device, a storage medium, and a program product, which can improve the triggering efficiency of virtual objects and the user experience. The method includes: determining, in an interaction space based on augmented reality technology, a target position at which a target plane is located, where the target plane is a plane of a target object in the real space corresponding to the interaction space; and displaying a target virtual object at the target position in the interaction space.

Description

Virtual object display method, device, electronic equipment, storage medium and program product
Technical Field
The present disclosure relates to the technical field of augmented reality, and in particular to a virtual object display method and apparatus, an electronic device, a storage medium, and a program product.
Background
Currently, in interaction spaces based on technologies such as virtual reality (VR), mixed reality (MR), or augmented reality (AR), virtual objects (such as GUI objects, or functional objects such as a keyboard) are usually displayed irregularly within a certain spatial range centered on the user, and the user performs a certain function in the interaction space by performing a triggering operation on a virtual object.
However, since virtual objects are usually displayed floating in mid-air, this display manner lacks trigger feedback: the user perceives a virtual object in the interaction space only visually, which is counterintuitive. As a result, the user may need multiple triggering operations to trigger a virtual object, or may trigger it repeatedly, so the triggering efficiency of the virtual object is low and the user experience is poor.
Disclosure of Invention
To solve or at least partially solve the above technical problems, the present disclosure provides a virtual object display method and apparatus, an electronic device, a storage medium, and a program product.
In a first aspect, an embodiment of the present disclosure provides a virtual object display method, including: determining, in an interaction space based on augmented reality technology, a target position at which a target plane is located, where the target plane is a plane of a target object in the real space corresponding to the interaction space; and displaying a target virtual object at the target position in the interaction space.
Optionally, before the determining the target position where the target plane is located, the method further includes: determining at least one plane in the real space, the at least one plane being a plane of at least one object in the real space; the target plane is determined from the at least one plane.
Optionally, the determining at least one plane in the real space includes: acquiring point cloud data of the real space; identifying the at least one plane based on the point cloud data; alternatively, the at least one object is identified based on the point cloud data, and the at least one plane is determined from the at least one object.
Optionally, the determining at least one plane in the real space includes: receiving a target input in the interaction space, wherein the target input is gesture input or peripheral input; determining the at least one plane in response to the target input; alternatively, the at least one object is determined in response to the target input, and the at least one plane is determined from the at least one object.
Optionally, after the determining the at least one plane in the real space, the method further comprises: mapping the at least one plane to point cloud data of the real space respectively to obtain mapping information corresponding to the at least one plane; the mapping information is saved.
Optionally, the determining the target plane from the at least one plane includes: determining a target object type to which the target virtual object belongs; and determining, according to a preset mapping relationship, a plane among the at least one plane that matches the target object type as the target plane.
Optionally, the preset mapping relationship includes at least one preset object type and at least one preset plane information matched with each preset object type, where the preset plane information is a preset plane type or a preset plane, and the preset plane type is used for indicating a class of planes.
Optionally, the determining, according to a preset mapping relationship, a plane among the at least one plane that matches the target object type as the target plane includes: determining, according to the preset mapping relationship, at least one piece of target preset plane information matched with the target object type; and determining the target plane from the at least one plane based on the at least one piece of target preset plane information. When the at least one piece of target preset plane information is at least one target preset plane type, the plane type to which the target plane belongs is one of the at least one target preset plane type; when the at least one piece of target preset plane information is at least one target preset plane, the target plane is one of the at least one target preset plane.
Optionally, the preset mapping relationship further includes a matching degree between each preset object type and each corresponding piece of preset plane information. The determining, according to the preset mapping relationship, at least one piece of target preset plane information matched with the target object type includes: determining, according to the preset mapping relationship, at least one piece of target preset plane information matched with the target object type and the matching degree between the target object type and each piece of target preset plane information. The determining the target plane from the at least one plane based on the at least one piece of target preset plane information includes: determining the target plane from the at least one plane based on the at least one piece of target preset plane information and those matching degrees. When the at least one piece of target preset plane information is at least one target preset plane type, the plane type to which the target plane belongs is the type, among the at least one target preset plane type, with the highest matching degree with the target object type; when the at least one piece of target preset plane information is at least one target preset plane, the target plane is the plane, among the at least one target preset plane, with the highest matching degree with the target object type.
Optionally, the interaction space includes a virtual space, and the method further includes: rendering the target object based on position information of the target object in the interaction space.
In a second aspect of the embodiments of the present disclosure, there is provided a virtual object display apparatus including: a determining module and a display module; the determining module is used for determining a target position of a target plane in an interaction space based on an augmented reality technology, wherein the target plane is a plane of a target object in a real space corresponding to the interaction space; the display module is used for displaying a target virtual object on the target position of the interaction space.
Optionally, the determining module is further configured to determine at least one plane in the real space, where the at least one plane is a plane of at least one object in the real space, before determining the target position where the target plane is located; the target plane is determined from the at least one plane.
Optionally, the determining module is specifically configured to obtain point cloud data of the real space; identifying the at least one plane based on the point cloud data; alternatively, the at least one object is identified based on the point cloud data, and the at least one plane is determined from the at least one object.
Optionally, the determining module is specifically configured to receive a target input in the interaction space, where the target input is a gesture input or a peripheral input; determining the at least one plane in response to the target input; alternatively, the at least one object is determined in response to the target input, and the at least one plane is determined from the at least one object.
Optionally, the device further comprises a mapping module and a saving module; the mapping module is used for mapping at least one plane in the real space into point cloud data of the real space respectively after determining the at least one plane, so as to obtain mapping information corresponding to the at least one plane; the storage module is used for storing the mapping information.
Optionally, the determining module is specifically configured to determine a target object type to which the target virtual object belongs; and determining a plane matched with the target object type in the at least one plane as the target plane according to a preset mapping relation.
Optionally, the preset mapping relationship includes at least one preset object type and at least one preset plane information matched with each preset object type, where the preset plane information is a preset plane type or a preset plane, and the preset plane type is used for indicating a class of planes.
Optionally, the determining module is specifically configured to determine, according to the preset mapping relationship, at least one piece of target preset plane information matched with the target object type, and to determine the target plane from the at least one plane based on the at least one piece of target preset plane information. When the at least one piece of target preset plane information is at least one target preset plane type, the plane type to which the target plane belongs is one of the at least one target preset plane type; when the at least one piece of target preset plane information is at least one target preset plane, the target plane is one of the at least one target preset plane.
Optionally, the preset mapping relationship further includes a matching degree between each preset object type and each corresponding piece of preset plane information. The determining module is specifically configured to determine, according to the preset mapping relationship, at least one piece of target preset plane information matched with the target object type and the matching degree between the target object type and each piece of target preset plane information, and to determine the target plane from the at least one plane based on the at least one piece of target preset plane information and those matching degrees. When the at least one piece of target preset plane information is at least one target preset plane type, the plane type to which the target plane belongs is the type, among the at least one target preset plane type, with the highest matching degree with the target object type; when the at least one piece of target preset plane information is at least one target preset plane, the target plane is the plane, among the at least one target preset plane, with the highest matching degree with the target object type.
Optionally, the interaction space comprises a virtual space, and the apparatus further comprises a rendering module; the rendering module is used for rendering the target object based on the position information of the target object in the interaction space.
A third aspect of an embodiment of the present disclosure provides an electronic device, the electronic device including a processor, a memory, and a computer program stored on the memory and executable on the processor, the computer program implementing the virtual object display method according to the first aspect when executed by the processor.
In a fourth aspect of embodiments of the present disclosure, there is provided a computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements the virtual object display method according to the first aspect.
A fifth aspect of embodiments of the present disclosure provides a computer program product, including a computer program which, when executed by a processor, implements the virtual object display method according to the first aspect.
A sixth aspect of embodiments of the present disclosure provides a chip, the chip including a processor and a communication interface, the communication interface being coupled to the processor, the processor being configured to execute program instructions to implement the virtual object display method according to the first aspect.
Compared with the prior art, the technical solutions provided by the embodiments of the present disclosure have the following advantages. In the embodiments of the present disclosure, a target position at which a target plane is located is determined in an interaction space based on augmented reality technology, where the target plane is a plane of a target object in the real space corresponding to the interaction space, and a target virtual object is displayed at the target position in the interaction space. By determining the position, in the interaction space, of the plane of an object in the real space and then displaying the virtual object at that position, the scheme achieves the effect of attaching the virtual object to the object in the real space.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the disclosure and together with the description, serve to explain the principles of the disclosure.
In order to more clearly illustrate the embodiments of the present disclosure or the solutions in the prior art, the drawings that are required for the description of the embodiments or the prior art will be briefly described below, and it will be obvious to those skilled in the art that other drawings can be obtained from these drawings without inventive effort.
Fig. 1 is a first schematic flowchart of a virtual object display method according to an embodiment of the present disclosure;
Fig. 2 is a second schematic flowchart of a virtual object display method according to an embodiment of the present disclosure;
Fig. 3 is a third schematic flowchart of a virtual object display method according to an embodiment of the present disclosure;
Fig. 4 is a block diagram of a virtual object display apparatus according to an embodiment of the present disclosure;
Fig. 5 is a block diagram of an electronic device according to an embodiment of the present disclosure.
Detailed Description
In order that the above objects, features and advantages of the present disclosure may be more clearly understood, a further description of aspects of the present disclosure will be provided below. It should be noted that, without conflict, the embodiments of the present disclosure and features in the embodiments may be combined with each other.
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present disclosure; the present disclosure may, however, be practiced otherwise than as described herein. Apparently, the embodiments in the specification are only some, rather than all, of the embodiments of the present disclosure.
The terms "first", "second", and the like in the description and in the claims are used for distinguishing between similar objects and not necessarily for describing a particular sequence or chronological order. It is to be understood that data so used may be interchanged where appropriate, so that embodiments of the present disclosure may be practiced in sequences other than those illustrated or described herein. Objects identified by "first", "second", etc. are generally of the same type, and the number of such objects is not limited; for example, the first object may be one or more objects. Furthermore, in the description and claims, "and/or" denotes at least one of the connected objects, and the character "/" generally indicates an "or" relationship between the associated objects.
In an interaction space based on a technology such as virtual reality, mixed reality, or augmented reality (for example, the interaction space of a head-mounted device), a virtual object displayed floating in mid-air lacks trigger feedback and suffers from a model-penetration (clipping) phenomenon. The lack of trigger feedback results in low triggering efficiency of the virtual object. Model penetration means that a virtual object in the interaction space is placed inside, or passes through, a real object in the corresponding real space, so that the user cannot trigger the virtual object through near-field gestures in the interaction space. For example, suppose a virtual object is set to be displayed 1 meter in front of the user in the interaction space of a head-mounted device. We cannot predict in what environment the user will use the device: if the user is in an environment smaller than 1 meter, with a wall 80 cm away, then, because positions in the interaction space correspond to positions in the real space, the virtual object is actually displayed beyond the wall. With far-field interaction the virtual object can still be triggered by rays, but with near-field gesture interaction the user's hand can only reach the wall at 80 cm and cannot trigger the virtual object. Since the spatial environment in which the user wears the head-mounted device cannot be known in advance, near-field interaction may be blocked by a real physical object due to model penetration, the virtual object may be impossible to trigger, and the user may even be injured.
To solve the above problem, in the embodiments of the present disclosure, the position, in the interaction space based on augmented reality technology, of the plane of an object in the real space is determined, and the virtual object is then displayed at that position, achieving the effect of attaching the virtual object to the object in the real space. When the user triggers the virtual object, the user must touch the object in the real space. Thus, on one hand, it is easy to detect whether the triggering operation is effective; on another, because the user can feel the object in the real space, multiple touch operations are unnecessary and the virtual object is not triggered repeatedly; and further, the model-penetration phenomenon is avoided. Therefore, the triggering efficiency of the virtual object can be improved, the safety of the triggering operation can be ensured, and the user experience is improved.
Moreover, compared with a method that gives feedback to the user through handle vibration or a voice prompt only after the virtual object has been triggered, the scheme provided by the embodiments of the present disclosure gives the user corresponding feedback at the moment the user triggers the virtual object, and therefore improves the triggering efficiency of the virtual object more effectively.
The embodiments of the present disclosure may be applied to a head-mounted device based on augmented reality technology, where the head-mounted device may be a device having a VR, MR, or AR function, for example a VR headset, VR glasses, a VR helmet, an MR headset, MR glasses, an MR helmet, an AR headset, AR glasses, or an AR helmet; the specific device may be determined according to the actual situation and is not limited in the embodiments of the present disclosure. The embodiments of the present disclosure may also be applied to other devices based on augmented reality technology, without limitation.
The execution subject of the virtual object display method provided in the embodiments of the present disclosure may be a device based on augmented reality technology, or a functional module and/or functional entity in such a device capable of implementing the method; this may be determined according to actual use requirements and is not limited in the embodiments of the present disclosure.
The virtual object display method provided by the embodiment of the present disclosure is described in detail below with reference to the accompanying drawings through specific embodiments and application scenarios thereof.
As shown in fig. 1, an embodiment of the present disclosure provides a virtual object display method, which may include steps 101 to 102 described below.
101. Determining, in an interaction space based on augmented reality technology, the target position at which a target plane is located.
The target plane is a plane of a target object in real space corresponding to the interaction space.
It can be understood that positions in the interaction space based on augmented reality technology correspond one-to-one to positions in the real space. Therefore, the position of the target plane in the interaction space (i.e., the target position) can be determined directly; alternatively, the position of the target plane in the real space can be determined first and then mapped into the interaction space to obtain the target position. The specific method for determining the target position may be chosen according to the actual situation and is not limited here.
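As an illustration only, the following minimal sketch shows the second approach: mapping a real-space point into interaction-space coordinates through a homogeneous transform. The transform T_real_to_interaction and the sample coordinates are assumptions made for the example, not part of the disclosure.

```python
# Minimal sketch of step 101 (an illustrative assumption, not the disclosed
# implementation): map a plane's real-space position into the interaction
# space via a 4x4 homogeneous transform.
import numpy as np

def to_interaction_space(point_real: np.ndarray,
                         T_real_to_interaction: np.ndarray) -> np.ndarray:
    """Map a 3D point from real-space to interaction-space coordinates."""
    p = np.append(point_real, 1.0)            # homogeneous coordinates
    return (T_real_to_interaction @ p)[:3]

# Since the two spaces correspond one-to-one, the transform may simply be
# the identity in the simplest case.
T = np.eye(4)
plane_center_real = np.array([0.0, 0.8, 1.2])  # e.g., the center of a desktop
target_position = to_interaction_space(plane_center_real, T)
```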
102. A target virtual object is displayed at the target location in the interaction space.
The target virtual object may be a graphical user interface (GUI), a control, an identifier, or another virtual object, which may be determined according to the actual situation and is not limited here.
A GUI is a user interface of a computer displayed in a graphical manner.
A control is an interface element with a certain function, such as a keyboard control.
An identifier is a mark indicating a certain object or function, such as an icon, a character, or a picture.
For example, when the target virtual object is a seat identifier (such as a seat icon), the target object is a seat and the target plane is the plane corresponding to the seat surface; by sitting on the virtual object, the user sits on the seat in real space. After the triggering operation of the user sitting on the virtual object is recognized, display of the target virtual object may be canceled, and a voice prompt may be played to inform the user that he or she is seated, or to prompt the user to adjust the sitting posture or position on the seat.
It can be appreciated that, in the embodiment of the present disclosure, the display position of the target virtual object is bound to the target position in the interaction space, and the target position is a position on the target plane of the target object in the real space; therefore, displaying the target virtual object at the target position in the interaction space attaches the target virtual object to the target object.
Attachment may be any relationship in which one object is affixed to another, such as adsorption or adhesion.
It will be appreciated that, before step 101, the user may trigger display of the target virtual object through another triggering operation, which may be determined according to the actual situation and is not limited here.
In the embodiment of the present disclosure, the position, in the interaction space based on augmented reality technology, of the plane of an object in the real space is determined, and the virtual object is then displayed at that position in the interaction space, achieving the effect of attaching the virtual object to the object in the real space. When the user triggers the virtual object, the user must touch the object in the real space to complete the trigger. This makes it easy to detect whether the triggering operation is effective; moreover, because the user can feel the object in the real space, multiple touch operations are unnecessary and the virtual object is not triggered repeatedly. Therefore, the triggering efficiency of the virtual object can be improved and the user experience enhanced.
Alternatively, the target plane may be determined directly (by automatic identification or user specification), or at least one plane may be determined first and the target plane then determined from the at least one plane, which may be determined according to the actual situation and is not limited here.
Optionally, in conjunction with fig. 1, as shown in fig. 2, before the step 101, the virtual object display method provided in the embodiment of the present disclosure may further include the following steps 103 and 104.
103. At least one plane in the real space is determined.
Wherein the at least one plane is a plane of at least one object in the real space.
Each plane of the at least one plane may belong to a different object, or some of the planes may belong to different objects while others belong to the same object, which may be determined according to the actual situation and is not limited here.
The at least one plane may be obtained by automatic identification based on point cloud data of the real space, may be determined by receiving a peripheral (e.g., handle) input or gesture input triggered by the user, or may be determined by combining automatic identification with a user-triggered peripheral or gesture input, which may be determined according to the actual situation and is not limited here.
104. The target plane is determined from the at least one plane.
The target plane may be any one of the at least one plane, or a plane among the at least one plane that meets a certain requirement, which may be determined according to the actual situation and is not limited here.
Alternatively, the above step 103 may be specifically realized by the following steps 103a and 103b, or steps 103a, 103c, and 103 d.
103a, acquiring point cloud data of the real space.
The point cloud data may be point cloud data of simultaneous localization and mapping (SLAM), or other point cloud data, which may be determined according to the actual situation and is not limited here.
For example, the SLAM point cloud data can be obtained by performing SLAM mapping of the real space with the augmented-reality-based device.
103b, identifying the at least one plane based on the point cloud data.
It can be understood that the at least one plane is obtained by automatically identifying and matching planes such as walls, the ground, and desktops in the real space with a point cloud recognition algorithm applied to the point cloud data.
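A minimal sketch of this kind of plane identification is given below, assuming the point cloud is held in an Open3D point cloud and using RANSAC plane fitting; the thresholds and the iterative extraction strategy are illustrative assumptions, not the algorithm prescribed by the disclosure.

```python
# Sketch of step 103b under the stated assumptions: extract dominant planes
# (wall, ground, desktop, ...) from SLAM point cloud data with RANSAC.
import open3d as o3d

def identify_planes(pcd: o3d.geometry.PointCloud, max_planes: int = 5):
    """Iteratively extract up to max_planes dominant planes from the cloud."""
    planes = []
    rest = pcd
    for _ in range(max_planes):
        if len(rest.points) < 100:             # too few points left to fit
            break
        model, inliers = rest.segment_plane(distance_threshold=0.02,
                                            ransac_n=3,
                                            num_iterations=1000)
        planes.append((model, inliers))        # model = (a, b, c, d) of ax+by+cz+d=0
        rest = rest.select_by_index(inliers, invert=True)
    return planes
```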
103c, identifying the at least one object based on the point cloud data.
For example, objects such as the ground, a wall, a chair, or a table in the real space are identified by a point cloud recognition algorithm applied to the point cloud data, to obtain the at least one object.
103d, determining the at least one plane from the at least one object.
In the embodiment of the present disclosure, two schemes for automatically identifying at least one plane based on point cloud data are provided: in one, the at least one plane is identified directly from the point cloud data; in the other, at least one object is identified from the point cloud data and the at least one plane is then determined from the at least one object. Which scheme is used may be determined according to the actual situation, which improves the flexibility of determining the target plane.
In the embodiment of the present disclosure, after an object or plane is identified, its corresponding position in the point cloud data can be marked, which improves the efficiency of identifying the object or plane later.
Alternatively, the above step 103 may be specifically implemented by the following steps 103e and 103f, or steps 103e, 103g and 103 h.
103e, receiving a target input in the interaction space.
Wherein the target input is a gesture input or a peripheral input.
A peripheral here is a controller of the augmented-reality-based device, for example a handle or another type of controller. Correspondingly, the peripheral input may be a handle input or another type of controller input, which is not limited here.
103f, responsive to the target input, determining the at least one plane.
The target input is an input by which the user specifies (or delineates) a plane of the real space; the target input can be recognized by a tracking technique to determine the plane in the real space.
Optionally, after each plane is determined by the device based on the augmented reality technology, a prompt message may also be output to prompt the user whether the plane is the plane, and then the plane is finally determined according to the confirmation information of the user.
103g, responsive to the target input, determining the at least one object.
The target input is an input by which the user specifies (or delineates) an object in the real space; the target input can be recognized by a tracking technique to determine the object. Specifically, at least one plane of the object may be determined through multiple inputs, and the three-dimensional object is then assembled from the at least one plane.
The tracking technique in steps 103f and 103g may be a six-degree-of-freedom (6DoF) tracking technique or another tracking technique, which may be determined according to the actual situation.
Optionally, after the augmented-reality-based device determines each object, a prompt message may be output to ask the user to confirm the object, and the object is then finalized according to the user's confirmation.
103h, determining the at least one plane from the at least one object.
It will be appreciated that, after the at least one object is determined based on the target input, the at least one plane may be determined from the at least one object as needed, which may be determined according to the actual situation and is not limited here.
Optionally, after step 103f or step 103h, the virtual object display method provided by the embodiment of the present disclosure may further include the following step 103i, or the following steps 103i and 103j.
103i, mapping the at least one plane to point cloud data in the real space respectively to obtain mapping information corresponding to the at least one plane.
The mapping information indicates which part of the point cloud data represents each plane.
It can be appreciated that, after the at least one plane is determined, the position information of the at least one plane may be mapped into the point cloud data to obtain the position information of each plane within the point cloud data, i.e., the mapping information corresponding to the at least one plane. This makes it convenient to obtain the position of the target plane in the interaction space (the target position) based on the point cloud data. By mapping the at least one plane into the point cloud data, the relative position of each plane in the interaction space is obtained; since this relative position does not change when the interaction space changes, the target plane need not be recognized again before the target virtual object is displayed at the target position again, which facilitates display of the virtual object and can improve display efficiency.
103j, saving the mapping information.
It can be understood that the mapping information is stored with the point cloud data, which makes it convenient to re-identify planes or objects in the point cloud data later and can improve identification efficiency.
Similarly, after step 103g, the at least one object may be mapped into the point cloud data of the real space to obtain mapping information corresponding to the at least one object, and that mapping information is then saved.
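The following sketch illustrates steps 103i and 103j under simple assumptions: each detected plane is recorded together with the point cloud indices that form it, and the mapping is persisted. The JSON layout and helper names are assumptions made for the example.

```python
# Sketch of mapping planes into the point cloud data (103i) and saving the
# mapping information (103j); the storage format is an assumption.
import json

def build_mapping_info(planes):
    """planes: list of (plane_model, inlier_indices) pairs, e.g. from RANSAC."""
    return [{"plane_id": i,
             "model": [float(v) for v in model],        # (a, b, c, d) coefficients
             "point_indices": [int(j) for j in inliers]}
            for i, (model, inliers) in enumerate(planes)]

def save_mapping_info(mapping_info, path="plane_mapping.json"):
    with open(path, "w") as f:
        json.dump(mapping_info, f)
```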
Optionally, the name and type of a plane and/or object specified by the user can be labeled, such as "office desktop 1" or "bedroom wall 2", which facilitates further optimization of the interactive experience by the backend and by developers.
According to the embodiment of the present disclosure, the type of object plane to which a virtual object should be attached when displayed can be determined according to the object type to which the virtual object belongs; the plane of the corresponding object in the real space is then determined, the position of that plane is determined, and the virtual object is finally displayed at that position. This achieves the effect of attaching the virtual object to the object, gives the user real tactile feedback when triggering the virtual object, and improves the safety and comfort of the user's operation.
Alternatively, as shown in fig. 3 in conjunction with fig. 2, the above step 104 may be implemented specifically by the following steps 104a and 104 b.
104a, determining the target object type to which the target virtual object belongs.
Various virtual objects may be classified in advance, with preset classification rules set according to purpose, function, and the like. When the target virtual object needs to be displayed, it is classified according to the preset classification rules to determine the target object type to which it belongs. This may be determined according to the actual situation and is not limited here.
104b, determining a plane matched with the target object type in the at least one plane as the target plane according to a preset mapping relation.
The preset mapping relation is used for indicating at least one preset object type and at least one plane matched with each preset object type.
Optionally, the preset mapping relationship includes at least one preset object type and at least one preset plane information matched with each preset object type, where the preset plane information is a preset plane type or a preset plane, and the preset plane type is used for indicating a class of planes.
It may be understood that the preset mapping relationship may include at least one preset object type and at least one preset plane matching each preset object type; or the at least one preset object type and at least one preset plane type matching each preset object type; or the at least one preset object type and at least one object type matching each preset object type, where each object type corresponds to a plane; or another mapping relationship. The specific preset mapping relationship may be determined according to the actual situation and is not limited here. In this way, various preset mapping relationships are provided in the embodiment of the present disclosure, increasing their flexibility and diversity.
In the embodiment of the disclosure, the plane matched with the virtual object is determined according to the type of the virtual object, and then the virtual object is displayed at the position of the plane, so that the user operation can be facilitated, and the user experience is improved.
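As a concrete illustration, a preset mapping relationship of the first kind (object type to preset plane types) might look like the sketch below; the entries are assumptions made for the example, since the disclosure leaves the concrete relationship to the actual situation.

```python
# Sketch of steps 104a-104b under the stated assumptions: classify the
# virtual object, then pick a detected plane whose type the mapping allows.
PRESET_MAPPING = {
    "virtual_keyboard": ["desktop", "horizontal"],   # preset plane types
    "meeting_reminder": ["wall", "vertical"],
    "seat_marker": ["seat_surface"],
}

def match_target_plane(target_object_type, detected_planes):
    """detected_planes: list of (plane, plane_type) pairs from step 103."""
    wanted = PRESET_MAPPING.get(target_object_type, [])
    for plane, plane_type in detected_planes:
        if plane_type in wanted:
            return plane                             # the target plane
    return None                                      # no matching plane found
```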
Optionally, the preset plane information is a preset plane type, the at least one piece of target preset plane information is at least one target preset plane type, and the preset mapping relationship includes at least one preset object type and at least one preset plane type matched with each preset object type, where each preset plane type indicates a class of planes. In the embodiment of the present disclosure, "at least one preset plane type matched with each preset object type" means that the preset plane types matched with each preset object type include one or more preset plane types, which may be determined according to the actual situation and is not limited here.
Illustratively, the step 104b may be implemented by the following steps 104b1 and 104b 2.
104b1, determining at least one target preset plane type matched with the target object type according to the preset mapping relation. Wherein the at least one target preset plane type is one or more preset plane types.
Plane types may be divided according to the direction of the plane, such as vertical or horizontal; according to the object to which the plane belongs, such as wall, ground, or desktop; or in other ways, which may be determined according to the actual situation and is not limited here.
104b2, determining the target plane from the at least one plane based on the at least one target preset plane type.
Wherein the plane type to which the target plane belongs is one of the at least one target preset plane types.
In the embodiment of the present disclosure, the preset mapping relationship includes the at least one preset object type and at least one preset plane type matched with each preset object type. Accordingly, the target preset object type matched with the target object type is determined according to the preset mapping relationship, the at least one preset plane type corresponding to that target preset object type is taken as the at least one target preset plane type, and a plane among the at least one plane whose plane type is one of the at least one target preset plane type is determined as the target plane. In this way, a plane more convenient for the user to operate can be determined, and the user experience can be improved.
Optionally, when the preset mapping relationship includes the at least one preset object type and at least one preset plane type matched with each preset object type, the preset mapping relationship further includes a matching degree between each preset object type and each corresponding preset plane type.
It can be understood that the preset mapping relationship further includes the matching degree between each preset object type and each preset plane type matched with it. For example, suppose the at least one preset object type is preset object type 1, preset object type 2, and preset object type 3; the preset mapping relationship then further includes the matching degree between preset object type 1 and each preset plane type matched with preset object type 1, the matching degree between preset object type 2 and each preset plane type matched with preset object type 2, and the matching degree between preset object type 3 and each preset plane type matched with preset object type 3.
Illustratively, the step 104b1 may be implemented by the following step 104b1a, and the step 104b2 may be implemented by the following step 104b2 a.
104b1a, determining at least one target preset plane type matched with the target object type according to the preset mapping relation, and determining the matching degree of the target object type and each target preset plane type.
The matching degree between the target object type and each target preset plane type can be factory set, or can be customized by a user according to requirements, specifically can be determined according to actual conditions, and is not limited herein.
104b2a, determining the target plane from the at least one plane based on the at least one target preset plane type and the degree of matching of the target object type with each of the target preset plane types.
The plane type to which the target plane belongs is the type, among the at least one target preset plane type, with the highest matching degree with the target object type.
In the embodiment of the present disclosure, the target preset object type matched with the target object type is determined according to the preset mapping relationship, the at least one preset plane type corresponding to it is taken as the at least one target preset plane type, and, among the at least one plane, the plane whose plane type has the highest matching degree with the target object type is determined as the target plane. A target preset plane type with a higher matching degree with the target object type is a plane type more convenient for the user to operate; therefore, a plane more convenient for the user to operate can be determined, and the user experience can be improved.
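A sketch of this matching-degree variant is given below; the object types, plane types, and scores are assumptions made for the example.

```python
# Sketch of steps 104b1a-104b2a under the stated assumptions: choose the
# detected plane whose plane type has the highest matching degree with the
# target object type.
PRESET_MAPPING_WITH_DEGREE = {
    "virtual_keyboard": {"desktop": 0.9, "horizontal": 0.6},
    "meeting_reminder": {"wall": 0.9, "vertical": 0.7},
}

def match_by_degree(target_object_type, detected_planes):
    """detected_planes: list of (plane, plane_type) pairs from step 103."""
    degrees = PRESET_MAPPING_WITH_DEGREE.get(target_object_type, {})
    best_plane, best_score = None, -1.0
    for plane, plane_type in detected_planes:
        score = degrees.get(plane_type, -1.0)
        if score > best_score:
            best_plane, best_score = plane, score
    return best_plane if best_score >= 0 else None
```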
Optionally, the preset plane information is a preset plane, the at least one piece of target preset plane information is at least one target preset plane, and the preset mapping relationship includes the at least one preset object type and at least one preset plane matched with each preset object type. In the embodiment of the present disclosure, "at least one preset plane matched with each preset object type" means that the preset planes matched with each preset object type include one or more preset planes, which may be determined according to the actual situation and is not limited here.
Illustratively, the step 104b may be implemented by the following steps 104b3 and 104b 4.
104b3, determining at least one target preset plane matched with the target object type according to the preset mapping relation.
Wherein the at least one target preset plane is one or more preset planes.
104b4, determining the target plane from the at least one plane based on the at least one target preset plane.
Wherein the target plane is one of the at least one target preset plane.
In the embodiment of the present disclosure, the preset mapping relationship includes the at least one preset object type and the preset planes matched with each preset object type. Accordingly, the target preset object type matched with the target object type is determined according to the preset mapping relationship, the at least one preset plane corresponding to it is taken as the at least one target preset plane, and a plane belonging both to the at least one plane and to the at least one target preset plane is determined as the target plane. In this way, a plane more convenient for the user to operate can be determined, and the user experience can be improved.
Optionally, when the preset mapping relationship includes the at least one preset object type and a preset plane matched with each preset object type, the preset mapping relationship further includes a matching degree between each preset object type and each corresponding preset plane.
It can be understood that the preset mapping relationship further includes the matching degree between each preset object type and each preset plane matched with it. For example, suppose the at least one preset object type is preset object type 1, preset object type 2, and preset object type 3; the preset mapping relationship then further includes the matching degree between preset object type 1 and each preset plane matched with preset object type 1, the matching degree between preset object type 2 and each preset plane matched with preset object type 2, and the matching degree between preset object type 3 and each preset plane matched with preset object type 3.
Illustratively, the step 104b3 may be implemented by the following step 104b3a, and the step 104b4 may be implemented by the following step 104b4 a.
104b3a, determining at least one target preset plane matched with the target object type according to the preset mapping relation, and determining the matching degree of the target object type and each target preset plane.
The matching degree between the target object type and each target preset plane can be factory set, or can be customized by a user according to requirements, specifically can be determined according to actual conditions, and is not limited herein.
104b4a, determining the target plane from the at least one target preset plane based on the at least one target preset plane and the degree of matching of the target object type with each target preset plane.
The target plane is the target preset plane with the highest matching degree with the target object type.
In the embodiment of the present disclosure, the target preset object type matched with the target object type is determined according to the preset mapping relationship, the at least one preset plane corresponding to it is taken as the at least one target preset plane, and, among the planes belonging both to the at least one plane and to the at least one target preset plane, the plane with the highest matching degree with the target object type is determined as the target plane. A target preset plane with a higher matching degree with the target object type is a plane more convenient for the user to operate; therefore, a plane more convenient for the user to operate can be determined, and the user experience can be improved.
For example, the target virtual object may be a virtual keyboard and the target plane a desktop. Displaying the virtual keyboard at the position of the desktop achieves the effect of attaching the virtual keyboard to the desktop, equivalent to placing a real keyboard on a real desktop. When the user triggers the virtual keyboard, the user feels the real sensation of tapping the desktop, and that sensation confirms that the triggering operation on the virtual keyboard succeeded, so the operation efficiency can be improved.
As another example, the target virtual object may be a meeting reminder and the target plane a wall. Displaying the meeting reminder at the position of the wall achieves the effect of attaching the reminder to the wall, similar to sticking a meeting reminder note on an office wall; when the user no longer needs the reminder, its display can be canceled through a triggering operation. Displaying the meeting reminder at the position of the wall thus facilitates the user's operation.
Optionally, the interaction space includes a virtual space, and the virtual object display method provided in the embodiment of the present disclosure may further include step 105 described below.
105. And rendering the target object based on the position information of the target object in the interaction space.
It can be understood that the target object can be rendered based on its position information, so that a virtual model of the target object is displayed in the interactive environment. In this way, a virtual model of a real object in the real space can be shown to the user in the virtual space without perspective technologies such as see-through or passthrough, and when the user needs to use the real object, it can be found in time, which can improve the user experience.
The target object may be rendered with its real appearance (skin) or with another skin, which may be determined according to the actual situation and is not limited here.
For example, when the target object is a seat and the target virtual object is a seat identifier, rendering the seat in the VR virtual environment displays a virtual model of the seat, making it convenient for the user to find the real seat and sit down, which can improve the user experience.
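The sketch below illustrates step 105 with a stand-in scene structure; no real engine API is implied, and all names are assumptions made for the example.

```python
# Sketch of step 105 under the stated assumptions: place a virtual model of
# the real target object at its interaction-space position so the user can
# find the real object from inside the virtual space.
from dataclasses import dataclass, field

@dataclass
class SceneNode:
    name: str
    position: tuple = (0.0, 0.0, 0.0)    # interaction-space coordinates
    skin: str = "real"                   # render with the object's real skin

@dataclass
class Scene:
    nodes: list = field(default_factory=list)

    def render_target_object(self, name, position, skin="real"):
        node = SceneNode(name=name, position=position, skin=skin)
        self.nodes.append(node)
        return node

scene = Scene()
seat = scene.render_target_object("seat", position=(0.5, 0.0, 1.0))
```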
The embodiments of the present disclosure can add interaction feedback (such as mixed-reality interaction) when the user wears an augmented-reality-based device, and can improve the safety and comfort of interaction.
Fig. 4 is a block diagram of a virtual object display apparatus according to an embodiment of the present disclosure. As shown in Fig. 4, the apparatus includes a determining module 401 and a display module 402. The determining module 401 is configured to determine, in an interaction space based on augmented reality technology, a target position at which a target plane is located, where the target plane is a plane of a target object in the real space corresponding to the interaction space; the display module 402 is configured to display a target virtual object at the target position in the interaction space.
Optionally, the determining module 401 is further configured to determine at least one plane in the real space, where the at least one plane is a plane of at least one object in the real space, before determining the target position where the target plane is located; the target plane is determined from the at least one plane.
Optionally, the determining module 401 is specifically configured to obtain point cloud data of the real space; identifying the at least one plane based on the point cloud data; alternatively, the at least one object is identified based on the point cloud data, and the at least one plane is determined from the at least one object.
Optionally, the determining module 401 is specifically configured to receive a target input in the interaction space, where the target input is a gesture input or a peripheral (e.g. a handle) input; determining the at least one plane in response to the target input; alternatively, the at least one object is determined in response to the target input, and the at least one plane is determined from the at least one object.
Optionally, the device further comprises a mapping module and a saving module. The mapping module is configured to, after the at least one plane in the real space is determined, map the at least one plane into the point cloud data of the real space, respectively, to obtain mapping information corresponding to the at least one plane; the saving module is configured to save the mapping information.
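As an illustrative sketch only, the mapping information could, for example, associate each plane with the indices of the point-cloud points that support it; the JSON storage format and all function names below are assumptions, not prescribed by the disclosure.

```python
import json
import numpy as np

def build_plane_mapping(planes, points: np.ndarray, tol: float = 0.02):
    """Map each plane (given as a (normal, d) pair) to the indices of the
    point-cloud points lying within `tol` of it."""
    mapping = []
    for plane_id, (normal, d) in enumerate(planes):
        dist = np.abs(points @ np.asarray(normal) + d)
        mapping.append({
            "plane_id": plane_id,
            "normal": [float(x) for x in normal],
            "offset": float(d),
            "point_indices": np.flatnonzero(dist < tol).tolist(),
        })
    return mapping

def save_mapping(mapping, path: str = "plane_mapping.json"):
    # Persisting as JSON is just one possible way to save the mapping information.
    with open(path, "w") as f:
        json.dump(mapping, f)
```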
Optionally, the determining module 401 is specifically configured to determine a target object type to which the target virtual object belongs, and to determine, according to a preset mapping relationship, a plane in the at least one plane that matches the target object type as the target plane.
Optionally, the preset mapping relationship includes at least one preset object type and at least one preset plane information matched with each preset object type, where the preset plane information is a preset plane type or a preset plane, and the preset plane type is used for indicating a class of planes.
Optionally, the determining module 401 is specifically configured to determine, according to the preset mapping relationship, at least one target preset plane information matched with the target object type, and to determine the target plane from the at least one plane based on the at least one target preset plane information. When the at least one target preset plane information is at least one target preset plane type, the plane type to which the target plane belongs is one of the at least one target preset plane type; when the at least one target preset plane information is at least one target preset plane, the target plane is one of the at least one target preset plane.
Optionally, the preset mapping relationship further includes a matching degree between each preset object type and each corresponding preset plane information. The determining module 401 is specifically configured to determine, according to the preset mapping relationship, at least one target preset plane information matched with the target object type and the matching degree between the target object type and each target preset plane information; and to determine the target plane from the at least one plane based on the at least one target preset plane information and the matching degrees. When the at least one target preset plane information is the at least one target preset plane type, the plane type to which the target plane belongs is the one of the at least one target preset plane type that has the highest matching degree with the target object type; when the at least one target preset plane information is the at least one target preset plane, the target plane is the one of the at least one target preset plane that has the highest matching degree with the target object type.
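To make the matching-degree selection concrete, the sketch below shows one possible encoding of the preset mapping relationship and the highest-matching-degree lookup, and could serve as the select_plane strategy in the device sketch above; the object types, plane types, and matching-degree values are hypothetical examples.

```python
# Hypothetical preset mapping: object type -> {preset plane type: matching degree}
PRESET_MAPPING = {
    "meeting_reminder": {"wall": 0.9, "door": 0.6},
    "seat_identifier":  {"floor": 0.8, "seat_surface": 0.95},
}

def select_target_plane(target_object_type: str, detected_planes):
    """Among detected (plane, plane_type) pairs, pick the plane whose plane
    type has the highest matching degree with the target object type."""
    scores = PRESET_MAPPING.get(target_object_type, {})
    candidates = [(scores[ptype], plane)
                  for plane, ptype in detected_planes if ptype in scores]
    if not candidates:
        return None  # no plane of a matching type was detected
    return max(candidates, key=lambda c: c[0])[1]
```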
Optionally, the interaction space comprises a virtual space, and the device further comprises a rendering module; the rendering module is configured to render the target object in the interaction space based on the position information of the target object.
In the embodiments of the present disclosure, each module may implement the virtual object display method provided in the method embodiments and achieve the same technical effects; to avoid repetition, the details are not described here again.
Fig. 5 is a schematic structural diagram of an electronic device provided in an embodiment of the present disclosure, which is used to exemplarily illustrate an electronic device implementing any virtual object display method in an embodiment of the present disclosure, and should not be construed as specifically limiting the embodiment of the present disclosure.
As shown in fig. 5, the electronic device 500 may include a processor (e.g., a central processing unit, a graphics processor, etc.) 501 that may perform various suitable actions and processes according to a program stored in a Read Only Memory (ROM) 502 or a program loaded from a storage device 508 into a Random Access Memory (RAM) 503. In the RAM 503, various programs and data required for the operation of the electronic device 500 are also stored. The processor 501, the ROM 502, and the RAM 503 are connected to each other through a bus 504. An input/output (I/O) interface 505 is also connected to the bus 504.
In general, the following devices may be connected to the I/O interface 505: input devices 506 including, for example, a touch screen, touchpad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; an output device 507 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; storage 508 including, for example, magnetic tape, hard disk, etc.; and communication means 509. The communication means 509 may allow the electronic device 500 to communicate with other devices wirelessly or by wire to exchange data. While an electronic device 500 having various means is shown, it is to be understood that not all of the illustrated means are required to be implemented or provided. More or fewer devices may be implemented or provided instead.
In particular, according to embodiments of the present disclosure, the processes described above with reference to flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a non-transitory computer readable medium, the computer program comprising program code for performing the method shown in the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network via the communication means 509, or from the storage means 508, or from the ROM 502. The functions defined in any virtual object display method provided by the embodiments of the present disclosure may be performed when the computer program is executed by the processor 501.
It should be noted that the computer readable medium described in the present disclosure may be a computer readable signal medium or a computer readable storage medium, or any combination of the two. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this disclosure, a computer-readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device. In the present disclosure, a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, with computer-readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, an electromagnetic signal, an optical signal, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, fiber optic cables, RF (radio frequency), and the like, or any suitable combination of the foregoing.
In some implementations, the client and the server may communicate using any currently known or future developed network protocol, such as HTTP (HyperText Transfer Protocol), and may be interconnected with any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), an internetwork (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future developed networks.
The computer readable medium may be contained in the electronic device; or may exist alone without being incorporated into the electronic device.
The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: determining a target position of a target plane in an interaction space based on an augmented reality technology, wherein the target plane is a plane of a target object in a real space corresponding to the interaction space; a target virtual object is displayed at the target location in the interaction space.
In an embodiment of the present disclosure, computer program code for performing the operations of the present disclosure may be written in one or more programming languages or a combination thereof, including but not limited to object-oriented programming languages such as Java, Smalltalk, and C++, and conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the computer, partly on the computer, as a stand-alone software package, partly on the computer and partly on a remote computer, or entirely on a remote computer or server. In the case of a remote computer, the remote computer may be connected to the computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units involved in the embodiments of the present disclosure may be implemented by means of software, or may be implemented by means of hardware. Wherein the names of the units do not constitute a limitation of the units themselves in some cases.
The functions described above herein may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: a Field Programmable Gate Array (FPGA), an Application Specific Integrated Circuit (ASIC), an Application Specific Standard Product (ASSP), a system on a chip (SOC), a Complex Programmable Logic Device (CPLD), and the like.
In the context of this disclosure, a computer-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. The computer readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a computer-readable storage medium would include one or more wire-based electrical connections, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The foregoing description is only of the preferred embodiments of the present disclosure and of the technical principles employed. It will be appreciated by persons skilled in the art that the scope of the disclosure is not limited to technical solutions formed by the specific combinations of features described above, but also covers other technical solutions formed by any combination of the above features or their equivalents without departing from the concept of the disclosure, for example, a technical solution formed by replacing the above features with (but not limited to) technical features having similar functions disclosed in the present disclosure.
Moreover, although operations are depicted in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order. In certain circumstances, multitasking and parallel processing may be advantageous. Likewise, while several specific implementation details are included in the above discussion, these should not be construed as limiting the scope of the present disclosure. Certain features that are described in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are example forms of implementing the claims.

Claims (14)

1. A virtual object display method, the method comprising:
determining a target position of a target plane in an interaction space based on an augmented reality technology, wherein the target plane is a plane of a target object in a real space corresponding to the interaction space;
and displaying a target virtual object at the target position in the interaction space.
2. The method of claim 1, wherein before determining the target position where the target plane is located, the method further comprises:
determining at least one plane in the real space, wherein the at least one plane is a plane of at least one object in the real space;
and determining the target plane from the at least one plane.
3. The method of claim 2, wherein the determining at least one plane in the real space comprises:
acquiring point cloud data of the real space;
identifying the at least one plane based on the point cloud data;
or,
identifying the at least one object based on the point cloud data, and determining the at least one plane from the at least one object.
4. The method of claim 2, wherein the determining at least one plane in the real space comprises:
receiving a target input in the interaction space, wherein the target input is gesture input or peripheral input;
determining the at least one plane in response to the target input;
or,
determining the at least one object in response to the target input, and determining the at least one plane from the at least one object.
5. The method of claim 4, wherein after the determining of the at least one plane in the real space, the method further comprises:
mapping the at least one plane to point cloud data of the real space respectively to obtain mapping information corresponding to the at least one plane;
and storing the mapping information.
6. The method of claim 2, wherein said determining the target plane from the at least one plane comprises:
determining a target object type to which the target virtual object belongs;
and determining a plane matched with the target object type in the at least one plane as the target plane according to a preset mapping relation.
7. The method of claim 6, wherein the preset mapping relationship includes at least one preset object type and at least one preset plane information matched with each preset object type, the preset plane information being a preset plane type or a preset plane, the preset plane type being used to indicate a class of planes.
8. The method of claim 7, wherein
the determining, according to a preset mapping relationship, a plane matched with the target object type in the at least one plane as the target plane includes:
determining at least one target preset plane information matched with the target object type according to the preset mapping relation;
determining the target plane from the at least one plane based on the at least one target preset plane information;
when the at least one target preset plane information is at least one target preset plane type, the plane type to which the target plane belongs is one of the at least one target preset plane type; when the at least one target preset plane information is at least one target preset plane, the target plane is one of the at least one target preset plane.
9. The method of claim 8, wherein the preset mapping relationship further includes a degree of matching between each preset object type and each corresponding preset plane information;
the determining, according to the preset mapping relationship, at least one target preset plane information matched with the target object type includes:
determining at least one target preset plane information matched with the target object type and the matching degree of the target object type and each target preset plane information according to the preset mapping relation;
the determining the target plane from the at least one plane based on the at least one target preset plane information includes:
determining the target plane from the at least one plane based on the at least one target preset plane information and the matching degree of the target object type and each target preset plane information;
when the at least one target preset plane information is the at least one target preset plane type, the plane type to which the target plane belongs is the one of the at least one target preset plane type that has the highest matching degree with the target object type; when the at least one target preset plane information is the at least one target preset plane, the target plane is the one of the at least one target preset plane that has the highest matching degree with the target object type.
10. The method of claim 1, wherein the interaction space comprises a virtual space, the method further comprising:
and rendering the target object in the interaction space based on the position information of the target object.
11. A virtual object display device, comprising: a determining module and a display module;
the determining module is used for determining a target position of a target plane in an interaction space based on an augmented reality technology, wherein the target plane is a plane of a target object in a real space corresponding to the interaction space;
and the display module is used for displaying a target virtual object on the target position of the interaction space.
12. An electronic device, comprising: a memory and a processor, the memory for storing a computer program; the processor is configured to execute the virtual object display method of any one of claims 1 to 10 when the computer program is invoked.
13. A computer-readable storage medium, on which a computer program is stored, which when executed by a processor implements the virtual object display method of any one of claims 1 to 10.
14. A computer program product, characterized in that the computer program product has stored thereon a computer program which, when executed by a processor, implements the virtual object display method of any of claims 1 to 10.
CN202210945256.4A 2022-08-08 2022-08-08 Virtual object display method, device, electronic equipment, storage medium and program product Pending CN117572992A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210945256.4A CN117572992A (en) 2022-08-08 2022-08-08 Virtual object display method, device, electronic equipment, storage medium and program product

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210945256.4A CN117572992A (en) 2022-08-08 2022-08-08 Virtual object display method, device, electronic equipment, storage medium and program product

Publications (1)

Publication Number Publication Date
CN117572992A true CN117572992A (en) 2024-02-20

Family

ID=89861217

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210945256.4A Pending CN117572992A (en) 2022-08-08 2022-08-08 Virtual object display method, device, electronic equipment, storage medium and program product

Country Status (1)

Country Link
CN (1) CN117572992A (en)

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination